[R] New version of R-WinEdt (for Windows)

2003-10-28 Thread Uwe Ligges
There was a user request to announce this new version (1.6-0) of
R-WinEdt (actually, the request was to announce version 1.5-1). It is
propagating through CRAN these days and is, or will soon be, available at
 yourCRANmirror/contrib/extra/winedt/

For those who have not already noticed the changes since R-WinEdt 1.4-x,
I'd like to summarize them below. The most exciting one is the new
automatic installation procedure (since version 1.5-0, R-WinEdt installs
like any other R package on Windows, so knowledge of Windows *.bat files,
shortcuts, etc. is no longer required). There are also changes to syntax
highlighting and automatic detection of the window mode RGui is running in.
Some changes are due to user requests; others I proposed in the
corresponding paper in the DSC 2001 proceedings (not all are
implemented). I'd like to thank all the users who helped to improve the
Plug-In with their bug reports, feature requests, or "simply" some
questions on its usage.

Changes in RWinEdt_1.6-0:
-
- Detects whether RGui has been started in --sdi or --mdi mode
- Added features to move (indent) blocks of code, and insert
  comments (#) blockwise, as known from the regular WinEdt mode
  for LaTeX et al. editing
- Added version control system that knows when the R-WinEdt
  system has to be updated.
- R-WinEdt no longer sets WinEdt as the default pager.
- Fixed problem that startWinEdt() started WinEdt in its "-V"
  mode.
Changes in RWinEdt_1.5-1:
-
- Index of (exported) object names updated (R-1.8.0 related).
- Syntax highlighting
  - for Namespace operators (::, :::) added (R-1.8.0 related).
  - for `backtick names` added (R-1.8.0 related).
  - for 'single quotes' added.
  - for old assignment operator "_" removed (R-1.8.0 related).
  - for "double quotes' fixed.
  - for assignment operator "=" fixed.
  - for comparisons (e.g. "==") fixed.
- Added documentation related to the broken automatic installation of
  the SWinRegistry package (a fix does not seem imminent).
- SWinRegistry is now loaded more carefully, using require().
Changes in RWinEdt_1.5-0:
-
- Now providing R-WinEdt as a package called RWinEdt:
  - introducing a new installation procedure: installed like an R
    package, loaded like an R package (via library())
  - requires the Omegahat package SWinRegistry
  - providing an R menu to call R-WinEdt
- minor fixes for Windows XP timings
- minor fixes for Syntax highlighting ("=", "_", ...)
Uwe Ligges

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] random number generation

2003-10-28 Thread nmi13
Hi everyone,

I am trying to generate a normally distributed random variable with the 
following descriptive statistics,

min=1, max=99, variance=125, mean=38.32, 1st quartile=38, median=40, 3rd 
quartile=40, skewness=-0.274.

I know that rnorm() will allow me to simulate random numbers with mean 38.32
and SD = 11.18 (sqrt(125)), but I need the data I generate to have all of the
descriptive statistics mentioned above.

I would be thankful to anyone who can help me with this problem.

Regards
Murthy



Re: [R] formula parsing, using parts ...

2003-10-28 Thread Uwe Ligges
Russell Senior wrote:

I am writing a little abstraction for a series of tests.  For example,
I am running an anova and kruskal.test on a one-factor model.  That
isn't a particular problem, I have an interface like:
 my.function <- function(model, data) {
   print(deparse(substitute(data)))
   a <- anova(lm(formula, data))
   print(a)
   if (a$"Pr(>F)"[1] < 0.05) {
     pairwise.t.test(???)
   }
   b <- kruskal.test(formula, data)
   print(b)
   if ...
 }
I want to run each test, then depending on the resulting p-value, run
pairwise tests.  I am getting into trouble where I put the ??? above.
The pairwise.t.test has a different interface, that seems to want me
to dismember the formula into constituent parts to feed in.  The other
alternative is to give my.function the constituent parts and let it
build the model.  I haven't figured out how to do either one.  Can
someone give me some pointers?
See ?formula and its "See Also" section for how to do formula 
manipulation. There is also an example of how to construct a formula.
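To make that concrete, here is a small sketch of going both directions, building a formula from strings and taking one apart again (the object names are illustrative):

```r
## Build a formula from its parts ...
response  <- "Ozone"
predictor <- "Month"
fm <- as.formula(paste(response, "~", predictor))   # Ozone ~ Month

## ... and recover the parts from the formula again:
all.vars(fm)   # c("Ozone", "Month"), the constituent variable names
fm[[2]]        # the left-hand side (response), as a language object
fm[[3]]        # the right-hand side, as a language object
```

reformulate("Month", response = "Ozone") is an alternative way to build the same formula.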

Uwe Ligges



[R] Octave scale transformation

2003-10-28 Thread Dr Andrew Wilson
Is it possible to convert a data table in "R" to an octave scale (as
done, for example, in the MVSP multivariate stats program)?

I work with tables of word or category frequencies across a number of
texts or text segments, e.g.:

Token        sect_1 through sect_23 (counts per text section)
advance      0 0 0 0 0 1 0 0 0 0 4 0 0 0 2 0 0 0 0 0 0 0 0
aed          0 1 3 0 0 1 0 0 0 0 4 0 0 0 0 4 2 3 0 0 0 1 1
agree        0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0
antibiotics  0 0 0 0 0 0 0 0 0 0 0 3 1 0 0 0 0 0 0 1 0 0 0

However, the texts/segments are typically of different lengths and the
analysis program doesn't calculate proportional frequencies.  (NB: It also
doesn't select *all* words in the texts, so it is not possible to
calculate true percentages "after the fact".) 

What I want to do is to transform the data before calculating distances
and carrying out clustering or multidimensional scaling, so that the
differences in text/segment size don't (heavily) bias the results.
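If "octave scale" means the usual doubling (log2) frequency classes, a minimal sketch of such a transformation follows; note that this reading of MVSP's octave scale is an assumption, so check it against MVSP's documentation before relying on it:

```r
## Octave (doubling-class) transform: a count of 0 stays 0, and counts
## 1, 2-3, 4-7, 8-15, ... map to classes 1, 2, 3, 4, ...  (assumed
## definition of the octave scale)
octave <- function(x) ifelse(x > 0, floor(log2(x)) + 1, 0)

## A made-up 3 x 4 excerpt of a token-by-section count table:
counts <- matrix(c(0, 1, 3, 8,
                   0, 4, 4, 0,
                   2, 0, 0, 3), nrow = 3, byrow = TRUE)
octave(counts)          # compressed scale, same dimensions
dist(octave(counts))    # distances are less dominated by large counts
```

Because the transform is logarithmic, it damps (though does not remove) the effect of differing section lengths on the resulting distances.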

Many thanks,
Andrew Wilson



Re: [R] formula parsing, using parts ...

2003-10-28 Thread Russell Senior
> "Uwe" == Uwe Ligges <[EMAIL PROTECTED]> writes:

Russell> I am writing a little abstraction for a series of tests.  For
Russell> example, I am running an anova and kruskal.test on a
Russell> one-factor model.  That isn't a particular problem, I have an
Russell> interface like: my.function <- function(model,data) {
Russell> print(deparse(substitute(data))) a <- anova(lm(formula,data))
Russell> print(a) if(a$"Pr(>F)"[1] < 0.05) { pairwise.t.test(???)  } b
Russell> <- kruskal.test(formula,data) print(b) if ...  } I want to
Russell> run each test, then depending on the resulting p-value, run
Russell> pairwise tests.  I am getting into trouble where I put the
Russell> ??? above.  The pairwise.t.test has a different interface,
Russell> that seems to want me to dismember the formula into
Russell> constituent parts to feed in.  The other alternative is to
Russell> give my.function the constituent parts and let it build the
Russell> model.  I haven't figured out how to do either one.  Can
Russell> someone give me some pointers?

Uwe> See ?formula and its "See Also" Section on how to do formula
Uwe> manipulation. There's also an example on how to construct a
Uwe> formula.

In order to use the 'as.formula(paste(response," ~ ",factor))'
approach, response and factor seem to need to be strings (at least
they seem to if response is "log(x)" or the like).  Whereas, for
pairwise.t.test they need to be names.  What is the proper way to do
that?

-- 
Russell Senior ``I have nine fingers; you have ten.''
[EMAIL PROTECTED]



Re: [R] formula parsing, using parts ...

2003-10-28 Thread Russell Senior
> "Uwe" == Uwe Ligges <[EMAIL PROTECTED]> writes:

Russell> I am writing a little abstraction for a series of tests. For
Russell> example, I am running an anova and kruskal.test on a
Russell> one-factor model.  That isn't a particular problem, I have an
Russell> interface like: my.function <- function(model,data) {
Russell> print(deparse(substitute(data))) a <- anova(lm(formula,data))
Russell> print(a) if(a$"Pr(>F)"[1] < 0.05) { pairwise.t.test(???)  } b
Russell> <- kruskal.test(formula,data) print(b) if ...  } I want to
Russell> run each test, then depending on the resulting p-value, run
Russell> pairwise tests.  I am getting into trouble where I put the
Russell> ??? above.  The pairwise.t.test has a different interface,
Russell> that seems to want me to dismember the formula into
Russell> constituent parts to feed in.  The other alternative is to
Russell> give my.function the constituent parts and let it build the
Russell> model.  I haven't figured out how to do either one.  Can
Russell> someone give me some pointers?

Uwe> See ?formula and its "See Also" Section on how to do formula
Uwe> manipulation. There's also an example on how to construct a
Uwe> formula.

Russell> In order to use the 'as.formula(paste(response," ~
Russell> ",factor))' approach, response and factor seem to need to be
Russell> strings (at least they seem to if response is "log(x)" or the
Russell> like).  Whereas, for pairwise.t.test they need to be names.
Russell> What is the proper way to do that?


Uwe> In order to run pairwise.t.test() you can simply get() the values
Uwe> from objects:

Uwe> Let's change the example in ?pairwise.t.test:

Uwe>   data(airquality) 
Uwe>   attach(airquality) 
Uwe>   Month <- factor(Month, labels = month.abb[5:9]) 
Uwe>   x <- "Ozone" 
Uwe>   y <- "Month"
Uwe>   pairwise.t.test(get(x), get(y))

Suppose I want x to be "log(Ozone)"?  The get() function doesn't help
me there.

-- 
Russell Senior ``I have nine fingers; you have ten.''
[EMAIL PROTECTED]



Re: [R] formula parsing, using parts ...

2003-10-28 Thread Uwe Ligges
Russell Senior wrote:

"Uwe" == Uwe Ligges <[EMAIL PROTECTED]> writes:


Russell> I am writing a little abstraction for a series of tests. For
Russell> example, I am running an anova and kruskal.test on a
Russell> one-factor model.  That isn't a particular problem, I have an
Russell> interface like: my.function <- function(model,data) {
Russell> print(deparse(substitute(data))) a <- anova(lm(formula,data))
Russell> print(a) if(a$"Pr(>F)"[1] < 0.05) { pairwise.t.test(???)  } b
Russell> <- kruskal.test(formula,data) print(b) if ...  } I want to
Russell> run each test, then depending on the resulting p-value, run
Russell> pairwise tests.  I am getting into trouble where I put the
Russell> ??? above.  The pairwise.t.test has a different interface,
Russell> that seems to want me to dismember the formula into
Russell> constituent parts to feed in.  The other alternative is to
Russell> give my.function the constituent parts and let it build the
Russell> model.  I haven't figured out how to do either one.  Can
Russell> someone give me some pointers?
Uwe> See ?formula and its "See Also" Section on how to do formula
Uwe> manipulation. There's also an example on how to construct a
Uwe> formula.
Russell> In order to use the 'as.formula(paste(response," ~
Russell> ",factor))' approach, response and factor seem to need to be
Russell> strings (at least they seem to if response is "log(x)" or the
Russell> like).  Whereas, for pairwise.t.test they need to be names.
Russell> What is the proper way to do that?
Uwe> In order to run pairwise.t.test() you can simply get() the values
Uwe> from objects:
Uwe> Let's change the example in ?pairwise.t.test:

Uwe>   data(airquality) 
Uwe>   attach(airquality) 
Uwe>   Month <- factor(Month, labels = month.abb[5:9]) 
Uwe>   x <- "Ozone" 
Uwe>   y <- "Month"
Uwe>   pairwise.t.test(get(x), get(y))

Suppose I want x to be "log(Ozone)"?  The get() function doesn't help
me there.


 eval(parse(text=x))

Uwe Ligges
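A complete sketch of this idiom, continuing the airquality example from earlier in the thread:

```r
## eval(parse(text = ...)) evaluates an arbitrary expression held in a
## string, which get() cannot do for something like "log(Ozone)".
data(airquality)
attach(airquality)
Month <- factor(Month, labels = month.abb[5:9])
x <- "log(Ozone)"
y <- "Month"
pairwise.t.test(eval(parse(text = x)), eval(parse(text = y)))
detach(airquality)
```

Note that the parsed expression is evaluated in the current environment, so the variables it mentions (here Ozone, via the attached data frame) must be visible there.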



Re: [R] formula parsing, using parts ...

2003-10-28 Thread Russell Senior
> "Uwe" == Uwe Ligges <[EMAIL PROTECTED]> writes:

Russell> Suppose I want x to be "log(Ozone)"?  The get() function
Russell> doesn't help me there.

Uwe>   eval(parse(text=x))

Ah, that seems to have done it.  Thanks!

-- 
Russell Senior ``I have nine fingers; you have ten.''
[EMAIL PROTECTED]



[R] ESS windows/linux

2003-10-28 Thread Christian Schulz
Ahead of my long-standing intention to migrate to Linux, I am first
trying XEmacs and ESS on Windows. I installed everything and get the
Rd menu in XEmacs, but when I want to "Eval a region" of R code, I get
the message "No ESS Processes running" in the window below!

P.S.
Has anybody written a document like "Using R-Project - from
Windows2Linux" that describes the differences in one small paper? If
not, perhaps I will summarize my experience in the next few weeks.

Don't misunderstand: it should not replace the FAQs, but it could
perhaps speed up migration from Windows to Linux when one's Linux
skills are at a beginner level.

Many thanks and regards,
Christian



RE: [R] ESS windows/linux

2003-10-28 Thread Pfaff, Bernhard
> In front of my long intention
> migrate to linux i'm trying Xemacs 
> and ESS for first in Windows.
> Install all and get the Rd-Menu
> in Xemacs, but if i want to "Eval a region"
> of R-Code i get the message
> in the below window "No ESS Processes running"!

Have you started an R process in the first place? 

M-x R

Secondly, have you set the path in 'ess-site.el' correctly?

HTH,
Bernhard


> 
> P.S.
> Have anybody written a document like
> "Using R-Project - from Windows2Linux" 
> which describe the differencies in one
> small paper. If not, perhaps i summarize
> in next weeks my experience.
> 
> Not misunderstand it should not 
> replace the faq's, but perhaps  
> speed up migration from windows
> to linux when the linux skills at
> a beginner level.
> 
> many thanks and 
> regards,christian
> 
> 






AW: [R] ESS windows/linux

2003-10-28 Thread Christian Schulz
Many thanks, now it works!
Christian


-----Original Message-----
From: Pfaff, Bernhard [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 28 October 2003 12:43
To: 'Christian Schulz'; [EMAIL PROTECTED]
Subject: RE: [R] ESS windows/linux


> In front of my long intention
> migrate to linux i'm trying Xemacs
> and ESS for first in Windows.
> Install all and get the Rd-Menu
> in Xemacs, but if i want to "Eval a region"
> of R-Code i get the message
> in the below window "No ESS Processes running"!

Have you started an R process in the first place?

M-x R

Secondly, have you set the path in 'ess-site.el' correctly?

HTH,
Bernhard


>
> P.S.
> Have anybody written a document like
> "Using R-Project - from Windows2Linux"
> which describe the differencies in one
> small paper. If not, perhaps i summarize
> in next weeks my experience.
>
> Not misunderstand it should not
> replace the faq's, but perhaps
> speed up migration from windows
> to linux when the linux skills at
> a beginner level.
>
> many thanks and
> regards,christian
>
>









[R] random number generation

2003-10-28 Thread nmi13
Hi everyone,

I am trying to generate a random variable with the following descriptive 
statistics,

min=1, max=99, variance=125, mean=38.32, 1st quartile=38, median=40, 3rd
quartile=40, skewness=-0.274.

I tried rgamma, and since I cannot use rnorm, can anyone please suggest
a distribution that would give me the negative skewness? I need the data
I generate to have the descriptive statistics mentioned above.

I would be thankful to anyone who can help me with this problem.

Regards
Murthy



Re: [R] \mathcal symbols in R?

2003-10-28 Thread Brett Presnell

Paul Murrell writes:
> 
> Michael Grottke wrote:
> > 
> > Some time ago, I discovered the possibility of using mathematical 
> > symbols for axis labels etc. In order to ensure consistency between text 
> > and graphics of some paper, I would like to include the calligraphic H 
> > (obtained in LaTeX via \mathcal{H}) in several diagrams. Is there any 
> > way to do so? Is it in general possible to use further mathematical 
> > fonts like \mathbb and \mathbf in R?

If you really want to match the LaTeX fonts exactly, you might want to
try psfrag.sty.  I used to use it regularly when putting math in S
figures was more trouble than it is now with R, and it worked very
well.  My approach was to use "meaningful" text strings in the S
graphic, so that the graphic was useful on its own (outside of the
LaTeX document), and then substitute for those strings using psfrag in
the LaTeX document.
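For reference, a sketch of the psfrag substitution itself (the tag "Hcal" and the file name are illustrative; psfrag requires the latex + dvips route, not pdflatex):

```latex
% The R/S figure contains the plain text tag "Hcal", e.g. produced by
% title(ylab = "Hcal"); psfrag swaps it for real LaTeX at dvips time.
\usepackage{psfrag}
% ...
\begin{figure}
  \psfrag{Hcal}{$\mathcal{H}$}  % replace the tag with calligraphic H
  \includegraphics{myfigure.eps}
\end{figure}
```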

-- 
Brett Presnell
Department of Statistics
University of Florida
http://www.stat.ufl.edu/~presnell/

"We don't think that the popularity of an error makes it the truth."
   -- Richard Stallman



Re: [R] random number generation

2003-10-28 Thread Pascal A. Niklaus
You need to know the exact distribution of the random numbers you want
to generate. For rnorm, you do not just specify the mean and the
variance; you also implicitly state that the data are normally
distributed. Likewise, it is not sufficient to give min, max, skewness,
etc.; you also need to know the distribution, and then you can perhaps
use runif() as the basis for your code.
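The runif() suggestion amounts to inversion sampling once a full distribution has been fixed. A sketch, using a normal truncated to [1, 99] purely for illustration (this choice is an assumption and will not reproduce all of the requested statistics, e.g. the quartiles):

```r
## Inversion sampling: push uniform draws through the quantile function
## of the chosen distribution (here a normal truncated to [1, 99]).
set.seed(1)
m <- 38.32
s <- sqrt(125)
lo <- pnorm(1,  m, s)            # CDF value at the lower bound
hi <- pnorm(99, m, s)            # CDF value at the upper bound
u <- runif(10000, lo, hi)        # uniforms restricted to [lo, hi]
x <- qnorm(u, m, s)              # truncated-normal draws in [1, 99]
summary(x)                       # compare against the target statistics
```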

Pascal

nmi13 wrote:

Hi every one,

I am trying to generate a normally distributed random variable with the 
following descriptive statistics,

min=1, max=99, variance=125, mean=38.32, 1st quartile=38, median=40, 3rd 
quartile=40, skewness=-0.274.

I know the "rnorm" will allow me to simulate random numbers with mean 38.32 
and Sd=11.18(sqrt(125)). But I need to have the above mentioned descriptive 
statistics for the data that I generate.

I would be thankful to anyone who can help me with this problem.

Regards
Murthy
 



Re: [R] random number generation

2003-10-28 Thread Rolf Turner

Is this a homework problem?

cheers,

Rolf Turner
[EMAIL PROTECTED]



Re: [R] random number generation

2003-10-28 Thread Prof Brian Ripley
Is this a student exercise?  If not, please enlighten us as to the
real-world problem from which this is extracted.

Given that 50% of the probability mass lies between 38 and 40, and the 
median and 3rd quartile are both 40, this cannot be a continuous
distribution.  I would design a discrete distribution on the integers 
1, ..., 99 to meet your requirements: that is `just' a constrained 
non-linear optimization problem.

BTW, a random variable cannot have those characteristics: its distribution 
could, or a sample could, and it is unclear which you mean.  The first is 
easier, so that is what I have assumed.
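A sketch of that optimisation approach (penalty weights and optimiser settings are illustrative, and the quartile/median constraints would enter as further penalty terms):

```r
## Fit a discrete distribution p[1..99] to target moments by penalised
## optimisation; a softmax parametrisation keeps p a valid distribution.
vals   <- 1:99
t.mean <- 38.32; t.var <- 125; t.skew <- -0.274
obj <- function(theta) {
  p  <- exp(theta - max(theta)); p <- p / sum(p)
  mu <- sum(p * vals)
  v  <- sum(p * (vals - mu)^2)
  sk <- sum(p * (vals - mu)^3) / v^1.5
  (mu - t.mean)^2 + (v - t.var)^2 + 100 * (sk - t.skew)^2
}
fit <- optim(rep(0, 99), obj, method = "BFGS", control = list(maxit = 200))
p <- exp(fit$par - max(fit$par)); p <- p / sum(p)
x <- sample(vals, 1000, replace = TRUE, prob = p)  # draws from the fitted pmf
```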

On Wed, 29 Oct 2003, nmi13 wrote:

> I am trying to generate a random variable with the following descriptive 
> statistics,
> 
> min=1, max=99, variance=125, mean=38.32, 1st quartile=38, median=40, 3rd
> quartile=40, skewness=-0.274.
> 
> I tried with rgamma and as I cannot use rnorm, can any one please suggest me 
> what distribution would give me the negative skewness. I need to have the 
> above mentioned descriptive statistics for the data generated.
> 
> I would be thankful to anyone who can help me with this problem.

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595



Re: [R] outer function problems

2003-10-28 Thread Spencer Graves
 I don't know that this is your problem, but I see a potential
scoping issue: it is not obvious to me where Dk is getting n0 and w.
I've solved this kind of problem in the past by declaring n0 and w as
explicit arguments to Dk and then passing them explicitly via "..." in
outer(). In general, I prefer to avoid accessing globals from within
functions. This may not help you here, but it might help in the future.

 Hope this helps.  Spencer Graves
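A sketch of that suggestion applied to the posted code (the values chosen for n0 and w are invented for illustration):

```r
## n0 and w become explicit arguments; outer() forwards them via "...".
Dk <- function(xk, A, B, n0, w) {
  n0 * (A * exp(-0.5 * (xk / w)^2) + B)
}
A <- seq(0.2, 3, by = 0.2)
B <- seq(0.2, 3, by = 0.2)
grid <- outer(A, B, function(A, B, ...) Dk(0, A, B, ...), n0 = 100, w = 2)
grid[1, 1]   # 100 * (0.2 * exp(0) + 0.2)
```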

Scott Norton wrote:

I'm pulling my hair out (and there's not much left!) on this one.
Basically, I'm not getting the same result when I "step" through the
program and evaluate each element separately as when I use the outer()
function in the FindLikelihood() function below.


Here are the functions:

Dk <- function(xk, A, B)
{
  n0 * (A * exp(-0.5 * (xk / w)^2) + B)
}

FindLikelihood <- function(Nk)
{
  A <- seq(0.2, 3, by = 0.2)
  B <- seq(0.2, 3, by = 0.2)
  k <- 7
  L <- outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, k), A, B))) -
                                        Dk(seq(-k, k), A, B)))
  return(L)
}

where Nk <- c(70, 67, 75, 77, 74, 102, 75, 104, 94, 74, 78, 79, 83, 73, 76)




Here's an excerpt from my debug session:

> Nk
 [1]  70  67  75  77  74 102  75 104  94  74  78  79  83  73  76
> debug(FindLikelihood)
> L <- FindLikelihood(Nk)
debugging in: FindLikelihood(Nk)
debug: {
    A <- seq(0.2, 3, by = 0.2)
    B <- seq(0.2, 3, by = 0.2)
    k <- 7
    L <- outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k,
        k), A, B))) - Dk(seq(-k, k), A, B)))
    return(L)
}
Browse[1]> n
debug: A <- seq(0.2, 3, by = 0.2)
Browse[1]> n
debug: B <- seq(0.2, 3, by = 0.2)
Browse[1]> n
debug: k <- 7
Browse[1]> n
debug: L <- outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, k),
    A, B))) - Dk(seq(-k, k), A, B)))
Browse[1]> sum((Nk * log(Dk(seq(-k, k), 0.2, 0.2))) - Dk(seq(-k, k), 0.2, 0.2))
# WHY DOES THIS LINE GIVE ME THE CORRECT RESULT WHEN I SUBSTITUTE 0.2, 0.2 FOR A AND B?
[1] 2495.242
Browse[1]> outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, k),
+     A, B))) - Dk(seq(-k, k), A, B)))

          [,1]     [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]
 [1,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
      # BUT ELEMENT (1,1), WHICH SHOULD ALSO BE (A,B) = (0.2, 0.2), GIVES
      # THE INCORRECT RESULT
 ...
[15,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
[every entry of the 15 x 15 matrix is 58389.48; identical rows and
columns [,9] through [,15] omitted]
Browse[1]>



As "commented" above, when I evaluate a single (A, B) element (i.e. A = 0.2,
B = 0.2) I get a different result than when I use outer(), which should also
be evaluating at A = 0.2, B = 0.2.

Any help appreciated.  I know I'm probably overlooking something simple,
but can anyone point it out?


Thanks!

-Scott



Scott Norton, Ph.D.

Engineering Manager

Nanoplex Technologies, Inc.

2375 Garcia Ave.

Mountain View, CA 94043

www.nanoplextech.com




Re: [R] random number generation

2003-10-28 Thread Peter Flom
These conditions are mutually exclusive for several reasons, so there
is no way to generate such data.  Briefly, the normal distribution is
fully specified by the mean and variance; the other conditions are
superfluous and, in some cases, impossible.

Please tell us what you are actually trying to do and why you need to
do it, and perhaps we can help.

Peter

Peter L. Flom, PhD
Assistant Director, Statistics and Data Analysis Core
Center for Drug Use and HIV Research
National Development and Research Institutes
71 W. 23rd St
www.peterflom.com
New York, NY 10010
(212) 845-4485 (voice)
(917) 438-0894 (fax)



>>> nmi13 <[EMAIL PROTECTED]> 10/28/2003 3:38:22 AM >>>
Hi every one,

I am trying to generate a normally distributed random variable with the

following descriptive statistics,

min=1, max=99, variance=125, mean=38.32, 1st quartile=38, median=40,
3rd 
quartile=40, skewness=-0.274.

I know the "rnorm" will allow me to simulate random numbers with mean
38.32 
and Sd=11.18(sqrt(125)). But I need to have the above mentioned
descriptive 
statistics for the data that I generate.

I would be thankful to anyone who can help me with this problem.

Regards
Murthy



[R] Summary : Whitehead's group sequential procedures

2003-10-28 Thread Emmanuel Charpentier
Dear List,

I recently asked about R implementations of Whitehead's methods for
sequential clinical trials. Here is a summary of the answers so far:

1) There is no public R implementation of those methods.

2) There is interest in such a package, which would do things quite
differently from the Lan-DeMets paradigm (in short: Whitehead's methods
allow any number of unplanned interim analyses, while Lan-DeMets plans
a fixed number of interim analyses at fixed (information) times).

3) I know how to implement the simple things: design (with
approximations), monitoring, and the stopping rule. I'm still hacking
my way through Whitehead's book to understand his procedures for the
distribution of the total number of patients included (which is a
random variable) and for the final analysis (after study termination).

4) It has been suggested that I contact Scott Emerson, author of an S
package called seqTrials. Does anyone have an address for him?

I'll start by implementing the simple stuff, trying to create a
framework flexible enough to allow generalization. I'll probably plead
for help at some point in the future ...

However, don't hold your breath : I'm doing this in my spare time, on 
top of a busy schedule ...

	Emmanuel Charpentier

--
Emmanuel CharpentierTel : +33-(0)1 40 27 35 98
Secrétariat Scientifique du CEDIT   Fax : +33-(0)1 40 27 55 65
Assistance Publique - Hôpitaux de Paris
3, Avenue Victoria, F-75100 Paris RP - France


Re: [R] Summary : Whitehead's group sequential procedures

2003-10-28 Thread Marc Schwartz
On Tue, 2003-10-28 at 08:16, Emmanuel Charpentier wrote:
> Dear List,
> 

SNIP

> 4) It has been suggested that I contact Scott Emerson, author of an S 
> package called seqTrials. Does anyone have an address for him?

http://www.biostat.washington.edu/index.php?page=facdir&fullprofile=yes&lastname=Emerson

Be aware that if you do a search on the product, it is called
"S+SeqTrial". The Insightful page for it is at
http://www.insightful.com/products/seqtrial/default.asp

Also, FWIW, there is another book that you might want to look at, simply
as an alternative resource:

http://www.amazon.com/exec/obidos/tg/detail/-/0849303168/
Group Sequential Methods with Applications to Clinical Trials
by Christopher Jennison, Bruce W. Turnbull
CRC Press; (September 15, 1999)
ISBN: 0849303168

Others here may have other references that would also be helpful.

HTH,

Marc Schwartz



[R] Visualising Moving Vectors

2003-10-28 Thread Laura Quinn
I want to plot a series of wind vectors onto a contoured area map for a
series of weather stations (e.g. arrows showing wind speed/direction
for a particular time snapshot). Can someone please advise me how best
to approach this?

My desired end point is to link a time series of such data together so
that I will in effect have a "movie" displaying the evolution of these
wind vectors over time. Can anyone suggest how this can be achieved? I
believe there is functionality within ImageMagick that might help with
this?

Thanks in advance!
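A sketch of a single frame with base graphics, using arrows() at the station coordinates (all coordinates and wind data below are invented):

```r
## One time snapshot: arrows scaled by wind speed, pointing downwind.
x <- c(1, 2, 3); y <- c(1, 2.5, 1.5)      # station positions
speed <- c(5, 8, 3)                        # wind speeds
dir <- c(45, 90, 200) * pi / 180           # wind directions in radians
plot(x, y, pch = 16, xlim = c(0, 4), ylim = c(0, 3.5),
     xlab = "easting", ylab = "northing")
arrows(x, y, x + 0.05 * speed * cos(dir), y + 0.05 * speed * sin(dir),
       length = 0.1)
```

For the movie, one such frame per time step can be written to numbered PNG files (via png()) and assembled with ImageMagick's convert or animate.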



Re: [R] outer function problems

2003-10-28 Thread Thomas W Blackwell
Scott  -

I agree with Spencer Graves that there's a scoping issue here:
Where does function  Dk()  pick up the values for  n0  and  w,
and does it get them from the SAME place when it's called from
inside  FindLikelihood()  as from outside ?

But more important is this one:  All arithmetic on vectors or
matrices is done element by element;  every matrix or array is
treated as a vector (no 'dim' attribute) during this process,
and "the elements of shorter vectors are recycled as necessary"
(quoting from  help("Arithmetic")).  Therefore,

Dk(seq(-k,k), 0.2, 0.2)

should return a vector of length (2 * k + 1),  and

Nk * log(Dk())
#  (I omit the arguments to Dk() here.)

should produce a vector of length  max(length(Nk), 2 * k + 1)
in which element 1 of Nk is paired with  xk = -k,  element 2
of Nk is paired with  xk = (-k + 1), et cetera.  This product
then has a vector of length  (2 * k + 1)  subtracted from it
and the resulting vector is summed.
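The recycling rule can be seen in a two-line sketch (illustrative values, not the poster's data):

```r
Nk <- c(70, 67, 75)   # length 3
d  <- c(1, 2)         # length 2
Nk * d                # d is recycled: 70*1, 67*2, 75*1, with a warning because
                      # 3 is not a multiple of 2
```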

Now, maybe you have promised to only call  FindLikelihood()
with an argument  Nk  of length 15 = (2 * k + 1), in which
case all the lengths match and element i from Nk is always
paired with the value (i - 8) in the first argument of Dk(),
but there's certainly a lack of defensive programming here.

An alternate way to calculate the grid of likelihood values
which seems to be your intention is to explicitly build four
four-dimensional arrays named  A, B, xk and Nk, all with the
same dimensions, and with the values changing along only one
dimension in each array.  Then do whatever arithmetic you
want with these four arrays (such as the expressions inside
Dk() and FindLikelihood()), and collapse the result by summing
over rows or slices or whatever at the end.  The functions
array(), aperm(), matrix() and '%*%' are useful in this process.
This business of four or five-dimensional arrays is one I use
routinely.  The result is equivalent to as.vector(outer( ...)),
but it forces you to think carefully about the various dimensions.

HTH  -  tom blackwell  -  u michigan medical school  -  ann arbor  -

On Mon, 27 Oct 2003, Scott Norton wrote:

> I'm pulling my hair out (and there's not much left!) on this one. Basically I'm
> not getting the same result when I "step" through the program and evaluate
> each element separately as when I use the outer() function in the
> FindLikelihood() function below.
>
>
>
> Here's the functions:
>
>
>
> Dk<- function(xk,A,B)
>
> {
>
> n0 *(A*exp(-0.5*(xk/w)^2) + B)
>
> }
>
>
>
> FindLikelihood <- function(Nk)
>
> {
>
> A <- seq(0.2,3,by=0.2)
>
> B <- seq(0.2,3,by=0.2)
>
> k <-7
>
> L <- outer(A, B, function(A,B) sum( (Nk*log(Dk(seq(-k,k),A,B))) -
> Dk(seq(-k,k),A,B) ))
>
> return(L)
>
> }
>
>
>
>
>
> where Nk <- c(70 , 67 , 75 , 77 , 74 ,102,  75, 104 , 94 , 74 , 78 , 79 , 83
> , 73 , 76)
>
>
>
>
>
> Here's an excerpt from my debug session..
>
>
>
> > Nk
>
>  [1]  70  67  75  77  74 102  75 104  94  74  78  79  83  73  76
>
> > debug(FindLikelihood)
>
> > L<-FindLikelihood(Nk)
>
> debugging in: FindLikelihood(Nk)
>
> debug: {
>
> A <- seq(0.2, 3, by = 0.2)
>
> B <- seq(0.2, 3, by = 0.2)
>
> k <- 7
>
> L <- outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k,
>
> k), A, B))) - Dk(seq(-k, k), A, B)))
>
> return(L)
>
> }
>
> Browse[1]> n
>
> debug: A <- seq(0.2, 3, by = 0.2)
>
> Browse[1]> n
>
> debug: B <- seq(0.2, 3, by = 0.2)
>
> Browse[1]> n
>
> debug: k <- 7
>
> Browse[1]> n
>
> debug: L <- outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, k),
>
> A, B))) - Dk(seq(-k, k), A, B)))
>
> Browse[1]> sum((Nk * log(Dk(seq(-k, k),0.2,0.2))) - Dk(seq(-k, k), 0.2,
> 0.2))  # WHY DOES THIS LINE GIVE ME THE CORRECT RESULT WHEN I SUBSTITUTE
> 0.2, 0.2 FOR A AND B
>
> [1] 2495.242
>
> Browse[1]> outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, k),
>
> + A, B))) - Dk(seq(-k, k), A, B)))
>
>   [,1] [,2] [,3] [,4] [,5] [,6] [,7]
> [,8]
>
>  [1,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48# BUT ELEMENT (1,1) WHICH SHOULD ALSO BE (A,B) = (0.2, 0.2),
> GIVES THE INCORRECT RESULT
>
>  [2,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
>  [3,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
>  [4,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
>  [5,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
>  [6,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
>  [7,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
>  [8,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
>  [9,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
> [10,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
> [11,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
> 58389.48
>
> [12,] 58389.48

Re: [R] formula parsing, using parts ...

2003-10-28 Thread Thomas Lumley
On Tue, 28 Oct 2003, Russell Senior wrote:

> > "Uwe" == Uwe Ligges <[EMAIL PROTECTED]> writes:
>
>
> Uwe> See ?formula and its "See Also" Section on how to do formula
> Uwe> manipulation. There's also an example on how to construct a
> Uwe> formula.
>
> Russell> In order to use the 'as.formula(paste(response," ~
> Russell> ",factor))' approach, response and factor seem to need to be
> Russell> strings (at least they seem to if response is "log(x)" or the
> Russell> like).  Whereas, for pairwise.t.test they need to be names.
> Russell> What is the proper way to do that?
>

I'd actually advise a different strategy.  Consider:

data(airquality)
formula<- log(Ozone)~factor(Month)

m<-lm(formula,data=data)
a<-anova(m)

mf<-model.frame(lm)

pairwise.t.test(mf[,1], mf[,2])


-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] error message in simulation

2003-10-28 Thread Tu Yu-Kang
Dear R-users,

I am a dentist (so forgive me if my question looks stupid) and came across 
a problem when I did simulations to compare a few single level and two 
level regressions.

The simulations were interrupted and an error message came out like 'Error 
in MEestimate(lmeSt, grps) : Singularity in backsolve at level 0, block 1'.

My colleague suggested that this might be because my code is not efficient
and ran out of memory.  If that is the reason, could you please help me
improve my code.

However, when I slightly changed the parameters, it ran well.  So I suspect
memory is the problem.

I use R 1.8.0 and Windows XP professional.  My computer has a Pentium 4 2.4 
with 512 MB memory.

Thanks in advance.

best regards,

Yu-Kang Tu

Clinical Research Fellow
Leeds Dental Institute
University of Leeds
## change scores simulation
close.screen(all=TRUE)
split.screen(c(3,3))
nitns<-1
nsims<-100
r<-0.1
param1<-c(1:nitns)
param2<-c(1:nitns)
param3<-c(1:nitns)
param4<-c(1:nitns)
param5<-c(1:nitns)
param6<-c(1:nitns)
param7<-c(1:nitns)
param8<-c(1:nitns)
param9<-c(1:nitns)
param10<-c(1:nitns)
param11<-c(1:nitns)
param12<-c(1:nitns)
param13<-c(1:nitns)
param14<-c(1:nitns)
for(itn in 1:nitns){
g<-rbinom(nsims,1,0.5)
b<-rnorm(nsims,0,1)*10
rn<-rnorm(nsims,0,1)*10
a<-b*r+rn*(1-r^2)^0.5
a<-round(a)+50
a<-a-g*5
b<-round(b)+50
abs.2<-function(x) ifelse(x<1,1,x)
b<-abs.2(b)
c<-b-a
p<-c/b
lm1<-lm(a~g)
lm2<-lm(c~g)
lm3<-lm(p~g)
lm4<-lm(a~b+g)
gr<-c(g,g)
occasion<-rep(0:1,c(nsims,nsims))
occ<-occasion-0.5
ppd<-c(b,a)
h<-rep(0,nsims)
mb<-mean(b)
bppd<-b-mb
bappd<-c(h,bppd)
occgr<-occ*gr
subject<-c(1:nsims)
sub<-c(subject,subject)
library(nlme)
lm5<-lme(ppd~occ+gr+occgr,random=~1|sub)
lm5f<-fixed.effects(lm5)
lm5c<-as.matrix(lm5f)
lm5a<-anova(lm5)
lm6<-lme(ppd~occ+gr+occgr,random=~1|sub,method="ML")
lm6f<-fixed.effects(lm6)
lm6c<-as.matrix(lm6f)
lm6a<-anova(lm6)
lm7<-lme(ppd~occ+gr+occgr+bappd,random=~1|sub,method="ML")
lm7f<-fixed.effects(lm7)
lm7c<-as.matrix(lm7f)
lm7a<-anova(lm7)
param1[itn]<-coef(summary(lm1))[2,1]
param2[itn]<-coef(summary(lm2))[2,1]
param3[itn]<-coef(summary(lm3))[2,1]
param4[itn]<-coef(summary(lm4))[3,1]
param5[itn]<-lm5c[4,1]
param6[itn]<-lm6c[4,1]
param7[itn]<-lm7c[4,1]
param8[itn]<-coef(summary(lm1))[2,4]
param9[itn]<-coef(summary(lm2))[2,4]
param10[itn]<-coef(summary(lm3))[2,4]
param11[itn]<-coef(summary(lm4))[3,4]
param12[itn]<-lm5a[4,4]
param13[itn]<-lm6a[4,4]
param14[itn]<-lm7a[4,4]
}
#the error message came out here.
#But if I change some of the variables to:
b<-rnorm(nsims,0,1)*2
rn<-rnorm(nsims,0,1)*2
a<-b*r+rn*(1-r^2)^0.5
a<-round(a)+7
a<-a-g*2
b<-round(b)+9
abs.1<-function(x) ifelse(x<5,5,x)
b<-abs.1(b)
abs.2<-function(x) ifelse(x<1,1,x)
a<-abs.2(a)
There is no problem completing the simulations.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help




Re: [R] formula parsing, using parts ...

2003-10-28 Thread Spencer Graves
 I got errors from Prof. Lumley's code, but the following 
modification produced for me something that seemed to fit his description: 

data(airquality)
formula<- log(Ozone)~factor(Month)
m<-lm(formula,data=airquality)
a<-anova(m)
mf<-model.frame(m)

pairwise.t.test(mf[,1], mf[,2])

 hope this helps.  spencer graves

Thomas Lumley wrote:

On Tue, 28 Oct 2003, Russell Senior wrote:

 

"Uwe" == Uwe Ligges <[EMAIL PROTECTED]> writes:
 

Uwe> See ?formula and its "See Also" Section on how to do formula
Uwe> manipulation. There's also an example on how to construct a
Uwe> formula.
Russell> In order to use the 'as.formula(paste(response," ~
Russell> ",factor))' approach, response and factor seem to need to be
Russell> strings (at least they seem to if response is "log(x)" or the
Russell> like).  Whereas, for pairwise.t.test they need to be names.
Russell> What is the proper way to do that?
   

I'd actually advise a different strategy.  Consider:

data(airquality)
formula<- log(Ozone)~factor(Month)
m<-lm(formula,data=data)
a<-anova(m)
mf<-model.frame(lm)

pairwise.t.test(mf[,1], mf[,2])

	-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Visualising Moving Vectors

2003-10-28 Thread "Hüsing, Johannes"
> My desired end point is to be able to link a time series of such data
> together so that I will in effect have a "movie" displaying 
> the evolution
> of these wind vectors over time - can anyone suggest how this can be
> achieved? I believe that there is a function within Image 
> Magick whereby I
> might be able to acheive this?

An animated GIF is one option for assembling pictures into a "movie". Another
way, if you would like to go the TeX/PDF route, would be TeXPower
(texpower.sourceforge.net).

Cheers 


Johannes

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Re: Starting and Terminating the JVM for package SJava

2003-10-28 Thread Duncan Temple Lang
ZABALZA-MEZGHANI Isabelle wrote:
> Hello,
> 
> I would like to know if there is a possibility to open an R session via Java
> (using the SJava package), then to terminate it, and re-run another.

At present, there is no code in the R system to terminate a session
and shut down the engine.  I have experimented with putting it into R but
have not finished it, so it is a feature of R that still needs to be added.
Nothing about SJava would prohibit us from using it.

> It seems not to be possible. If this is the case, I would like to understand
> where is the problem or the limitation (is it due to the SJava
> implementation, to the Java behavior, or to the R application).
> In fact, I am interesting in re-starting new R sessions during a same Java
> session to manage memory problems in R with large datasets and numerous
> commands through SJava interface (just to "clean" memory).

You might just call the R function gc() periodically (probably from
within Java).  And if you have R objects as Java references, there
are ways to clear these out too.  So, most likely, you don't actually
want to shut down R and restart it.  Instead, you just want to clean
up.


> 
> Waiting for your help,
> 
> Regards,
> 
> Isabelle.
> 
> 
> Isabelle Zabalza-Mezghani
> IFP - Reservoir Engineering Department
> Rueil-Malmaison / France
> Tel : +33 1 47 52 61 99

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] error message in simulation

2003-10-28 Thread Thomas W Blackwell
Yu-Kang  -

Simulations by their nature use randomly generated data.
Sometimes the random data doesn't contain enough information
to fully determine the parameter estimates for one iteration
or another.  It seems likely that that is what happened here.
The design matrix is singular for one iteration (maybe there
are NO simulated subjects in one arm of the trial) and
backsolve() very properly returns an error message.  On the
very next iteration, with a different randomly-generated data
set, everything should work, again.

The function  try()  allows you to get past these problematic
iterations.  It's better, of course, to figure out what the
requirements for a fully-specified data set are, and arrange
the random generator so that these are guaranteed to be met.
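A minimal sketch of the try() pattern, with a deliberately singular matrix standing in for the lme() fit that can fail:

```r
results <- rep(NA, 3)
for (itn in 1:3) {
  m <- if (itn == 2) matrix(0, 2, 2) else diag(2)   # iteration 2 is singular
  fit <- try(solve(m), silent = TRUE)               # stands in for the lme() call
  if (!inherits(fit, "try-error"))
    results[itn] <- fit[1, 1]                       # keep the estimate; else leave NA
}
results                                             # NA marks the failed iteration
```

In the real simulation, the lme() call and the subsequent coefficient extraction would go inside the try() / if() pair in the same way.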

-  tom blackwell  -  u michigan medical school  -  ann arbor  -

On Tue, 28 Oct 2003, Tu Yu-Kang wrote:

> Dear R-users,
>
> I am a dentist (so forgive me if my question looks stupid) and came across
> a problem when I did simulations to compare a few single level and two
> level regressions.
>
> The simulations were interrupted and an error message came out like 'Error
> in MEestimate(lmeSt, grps) : Singularity in backsolve at level 0, block 1'.
>
> My colleague suggested that this might be because my code is not efficient
> and ran out of memory.  If that is the reason, could you please help me
> improve my code.
>
> However, when I slightly changed the parameters, it ran well.  So I suspect
> memory is the problem.
>
> I use R 1.8.0 and Windows XP professional.  My computer has a Pentium 4 2.4
> with 512 MB memory.
>
> Thanks in advance.
>
> best regards,
>
> Yu-Kang Tu
>
> Clinical Research Fellow
> Leeds Dental Institute
> University of Leeds
>
> ## change scores simulation
> close.screen(all=TRUE)
> split.screen(c(3,3))
> nitns<-1
> nsims<-100
> r<-0.1
> param1<-c(1:nitns)
> param2<-c(1:nitns)
> param3<-c(1:nitns)
> param4<-c(1:nitns)
> param5<-c(1:nitns)
> param6<-c(1:nitns)
> param7<-c(1:nitns)
> param8<-c(1:nitns)
> param9<-c(1:nitns)
> param10<-c(1:nitns)
> param11<-c(1:nitns)
> param12<-c(1:nitns)
> param13<-c(1:nitns)
> param14<-c(1:nitns)
> for(itn in 1:nitns){
> g<-rbinom(nsims,1,0.5)
> b<-rnorm(nsims,0,1)*10
> rn<-rnorm(nsims,0,1)*10
> a<-b*r+rn*(1-r^2)^0.5
> a<-round(a)+50
> a<-a-g*5
> b<-round(b)+50
> abs.2<-function(x) ifelse(x<1,1,x)
> b<-abs.2(b)
> c<-b-a
> p<-c/b
> lm1<-lm(a~g)
> lm2<-lm(c~g)
> lm3<-lm(p~g)
> lm4<-lm(a~b+g)
> gr<-c(g,g)
> occasion<-rep(0:1,c(nsims,nsims))
> occ<-occasion-0.5
> ppd<-c(b,a)
> h<-rep(0,nsims)
> mb<-mean(b)
> bppd<-b-mb
> bappd<-c(h,bppd)
> occgr<-occ*gr
> subject<-c(1:nsims)
> sub<-c(subject,subject)
> library(nlme)
> lm5<-lme(ppd~occ+gr+occgr,random=~1|sub)
> lm5f<-fixed.effects(lm5)
> lm5c<-as.matrix(lm5f)
> lm5a<-anova(lm5)
> lm6<-lme(ppd~occ+gr+occgr,random=~1|sub,method="ML")
> lm6f<-fixed.effects(lm6)
> lm6c<-as.matrix(lm6f)
> lm6a<-anova(lm6)
> lm7<-lme(ppd~occ+gr+occgr+bappd,random=~1|sub,method="ML")
> lm7f<-fixed.effects(lm7)
> lm7c<-as.matrix(lm7f)
> lm7a<-anova(lm7)
> param1[itn]<-coef(summary(lm1))[2,1]
> param2[itn]<-coef(summary(lm2))[2,1]
> param3[itn]<-coef(summary(lm3))[2,1]
> param4[itn]<-coef(summary(lm4))[3,1]
> param5[itn]<-lm5c[4,1]
> param6[itn]<-lm6c[4,1]
> param7[itn]<-lm7c[4,1]
> param8[itn]<-coef(summary(lm1))[2,4]
> param9[itn]<-coef(summary(lm2))[2,4]
> param10[itn]<-coef(summary(lm3))[2,4]
> param11[itn]<-coef(summary(lm4))[3,4]
> param12[itn]<-lm5a[4,4]
> param13[itn]<-lm6a[4,4]
> param14[itn]<-lm7a[4,4]
> }
> #the error message came out here.
> #But if I change some of the variables to:
> b<-rnorm(nsims,0,1)*2
> rn<-rnorm(nsims,0,1)*2
> a<-b*r+rn*(1-r^2)^0.5
> a<-round(a)+7
> a<-a-g*2
> b<-round(b)+9
> abs.1<-function(x) ifelse(x<5,5,x)
> b<-abs.1(b)
> abs.2<-function(x) ifelse(x<1,1,x)
> a<-abs.2(a)
>
> There is no problem completing the simulations.
>
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] presentation of software

2003-10-28 Thread Owen, Jason
Hello,

I am considering giving a talk at my university
on R to (mostly) academics.  There wouldn't be any
statisticians, but professors from mathematics,
psychology, economics, etc. who do use some statistical
software in teaching and/or research, and have an acquaintance
with procedures and graphics used in statistics.  Has anyone
given such a talk to a similar audience?  If so, I would be
interested in seeing what you talked about.  Please
send me your talk, outline, or whatever materials you
have.  I want to design an "R is the way" -type talk.

Jason

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] outer function problems

2003-10-28 Thread Scott Norton
Thanks Spencer and Tom for your help!

	Besides the other errors, I realized last night that I'm making a
fundamental error in my interpretation of the outer function.  The following
short code snippet highlights my confusion.

f<-function(A,B) { sum(A+B) }
outer(1:3,2:4,f)
 [,1] [,2] [,3]
[1,]   45   45   45
[2,]   45   45   45
[3,]   45   45   45

I had *thought* that outer() would give:
     [,1] [,2] [,3]
[1,]    3    4    5
[2,]    4    5    6
[3,]    5    6    7

i.e. take each combination from A = 1,2,3 and B = 2,3,4, such as A=1, B=2, put
it into the sum function, and get [1,1]=3 ...
Then grab A[2]=2, B[1]=2, put them into the sum() function to get [2,1]=4,
etc.  That "seems" to be the way the documentation explains "outer", i.e.
element-by-element computation of FUN():
"Description:

 The outer product of the arrays 'X' and 'Y' is the array 'A' with
 dimension 'c(dim(X), dim(Y))' where element 'A[c(arrayindex.x,
 arrayindex.y)] = FUN(X[arrayindex.x], Y[arrayindex.y], ...)'."

Since my interpretation is *definitely* wrong, could someone put into words
how outer() handles the argument vectors and the function call, with
reference to the preceding example?
Also, what I need in my code is to take each combination of elements from
the vectors A and B, feed each pair into a function, and generate a matrix
of results.  How do I do that?

Thanks in advance!!! 
-Scott

Scott Norton, Ph.D.
Engineering Manager
Nanoplex Technologies, Inc.
2375 Garcia Ave.
Mountain View, CA 94043
www.nanoplextech.com


-Original Message-
From: Spencer Graves [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 28, 2003 8:13 AM
To: Scott Norton
Cc: [EMAIL PROTECTED]
Subject: Re: [R] outer function problems

  I don't know that this is your problem, but I see a potential 
scoping issue:  It is not obvious to me where Dk is getting n0 and w.  
I've solved this kind of problem in the past by declaring n0 and w as 
explicit arguments to Dk and then passing them explicitly via "..." in 
"outer".  In general, I prefer to avoid accessing globals from within 
functions.  This may not help you here, but it might help in the future. 
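A sketch of that explicit-argument style; n0 = 100 and w = 2 are made-up values for illustration:

```r
## Dk with n0 and w as explicit arguments instead of globals
Dk <- function(xk, A, B, n0, w) n0 * (A * exp(-0.5 * (xk / w)^2) + B)

## "..." forwards n0 and w through outer() to Dk; no globals involved.
## (This toy FUN is vectorized in A and B; the sum() wrapper in the
## original code needs separate treatment.)
outer(seq(0.2, 0.6, by = 0.2), seq(0.2, 0.6, by = 0.2),
      function(A, B, ...) Dk(0, A, B, ...), n0 = 100, w = 2)
```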

  hope this helps.  spencer graves

Scott Norton wrote:

>I'm pulling my hair out (and there's not much left!) on this one. Basically I'm
>not getting the same result when I "step" through the program and evaluate
>each element separately as when I use the outer() function in the
>FindLikelihood() function below.
>
> 
>
>Here's the functions:
>
> 
>
>Dk<- function(xk,A,B) 
>
>{
>
>n0 *(A*exp(-0.5*(xk/w)^2) + B)
>
>}
>
> 
>
>FindLikelihood <- function(Nk)
>
>{
>
>A <- seq(0.2,3,by=0.2)
>
>B <- seq(0.2,3,by=0.2)
>
>k <-7
>
>L <- outer(A, B, function(A,B) sum( (Nk*log(Dk(seq(-k,k),A,B))) -
>Dk(seq(-k,k),A,B) ))
>
>return(L)
>
>}
>
> 
>
> 
>
>where Nk <- c(70 , 67 , 75 , 77 , 74 ,102,  75, 104 , 94 , 74 , 78 , 79 , 83
>, 73 , 76)
>
> 
>
> 
>
>Here's an excerpt from my debug session..
>
> 
>
>  
>
>>Nk
>>
>>
>
> [1]  70  67  75  77  74 102  75 104  94  74  78  79  83  73  76
>
>  
>
>>debug(FindLikelihood)
>>
>>
>
>  
>
>>L<-FindLikelihood(Nk)
>>
>>
>
>debugging in: FindLikelihood(Nk)
>
>debug: {
>
>A <- seq(0.2, 3, by = 0.2)
>
>B <- seq(0.2, 3, by = 0.2)
>
>k <- 7
>
>L <- outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, 
>
>k), A, B))) - Dk(seq(-k, k), A, B)))
>
>return(L)
>
>}
>
>Browse[1]> n
>
>debug: A <- seq(0.2, 3, by = 0.2)
>
>Browse[1]> n
>
>debug: B <- seq(0.2, 3, by = 0.2)
>
>Browse[1]> n
>
>debug: k <- 7
>
>Browse[1]> n
>
>debug: L <- outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, k), 
>
>A, B))) - Dk(seq(-k, k), A, B)))
>
>Browse[1]> sum((Nk * log(Dk(seq(-k, k),0.2,0.2))) - Dk(seq(-k, k), 0.2,
>0.2))  # WHY DOES THIS LINE GIVE ME THE CORRECT RESULT WHEN I SUBSTITUTE
>0.2, 0.2 FOR A AND B
>
>[1] 2495.242
>
>Browse[1]> outer(A, B, function(A, B) sum((Nk * log(Dk(seq(-k, k), 
>
>+ A, B))) - Dk(seq(-k, k), A, B)))
>
>  [,1] [,2] [,3] [,4] [,5] [,6] [,7]
>[,8]
>
> [1,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48# BUT ELEMENT (1,1) WHICH SHOULD ALSO BE (A,B) = (0.2, 0.2),
>GIVES THE INCORRECT RESULT
>
> [2,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
> [3,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
> [4,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
> [5,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
> [6,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
> [7,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
> [8,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
> [9,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
>[10,] 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48 58389.48
>58389.48
>
>[11,] 5

[R] setting up complicated ANOVA in R

2003-10-28 Thread Bill Shipley
Hello.  I am about to do a rather complicated analysis and am not sure
how to do it.  The experiment has a split-plot design and also repeated
measures.  Both of these complications require one to define an error
term and it seems that one cannot specify two such terms.  The
split-plot command is:

 

aov(y~covariates +A*B+Error(C), data=) where A and B are the fixed
effects and C is the plot-level source of error.

 

The repeated measures command is:

aov(y~covariates+A*B*time + Error(subject), data=) where subject is the
error source for the repeated measures over time.

 

Can these be somehow combined to include a split-plot &
repeated-measures design?  If not, can I perhaps use a mixed-model
analysis with random subjects nested within the whole-plot?

 

Any suggestions or leads are appreciated.

 

Bill Shipley

Associate Editor, Ecology

North American Editor, Annals of Botany

Département de biologie, Université de Sherbrooke,

Sherbrooke (Québec) J1K 2R1 CANADA

[EMAIL PROTECTED]

 
http://callisto.si.usherb.ca:8080/bshipley/

 


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] setting up complicated ANOVA in R

2003-10-28 Thread Prof Brian Ripley
Error can be a formula, and would normally be in split-plot designs.
There are examples in MASS4, chapter 10.

Unless you have exact balance, use lme.
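A hedged sketch of what the combined call might look like, using Bill's placeholder variable names and a hypothetical data frame `dat` (not tested against real data):

```r
## not run -- 'dat' and all variables are placeholders from the question;
## subjects are nested within whole plots C in the Error formula:
aov(y ~ covariates + A * B * time + Error(C/subject), data = dat)

## the lme() analogue when the design is not exactly balanced:
library(nlme)
lme(y ~ covariates + A * B * time, random = ~ 1 | C/subject, data = dat)
```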

On Tue, 28 Oct 2003, Bill Shipley wrote:

> Hello.  I am about to do a rather complicated analysis and am not sure
> how to do it.  The experiment has a split-plot design and also repeated
> measures.  Both of these complications require one to define an error
> term and it seems that one cannot specify two such terms.  The
> split-plot command is:
> 
>  
> 
> aov(y~covariates +A*B+Error(C), data=) where A and B are the fixed
> effects and C is the plot-level source of error.
> 
>  
> 
> The repeated measures command is:
> 
> aov(y~covariates+A*B*time + Error(subject), data=) where subject is the
> error source for the repeated measures over time.
> 
>  
> 
> Can these be somehow combined to include a split-plot &
> repeated-measures design?  If not, can I perhaps use a mixed-model
> analysis with random subjects nested within the whole-plot?
> 
>  
> 
> Any suggestions or leads are appreciated.
> 
>  
> 
> Bill Shipley
> 
> Associate Editor, Ecology
> 
> North American Editor, Annals of Botany
> 
> Département de biologie, Université de Sherbrooke,
> 
> Sherbrooke (Québec) J1K 2R1 CANADA
> 
> [EMAIL PROTECTED]
> 
>  
> http://callisto.si.usherb.ca:8080/bshipley/
> 
>  
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> 
> 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] problem with the installed R script

2003-10-28 Thread Liaw, Andy
Dear R-help,

I had a problem running a Perl script on AIX inside pipe() or system().  The
Perl script was not finding some modules when run from within R.  The
sysadmin tracked it down to a problem with R_LD_LIBRARY_PATH in the
installed /usr/local/bin/R script:

original:
:
${R_LD_LIBRARY_PATH=${R_HOME}/bin:/opt/freeware/lib:/usr/local/lib:/opt/freeware/lib:/usr/local/lib:/opt/freeware/lib:/usr/local/lib:${R_HOME}/bin}

new:
:
${R_LD_LIBRARY_PATH=${R_HOME}/bin:/usr/local/lib:/opt/freeware/lib:/usr/local/lib:/opt/freeware/lib:/usr/local/lib:/opt/freeware/lib:/usr/local/lib:${R_HOME}/bin}

I was told that our installation of Perl must look in /usr/local/lib for the
modules to load.  This directory was at the end of "original" path and
therefore Perl was not able to load the modules.  Does anyone know what, if
anything, needs to be done at the configure step to correct this problem, so
we don't need to fix this by hand every time?

BTW, this is R-1.7.1 (they've been having problem getting 1.8.0 to compile
on that system).

Thanks!
Andy


Andy Liaw, PhD
Biometrics Research  PO Box 2000, RY33-300 
Merck Research Labs   Rahway, NJ 07065
mailto:[EMAIL PROTECTED]732-594-0820

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] 'levelplot' with an option 'at'

2003-10-28 Thread Martina Pavlicova

Hi all,
I encountered a difference between versions 1.6.1 and 1.7.0 when using
levelplot with an option 'at'. Here are the specs of the two platforms
used:

> R.version
 _
platform sparc-sun-solaris2.8
arch sparc
os   solaris2.8
system   sparc, solaris2.8
status
major1
minor6.1
year 2002
month11
day  01
language R

> R.version
 _
platform i686-pc-linux-gnu
arch i686
os   linux-gnu
system   i686, linux-gnu
status
major1
minor7.0
year 2003
month04
day  16
language R


I created an easy example of two levelplots (one without the option 'at'
and one with the option 'at'), which I ran under version 1.6.1. The plots
are called:
version161.without_at.jpg
version161.with_at.jpg

After updating to version 1.7.0, I ran the same two plots and got the
following files:
version170.without_at.jpg
version170.with_at.jpg
When I don't include the option 'at' in a levelplot, the plots
version161.without_at.jpg and version170.without_at.jpg are similar
(they differ only in the labels for contours). BUT if I include the option
'at', version 1.7.0 produces a very different picture, which I believe is
wrong (compare files version161.with_at.jpg and version170.with_at.jpg).
WHY is that? Am I missing something?

I have attached all 4 plots and also the commands I used to create the
small example.

Thank you for all your help.

Martina Pavlicova

-
library(lattice)

x <- row(matrix(NA,11,11))-6
y <- col(matrix(NA,11,11))-6
z <- x*y

jpeg("version161.without_at.jpg")
foo <- levelplot(z~x*y,contour=T, cuts=5,
 ##at=c(-10,-5,0,5,10),
 panel=function(x,y,...){
   panel.levelplot(x,y,  ...)
 })
print(foo)
dev.off()

jpeg("version161.with_at.jpg")
foo <- levelplot(z~x*y,contour=T, cuts=5,
 at=c(-10,-5,0,5,10),
 panel=function(x,y,...){
   panel.levelplot(x,y,  ...)
 })
print(foo)
dev.off()



--
Department of Statistics Office Phone: (614) 292-1567
1958 Neil Avenue, 304E Cockins Hall  FAX: (614) 292-2096
The Ohio State UniversityE-mail: [EMAIL PROTECTED]
Columbus, OH 43210-1247  www.stat.ohio-state.edu/~pavlicov
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Confidence ellipse for correlation

2003-10-28 Thread Tanya Murphy
Hello,

SAS's point-and-click interface has the option to produce a scatterplot with a
superimposed confidence ellipse for the correlation coefficient. Since I
generally like R so much better, I would like to reproduce this in R. I've
been playing with the ellipse package. In order to have the points and the
ellipse on the same graph, I've done the following.
(Load ellipse package...)
> data(Puromycin)
> attach(Puromycin)
> my<-mean(rate)
> mx<-mean(conc)
> sdy<-sd(rate)
> sdx<-sd(conc)
> r<-cor(conc,rate)
> plot(ellipse(r,scale=c(sdx,sdy),centre=c(mx,my)),type='l')
> points(conc,rate)

1) Is my use of 'scale' and 'centre' theoretically correct?
2) Is there a more efficient way to get the 5 parameters? (I guess I could 
write a little function, but has it already been done?)

The non-linear relationship between these variables brings up another point:
is there a way to plot a contour (empirical?) containing, say, 95% of the
values?
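One possible little function for question 2, a sketch in base R only (`ellipse.args` is a made-up name; it just collects the five numbers computed above):

```r
## collect the five parameters in one place
ellipse.args <- function(x, y)
  list(r = cor(x, y), scale = c(sd(x), sd(y)), centre = c(mean(x), mean(y)))

## with the ellipse package loaded, usage would then be:
##   a <- ellipse.args(conc, rate)
##   plot(ellipse(a$r, scale = a$scale, centre = a$centre), type = "l")
##   points(conc, rate)
```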

Thanks for your time!

Tanya

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Visualising Moving Vectors

2003-10-28 Thread Ted Harding
On 28-Oct-03 Laura Quinn wrote:
> I am wanting to plot a series of wind vectors onto a contoured area map
> for a series of weather stations (eg arrows showing wind
> speed/direction for a particular time snapshot), can someone please
> advise me how best to approach this?
> 
> My desired end point is to be able to link a time series of such data
> together so that I will in effect have a "movie" displaying the
> evolution of these wind vectors over time - can anyone suggest how
> this can be achieved? I believe that there is a function within Image
> Magick whereby I might be able to acheive this?
> 
> Thanks in advance!

ImageMagick (which is a suite of several stand-alone programs) includes
the program 'animate' which does just this. If you have your sequence
of files in alphabetical sort order (e.g. view001.png, view002.png, ...)
then

  animate view*.png

will cycle through them in order (and rather briskly; however, you
can use the "-delay" option to choose your frame speed).  There's
also a facility to merge a sequence of image files into a single
file, an "animated GIF", which can be "played" in any standard
web browser, only I don't recall the details.
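The merging step can be done with ImageMagick's 'convert' program (a sketch; the frame and output names are illustrative):

```shell
# merge the frames into one animated GIF; -delay is in 1/100 s,
# -loop 0 means repeat forever
convert -delay 20 -loop 0 view*.png wind.gif
```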

Hoping this helps,
Ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 167 1972
Date: 28-Oct-03   Time: 18:58:10
-- XFMail --

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Loading a "sub-package"

2003-10-28 Thread Ted Harding
Hi Folks,

The inspiration for this query is described below, but it
prompts a general question:

If one wants to use only one or a few functions from a library,
is there a way to load only these, without loading the library,
short of going into the package source and extracting what is
needed (including of course any auxiliary functions and compiled
code they may depend on)?

What prompted this is that I needed to simulate some MVnormal data.
There's a useful function 'mvrnorm' which does just that, in the
library/package MASS. So that's what I used.

But MASS is huge! Hence the query. (I've also had occasion to filch
single functions from other libraries as well).

(Of course, in reality I've noticed that 'mvrnorm' is a few lines
of pure R code which can easily be lifted out and run "stand-alone",
so the issue is not really a problem in this case. But I think the
question is a good one, well exemplified by the case described.)

With thanks,
Ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 167 1972
Date: 28-Oct-03   Time: 18:23:42
-- XFMail --

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] outer function problems

2003-10-28 Thread Thomas Lumley
On Tue, 28 Oct 2003, Scott Norton wrote:

> Thanks Spencer and Tom for your help!
>
>   Besides the other errors, I realized last night that I'm making a
> fundamental error in my interpretation of the outer function.  The following
> short code snippet highlights my confusion.
>
> f<-function(A,B) { sum(A+B) }
> outer(1:3,2:4,f)
>  [,1] [,2] [,3]
> [1,]   45   45   45
> [2,]   45   45   45
> [3,]   45   45   45
>
> I had *thought* that outer() would give:
>  [,1] [,2] [,3]
> [1,]    3    4    5
> [2,]    4    5    6
> [3,]    5    6    7

This is actually a FAQ


> ie. take each combination from A = 1,2,3; B=2,3,4 such as A=1,B=2 put it in
> the sum function, get [1,1]=3 ...
> Then grab A[2]=2,B[1]=2, put them in the sum() function to get [2,1]=4,
> etc... That "seems" to be the way the instructions explain "outer", i.e.
> element-by-element computation of FUN()
> "Description:
>
>  The outer product of the arrays 'X' and 'Y' is the array 'A' with
>  dimension 'c(dim(X), dim(Y))' where element 'A[c(arrayindex.x,
>  arrayindex.y)] = FUN(X[arrayindex.x], Y[arrayindex.y], ...)'."
>
> Since my interpretation is *definitely* wrong, could someone put in words
> how "OUTER" handles the argument vectors and the functional call with
> reference to the preceding example?

As the description says, outer() constructs *one* call to FUN, replicating
all the arguments.

As the Details section says
 'FUN' must be a function (or the name of it) which expects at
 least two arguments and which operates elementwise on arrays.

Your function f() doesn't.


> Also, what I need to happen in my code is to actually take each combination
> of elements from vectors, A and B, and "feed" them repeatedly into a
> function, generating a matrix of results.  How then do I do that?

One general-purpose way is to use mapply

outer(a,b, function(ai,bj) mapply(f,ai,bj))

The reason this isn't the default is that it is fairly slow.  If your
function f() can be vectorised then that will give much better
performance. For example, compare
  outer(a,b, function(ai,bj) mapply(sum,ai,bj))
and
  outer(a,b, "+")
on largish a and b.

An automatic solution *has to* use a loop, and length(a)*length(b)
evaluations of the function.
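A minimal check of the behaviour described above (a sketch; `f` is the poster's non-vectorised function):

```r
f <- function(A, B) sum(A + B)      # not elementwise: collapses to one number
a <- 1:3; b <- 2:4

outer(a, b, f)                      # a single call to f: every cell is 45

# elementwise, via mapply() as suggested above
m <- outer(a, b, function(ai, bj) mapply(f, ai, bj))
m                                   # rows: 3 4 5 / 4 5 6 / 5 6 7

identical(m, outer(a, b, "+"))      # TRUE: here f is really just "+"
```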

-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Confidence ellipse for correlation

2003-10-28 Thread Prof Brian Ripley
On Tue, 28 Oct 2003, Tanya Murphy wrote:

> Hello,
> 
> SAS' point and click interface has the option of producing a scatterplot with a 
> superimposed confidence ellipse for the correlation coefficient. Since I 
> generally like R so much better, I would like to reproduce this in R. I've 
> been playing with the ellipse package. In order to have the points and the 
> ellipse on the same graph I've done the following. 
> (Load ellipse package...)
> > data(Puromycin)
> > attach(Puromycin)
> > my<-mean(rate)
> > mx<-mean(conc)
> > sdy<-sd(rate)
> > sdx<-sd(conc)
> > r<-cor(conc,rate)
> > plot(ellipse(r,scale=c(sdx,sdy),centre=c(mx,my)),type='l')
> > points(conc,rate)
> 
> 1) Is my use of 'scale' and 'centre' theoretically correct?

Depends on whose theory you have in mind!  This is not `a confidence
ellipse for the correlation coefficient', as confidence ellipses are for
pairs of parameters, not variables.  It seems to be a plot of a contour of
the fitted bivariate normal.

> 2) Is there a more efficient way to get the 5 parameters? (I guess I could 
> write a little function, but has it already been done?)

You could do things like
mxy <- mean(Puromycin[c("rate", "conc")])
sxy <- sapply(Puromycin[c("rate", "conc")], sd)

> The non-linear relationship between these variables brings up another point: 
> Is there a way to plot a contour (empirical?) containing, say, 95% of the 
> values.

Yes.  You need a 2D density estimate (e.g. kde2d in MASS) then compute the
density values at the points and draw the contour of the density which
includes 95% of the points (at a level computed from the sorted values via
quantile()).
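That recipe, sketched in code (assumes the recommended MASS package is available; the 95% figure is approximate, since it is based on the estimated density at the sample points):

```r
library(MASS)                       # for kde2d()
x <- Puromycin$conc; y <- Puromycin$rate
dens <- kde2d(x, y, n = 50)
# estimated density at (roughly) each data point, via the nearest grid cell
ix <- findInterval(x, dens$x)
iy <- findInterval(y, dens$y)
h <- dens$z[cbind(ix, iy)]
lev <- quantile(h, 0.05)            # ~95% of the points lie above this density
plot(x, y)
contour(dens, levels = lev, add = TRUE, drawlabels = FALSE)
```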

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Loading a "sub-package"

2003-10-28 Thread Prof Brian Ripley
On Tue, 28 Oct 2003 [EMAIL PROTECTED] wrote:

> But MASS is huge! Hence the query. (I've also had occasion to filch
> single functions from other libraries as well).

MASS is *not* huge, and indeed is negligible compared to what is already
loaded.  nlme might be large, but few packages are noticeable and none are
huge 

Luke and I (but principally Luke) have been experimenting with 
load-on-demand for R objects, and indeed MASS already does that for its 
data objects.  It's possible that this will be a non-issue by the next 
non-patch release.

Some data (R 1.8.0 --vanilla)

> gc()
         used (Mb) gc trigger (Mb)
Ncells 416460 11.2     597831 16.0
Vcells 113224  0.9     786432  6.0
> library(MASS)
> gc()
 used (Mb) gc trigger (Mb)
Ncells 463337 12.4 667722 17.9
Vcells 121995  1.0 786432  6.0

I'd fail any student who said `MASS was huge'.

I assume you don't have methods loaded if you are concerned about 
performance 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] strptime command in R

2003-10-28 Thread morozov
Hello all:

I have a column of times in format

   x
"16:30:00"
"16:30:03"
"16:59:00"
etc

which I need to convert into time variables and do some operations on.

I run the command y<-strptime(x,"%H:%M:%S"). This executes almost instantly (for 
a column x of length 1000) on Windows, but on Unix, where I run my production 
jobs, it takes over 4 minutes. I know that generally my Unix box is much 
more powerful than my Windows machine, and R generally runs faster on Unix, but 
this particular command is very, very slow. Why is that? How can I speed it 
up without having to parse the strings by hand?

Thank you very much,
Vlad.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] R performance on Unix

2003-10-28 Thread morozov
Hi
I'm observing a huge difference in the performance of R on Windows and 
Unix, even though I know that my Unix machine is much more powerful than my 
Windows machine. 
In particular, any character-processing task is very time-consuming on Unix.

strptime(x,"%H:%M:%S") is about 10 times slower on Unix for a vector x of 
length ~500. read.table() is also very slow. Is there any way to speed 
these up?

Thanks a lot,
Vlad.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] 'levelplot' with an option 'at'

2003-10-28 Thread Martina Pavlicova

Hi again,

since the attached plots did not go through, I created a quick web-page,
where all the plots are posted. Here it is:

http://www.stat.ohio-state.edu/~pavlicov/levelplot/

Thanks again.

Martina Pavlicova
--
Department of Statistics Office Phone: (614) 292-1567
1958 Neil Avenue, 304E Cockins Hall  FAX: (614) 292-2096
The Ohio State UniversityE-mail: [EMAIL PROTECTED]
Columbus, OH 43210-1247  www.stat.ohio-state.edu/~pavlicov


On Tue, 28 Oct 2003, Martina Pavlicova wrote:

>
> Hi all,
> I encountered a difference between versions 1.6.1 and 1.7.0 when using
> levelplot with an option 'at'. Here are the specs of the two platforms
> used:
>
> > R.version
>  _
> platform sparc-sun-solaris2.8
> arch sparc
> os   solaris2.8
> system   sparc, solaris2.8
> status
> major1
> minor6.1
> year 2002
> month11
> day  01
> language R
>
> > R.version
>  _
> platform i686-pc-linux-gnu
> arch i686
> os   linux-gnu
> system   i686, linux-gnu
> status
> major1
> minor7.0
> year 2003
> month04
> day  16
> language R
>
>
> I created an easy example of two levelplots (one without an option 'at'
> and one with an option 'at') which I ran through version 1.6.1. The plots
> are called:
> version161.without_at.jpg
> version161.with_at.jpg
>
> After updating to version 1.7.0, I ran the same two plots and got the
> following files:
> version170.without_at.jpg
> version170.with_at.jpg
> When I don't include the option 'at' into a levelplot, the plots
> version161.without_at.jpg and version170.without_at.jpg are similar
> (differ only in labels for contours). BUT if I include the option 'at',
> version 1.7.0 produces a very different picture, which I believe is wrong
> (compare files version161.with_at.jpg and version170.with_at.jpg). WHY is
> that? Am I missing something?
>
> I have attached all 4 plots and also the commands I used to create the
> small example.
>
> Thank you for all your help.
>
> Martina Pavlicova
>
> -
> library(lattice)
>
> x <- row(matrix(NA,11,11))-6
> y <- col(matrix(NA,11,11))-6
> z <- x*y
>
> jpeg("version161.without_at.jpg")
> foo <- levelplot(z~x*y,contour=T, cuts=5,
>  ##at=c(-10,-5,0,5,10),
>  panel=function(x,y,...){
>panel.levelplot(x,y,  ...)
>  })
> print(foo)
> dev.off()
>
> jpeg("version161.with_at.jpg")
> foo <- levelplot(z~x*y,contour=T, cuts=5,
>  at=c(-10,-5,0,5,10),
>  panel=function(x,y,...){
>panel.levelplot(x,y,  ...)
>  })
> print(foo)
> dev.off()
>
>
>
> --
> Department of Statistics Office Phone: (614) 292-1567
> 1958 Neil Avenue, 304E Cockins Hall  FAX: (614) 292-2096
> The Ohio State UniversityE-mail: [EMAIL PROTECTED]
> Columbus, OH 43210-1247  www.stat.ohio-state.edu/~pavlicov
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] strptime command in R

2003-10-28 Thread Prof Brian Ripley
It would appear to mean that your `Unix' box's strptime is seriously 
broken, but you could profile R (the process, not R profiling) to find 
out.

You could try using package chron to convert the strings and then 
as.POSIXct, but that's inelegant at best.
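For reference, the conversion under discussion and a typical follow-up operation, in base R (a minimal sketch; the date part defaults to the current day):

```r
x <- c("16:30:00", "16:30:03", "16:59:00")
y <- as.POSIXct(strptime(x, "%H:%M:%S"))   # parse, then a calendar-time vector
# a typical "operation on time variables": differences in seconds
as.numeric(difftime(y[2], y[1], units = "secs"))   # 3
```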

On Tue, 28 Oct 2003, morozov wrote:

> Hello all:
> 
> I have a column of times in format
> 
>x
> "16:30:00"
> "16:30:03"
> "16:59:00"
> etc
> 
> which I need to convert into time variables and do some operations on.
> 
> I do the command y<-strptime(x,"%H:%M:%S"). This executes almost istantly (for 
> a column x of length 1000 in Windows, but in Unix, where I run my production 
> jobs, this takes over 4 minutes. I know that generally my Unix box is much 
> more powerfull than my Win machine, and R runs generally faster on Unix, but 
> this particular command is very very slow. Why is that? How can I speed that 
> up without having to parse the strings by hand?
> 
> Thank you very much,
> Vlad.
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> 
> 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] ifelse with a factor variable

2003-10-28 Thread Goran Brostrom
'ifelse' changes factors to character vectors (R-1.7.1, Linux):

> table(bal$soc.40)

  tax noble semi-landless  landless   unknown 
 4035  5449 13342  9348 0 


> blah <- ifelse(is.na(bal$soc.40), "unknown", bal$soc.40)
> table(blah)
blah
      1       2       3       4 unknown 
   4035    5449   13342    9348    7970 

How do I get what I want (I mean: simply)? Upgrade to 1.8.0?

Göran

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] ifelse with a factor variable

2003-10-28 Thread Liaw, Andy
Does the following help you?

> x <- factor(c("A", "B", NA))
> levels(x) <- c(levels(x), "unknown")  # add an "unknown" level
> x[is.na(x)] <- "unknown"  # change NAs to "unknown"
> x
[1] A   B   unknown
Levels: A B unknown
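The same fix can be written with addNA() in more recent versions of R (a sketch with made-up data, not part of the original exchange):

```r
f <- factor(c("tax", "noble", NA, "landless"))
f <- addNA(f)                             # make NA an explicit level
levels(f)[is.na(levels(f))] <- "unknown"  # then give that level a label
table(f)                                  # "unknown" is now counted, not dropped
```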

Andy

> -Original Message-
> From: Goran Brostrom [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, October 28, 2003 3:36 PM
> To: [EMAIL PROTECTED]
> Subject: [R] ifelse with a factor variable
> 
> 
> 'ifelse' changes factors to character vectors (R-1.7.1, Linux):
> 
> > table(bal$soc.40)
> 
>   tax noble semi-landless  landless   unknown 
>  4035  5449 13342  9348 0 
> 
> 
> > blah <- ifelse(is.na(bal$soc.40), "unknown", bal$soc.40)
> > table(blah)
> blah
>   1   2   3   4 unknown 
>40355449   1334293487970 
> 
> How do I get what I want (I mean: simply)? Upgrade to 1.8.0?
> 
> Göran
> 
> __
> [EMAIL PROTECTED] mailing list 
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] ifelse with a factor variable

2003-10-28 Thread Goran Brostrom
* Liaw, Andy <[EMAIL PROTECTED]> [2003-10-28 21:55]:
> Does the following help you?
> 
> > x <- factor(c("A", "B", NA))
> > levels(x) <- c(levels(x), "unknown")  # add an "unknown" level
> > x[is.na(x)] <- "unknown"  # change NAs to "unknown"
> > x
> [1] A   B   unknown
> Levels: A B unknown

Yes! Thank you,

Göran

> 
> Andy
> 
> > -Original Message-
> > From: Goran Brostrom [mailto:[EMAIL PROTECTED] 
> > Sent: Tuesday, October 28, 2003 3:36 PM
> > To: [EMAIL PROTECTED]
> > Subject: [R] ifelse with a factor variable
> > 
> > 
> > 'ifelse' changes factors to character vectors (R-1.7.1, Linux):
> > 
> > > table(bal$soc.40)
> > 
> >   tax noble semi-landless  landless   unknown 
> >  4035  5449 13342  9348 0 
> > 
> > 
> > > blah <- ifelse(is.na(bal$soc.40), "unknown", bal$soc.40)
> > > table(blah)
> > blah
> >   1   2   3   4 unknown 
> >40355449   1334293487970 
> > 
> > How do I get what I want (I mean: simply)? Upgrade to 1.8.0?
> > 
> > Göran
> > 
> > __
> > [EMAIL PROTECTED] mailing list 
> > https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> > 
> 

-- 
 Göran Broströmtel: +46 90 786 5223
 Department of Statistics  fax: +46 90 786 6614
 Umeå University   http://www.stat.umu.se/egna/gb/
 SE-90187 Umeå, Sweden e-mail: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] 'levelplot' with an option 'at'

2003-10-28 Thread Martina Pavlicova

Thanks to Deepayan Sarkar I can report that levelplot works properly
in 1.8.0 (I don't know more specs). Prof. Ripley correctly noted that I
did not mention which version of the 'lattice' library I am using. Here is
more info about the lattice library installed for each version of R.

I understand that the easiest solution is to forget about the problem
and update R and also the lattice library, but because of the set-up on our
system, that is not possible for me in the near future.

I appreciate all your help and comments. Thank you.

Martina Pavlicova

R 1.6.1
---
Package: lattice
Version: 0.6-8
Date: 2002-12-22
Priority: recommended
Title: Lattice Graphics
Author: Deepayan Sarkar <[EMAIL PROTECTED]>
Maintainer: Deepayan Sarkar <[EMAIL PROTECTED]>
Description: Implementation of Trellis Graphics
Depends: R (>= 1.6.0), grid (>= 0.7), modreg
License: GPL version 2 or later
Built: R 1.6.1; sparc-sun-solaris2.8; Thu Jan 2 16:12:59 EST 2003

R 1.7.0
---
Package: lattice
Version: 0.7-11
Date: 2003/04/10
Priority: recommended
Title: Lattice Graphics
Author: Deepayan Sarkar <[EMAIL PROTECTED]>
Maintainer: Deepayan Sarkar <[EMAIL PROTECTED]>
Description: Implementation of Trellis Graphics
Depends: R (>= 1.7.0), grid (>= 0.7), modreg
License: GPL version 2 or later
Built: R 1.7.0; i686-pc-linux-gnu; 2003-04-17 13:07:31



On Tue, 28 Oct 2003, Martina Pavlicova wrote:

>
> Hi again,
>
> since the attached plots did not go through, I created a quick web-page,
> where all the plots are posted. Here it is:
>
> http://www.stat.ohio-state.edu/~pavlicov/levelplot/
>
> Thanks again.
>
> Martina Pavlicova
> --
> Department of Statistics Office Phone: (614) 292-1567
> 1958 Neil Avenue, 304E Cockins Hall  FAX: (614) 292-2096
> The Ohio State UniversityE-mail: [EMAIL PROTECTED]
> Columbus, OH 43210-1247  www.stat.ohio-state.edu/~pavlicov
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R performance on Unix

2003-10-28 Thread Jason Turner
morozov wrote:

Hi
I'm observing a huge difference in the performance speed of R on Windows and 
Unix, even though I know that my Unix machine is much more powerful than my 
Win machine. 
In particular, any character processing task is very time consuming on Unix.

strptime(x,"%H:%M:%S") is about 10 times slower on Unix for vector x of the 
length of ~ 500. read.table() also is very slow. is there any way to speed up 
these ?

As was pointed out, the strptime issue sounds like a C-library problem 
on your machine.  The text processing might be the same.  Is this a 
development version of a Un*x OS?  What's the output of uname -a?  Is it 
a source or binary build of R?  If binary, try building and installing 
from source (provided you've got the disk space, you can do this, even 
if you're not root).  Is the problem apparent on other boxen with 
similar OSs (ie is it hardware)?

If you absolutely must continue to use that box and OS and R build, 
read.table() can be sped up using the colClasses argument (R doesn't 
have to guess what class each column should be), but it sounds like a 
problematic installation.
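The colClasses trick looks like this (a sketch with made-up columns; the speed-up comes from skipping the per-column type guessing):

```r
# write a tiny file so the example is self-contained
tf <- tempfile(fileext = ".csv")
writeLines(c("time,value", "16:30:00,1.5", "16:30:03,2.5"), tf)
# declare the column types up front instead of letting R guess them
d <- read.csv(tf, colClasses = c("character", "numeric"))
str(d)
```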

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] POSIX

2003-10-28 Thread Erin Hodgess
Dear R People:

I have a question about POSIX.  Is this an acronym for something?

If I want to refer to this in a paper, what is the proper way to do so, 
please?

Thanks so much!

Sincerely,
Erin Hodgess
mailto: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] POSIX

2003-10-28 Thread Dirk Eddelbuettel
On Tue, Oct 28, 2003 at 04:48:28PM -0600, Erin Hodgess wrote:
> I have a question about POSIX.  Is this an acronym for something?

http://www.google.com/search?q=POSIX

-- 
Those are my principles, and if you don't like them... well, I have others.
-- Groucho Marx

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] R with C code

2003-10-28 Thread solares

Hello, my question is whether there exists a command in R that permits 
executing code written in C, like the "exec" command of Tcl, which permits 
executing C code inside Tcl.  Thanks, Ruben

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R with C code

2003-10-28 Thread Ben Bolker

  See "Writing R Extensions" in the documentation that comes with R ...

On Tue, 28 Oct 2003 [EMAIL PROTECTED] wrote:

> 
> Hello my question is if a exists a command in R that permit to execute a 
> code written in C, as for example the command "exec" of tcl that permits to 
> execute a code of c inside tcl.  Thanks Ruben
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> 

-- 
620B Bartram Hall[EMAIL PROTECTED]
Zoology Department, University of Floridahttp://www.zoo.ufl.edu/bolker
Box 118525   (ph)  352-392-5697
Gainesville, FL 32611-8525   (fax) 352-392-3704

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] R with C code

2003-10-28 Thread Simon Blomberg
?.C

Cheers,

Simon.

Simon Blomberg, PhD
Depression & Anxiety Consumer Research Unit
Centre for Mental Health Research
Australian National University
http://www.anu.edu.au/cmhr/
[EMAIL PROTECTED]  +61 (2) 6125 3379


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, 29 October 2003 10:08 AM
> To: [EMAIL PROTECTED]
> Subject: [R] R with C code
> 
> 
> 
> Hello my question is if a exists a command in R that permit 
> to execute a 
> code written in C, as for example the command "exec" of tcl 
> that permits to 
> execute a code of c inside tcl.  Thanks Ruben
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R with C code

2003-10-28 Thread Paulo Justiniano Ribeiro Jr
I believe you are looking for the .C() function
For more details
read its documentation and the "Writing R extensions Manual"
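A minimal end-to-end sketch of the .C() route (assumes a working compiler toolchain; the C function here is made up for illustration):

```r
# write a tiny C source file: .C() passes pointers to copies of the arguments
writeLines(c(
  "void add_one(double *x, int *n)",
  "{",
  "    int i;",
  "    for (i = 0; i < *n; i++) x[i] += 1.0;",
  "}"), "add_one.c")
system("R CMD SHLIB add_one.c")                  # compile to a shared library
dyn.load(paste0("add_one", .Platform$dynlib.ext))
res <- .C("add_one", x = as.double(1:3), n = as.integer(3))
res$x   # 2 3 4
```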


On Tue, 28 Oct 2003 [EMAIL PROTECTED] wrote:

>
> Hello my question is if a exists a command in R that permit to execute a
> code written in C, as for example the command "exec" of tcl that permits to
> execute a code of c inside tcl.  Thanks Ruben
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>
>

Paulo Justiniano Ribeiro Jr
Departamento de Estatística
Universidade Federal do Paraná
Caixa Postal 19.081
CEP 81.531-990
Curitiba, PR  -  Brasil
Tel: (+55) 41 361 3471
Fax: (+55) 41 361 3141
e-mail: [EMAIL PROTECTED]
http://www.est.ufpr.br/~paulojus

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] ts vs. POSIX

2003-10-28 Thread Erin Hodgess
OK.

What if I have a time series which is collected every Monday, please?

What is the proper way to use the start option within the ts command
in order to indicate that this is Monday data, please?

Thanks again!

Sincerely,
Erin

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] ts vs. POSIX

2003-10-28 Thread Jason Turner
Erin Hodgess wrote:
What if I have a time series which is collected every Monday, please?

What is the proper way to use the start option within the ts command
in order to indicate that this is Monday data, please?
ts objects don't directly support dates.  There is some provision for 
monthly data, but this isn't the same as uniform, across-the-board date 
support.  What they do have is a start time, a deltat, and frequency 
(observations per period).  The main reason to use ts objects *isn't* 
the date/time handling, but for the nice functions (acf, spectrum, etc) 
you can use for regularly spaced time samples.

For weekly data, I'd use one of the following approaches (assuming the 
series starts in the first Monday of 2003):

1)
# dates in a year,week format
> foo <- ts(1:100,start=c(2003,1),frequency=52)
or

2)
# dates as numeric representation of POSIXct objects
foo <- ts(1:100,start=as.numeric(as.POSIXct("2003-1-6")),deltat=60*60*24*7)
> start(foo)
[1] 1041764400
> end(foo)
[1] 1101639600
> last <- end(foo)
> class(last) <- "POSIXct"
> last
[1] "2004-11-29 New Zealand Daylight Time"
(2) depends on as.numeric(POSIXct.object) giving a sensible, 
single-digit answer.  This is not guaranteed.  It works today, but 
nobody promised this approach would work tomorrow.

Hope that helps.

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] lm.fit glitch

2003-10-28 Thread Shadi Barakat
Hello all,

I've seen this error posted before, but no hints to its origin. Does
anyone have any ideas?


Error in lm.fit(x, y, offset = offset, ...) :
0 (non-NA) cases 


Thx

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] ts vs. POSIX

2003-10-28 Thread Achim Zeileis
On Wednesday 29 October 2003 00:28, Erin Hodgess wrote:

> OK.
>
> What if I have a time series which is collected every Monday,
> please?
>
> What is the proper way to use the start option within the ts command
> in order to indicate that this is Monday data, please?

In ts you can only generate regularly spaced time series. As the help 
page states, "start" is either a single number or a vector of two 
integers (typically years and months, or something similar).

So to indicate that you have Monday data, you have to store this 
meta-information in the name of the ts object or in the man page, for 
example.

An alternative is to use irregularly spaced time series as provided by 
irts() in package tseries or its() in package its. Then you always 
store a full vector of POSIXct times at which the observations were 
made. Both approaches have certain advantages and 
disadvantages...depends on what you want to do with the data.
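A base-R sketch of the second approach (keeping a full vector of POSIXct times alongside the data, as irts/its do internally; the dates here are made up):

```r
# ten consecutive Mondays starting from the first Monday of 2003
dates  <- seq(as.POSIXct("2003-01-06", tz = "GMT"), by = "7 days",
              length.out = 10)
values <- rnorm(10)
# the spacing is exactly one week throughout
all(as.numeric(diff(dates), units = "days") == 7)   # TRUE
```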

hth,
Z

> Thanks again!
>
> Sincerely,
> Erin
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] lm.fit glitch

2003-10-28 Thread Spencer Graves
I can replicate it as follows: 

> (DF <- data.frame(x=c(1:2, NA, NA), y=c(NA, NA, 3:4)))
  x  y
1  1 NA
2  2 NA
3 NA  3
4 NA  4
> lm(y~x, DF, na.action=na.omit)
Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
   0 (non-NA) cases
does this help?  spencer graves

Shadi Barakat wrote:

Hello all,

I've seen this error posted before, but no hints to its origin. Does
anyone have any ideas?

Error in lm.fit(x, y, offset = offset, ...) :
   0 (non-NA) cases 


Thx

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Confidence ellipse for correlation

2003-10-28 Thread John Fox
Dear Tanya,

I believe that the data.ellipse function in the car package will do what 
you want (but note that the probability contours are for a bivariate-normal 
distribution, and hence won't necessarily enclose approximately the 
specified proportion of the data).

I hope that this helps,
 John
At 01:57 PM 10/28/2003 -0500, you wrote:
Hello,

SAS' point and click interface has the option of producing a scatterplot with a
superimposed confidence ellipse for the correlation coefficient. Since I
generally like R so much better, I would like to reproduce this in R. I've
been playing with the ellipse package. In order to have the points and the
ellipse on the same graph I've done the following.
(Load ellipse package...)
> data(Puromycin)
> attach(Puromycin)
> my<-mean(rate)
> mx<-mean(conc)
> sdy<-sd(rate)
> sdx<-sd(conc)
> r<-cor(conc,rate)
> plot(ellipse(r,scale=c(sdx,sdy),centre=c(mx,my)),type='l')
> points(conc,rate)
1) Is my use of 'scale' and 'centre' theoretically correct?
2) Is there a more efficient way to get the 5 parameters? (I guess I could
write a little function, but has it already been done?)
The non-linear relationship between these variables brings up another point:
Is there a way to plot a contour (empirical?) containing, say, 95% of the
values.
Thanks for your time!

Tanya
-
John Fox
Department of Sociology
McMaster University
Hamilton, Ontario, Canada L8S 4M4
email: [EMAIL PROTECTED]
phone: 905-525-9140x23604
web: www.socsci.mcmaster.ca/jfox
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] rank function

2003-10-28 Thread "김신"




Hello! I have a question about a rank function that I'm working on now. 
Even though my English is not good, I hope you understand what I'm asking for. 
This is the program that I made
(it must not use the rank function from R):
##
data<-sample(c(1:100),10)
rank.data <- rep(0,length(data)) 
for(i in 1:length(data)){ 
  for(j in 1:length(data)){ 
if(data[i] test.data 
 [1] 97 25 90 76 85 32 79  8 39 35
> rank.data 
 [1]  3  9  7  4  6 95  1  1  1  5
> rank(test.data) 
 [1] 10  2  9  6  8  3  7  1  5  4
> 
When I copied the source into R and ran it, an error occurred instead of the 
result that I wanted. 
How can I get the correct results?  And how can I correct the second source?
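The comparison operators in the posted loop were lost in transmission (the archive ate the `<` signs), so here is a working reconstruction of the apparent intent — counting, for each element, how many values are not larger — assuming distinct values (a sketch, not the poster's exact code):

```r
# hand-rolled rank without using R's rank(); no ties handling
my.rank <- function(d) {
  r <- rep(0, length(d))
  for (i in seq_along(d)) {
    for (j in seq_along(d)) {
      if (d[j] <= d[i]) r[i] <- r[i] + 1   # count values not larger than d[i]
    }
  }
  r
}
test.data <- c(97, 25, 90, 76, 85, 32, 79, 8, 39, 35)
my.rank(test.data)   # 10 2 9 6 8 3 7 1 5 4, agreeing with rank(test.data)
```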
[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help