Re: [R] question on FARIMA innovations

2008-09-04 Thread Tony Cricenti
Hi Ferebee,
I am also working on a similar problem. Have you solved it? I am trying to
write a function to determine the innovations.
However, I think that the Farma package may do this. I haven't succeeded in
getting Farma to work; it requires Ox.
Can you let me know if you have made any progress?
 
Tony Cricenti
 
Regards
A.L. Cricenti,   
Academic Leader Telecommunications 
Faculty of Information and Communication Technologies
Swinburne University of Technology, 
PO Box 218, Hawthorn, Vic 3122, Australia. 
email: [EMAIL PROTECTED] 
Phone: (voice) +61-3-9214 5506   (fax) +61-3-9819 6443

-
Swinburne University of Technology
CRICOS Provider Code: 00111D

NOTICE
This e-mail and any attachments are confidential and int...{{dropped:18}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] reading and parsing gzipped files

2008-09-04 Thread Dmitriy Skvortsov
Hi all, I have large compressed, tab-delimited text files
and I am trying to write an efficient function to read them.
I am using gzfile() and readLines():

zz <- gzfile("exampl.txt.gz", "r")  # compressed file
system.time(temp1<-readLines(zz ))
close(zz)

This works fast and creates a vector of strings.
The problem is parsing the result: if I use strsplit, it takes longer than
decompressing the file manually, reading it with scan, and deleting it.

Can anybody recommend an efficient way of parsing a large vector (~200,000
entries)?
Dmitriy
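A minimal alternative sketch (assuming the file really is a regular tab-delimited table; the file name and options are placeholders): read.delim() accepts a gzfile() connection directly, so decompression and parsing happen in one pass and the strsplit step disappears.

zz  <- gzfile("exampl.txt.gz")          # connection; read.delim() opens and closes it
dat <- read.delim(zz, header = TRUE)    # tab-separated parse while reading
# supplying colClasses can speed this up further if the column types are known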

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Derived classes

2008-09-04 Thread Gabor Grothendieck
Try this:

# constructor for Square
makeSquare <- function(name, side)
  structure(list(name = name, side = side), class = c("Square", "Shape"))

# generic
getName <- function(shape) UseMethod("getName")
# Shape method
getName.Shape <- function(shape) shape$name

# generic
getArea <- function(shape) UseMethod("getArea")
# Square method
getArea.Square <- function(shape) shape$side ^ 2

sq <- makeSquare("a", 10)
getName(sq)  # inherits getName method from Shape class
getArea(sq) # uses getArea method for Square class
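To answer the runtime-type part of the question directly, base R exposes the class vector and an inheritance test (a small sketch continuing the example above):

class(sq)                  # "Square" "Shape": the class vector of the object
inherits(sq, "Shape")      # TRUE: sq derives from Shape
sapply(list(sq, makeSquare("b", 2)), function(s) class(s)[1])   # works over a list of shapes too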

On Thu, Sep 4, 2008 at 11:56 PM,  <[EMAIL PROTECTED]> wrote:
> Coming from a C++ and C# background I would like to know how inheritance 
> works with 'R'. The classical example is I can define an abstract class 
> 'Shape' and have an array of 'Shape's but each instance could be a Circle, 
> Square, Triangle, etc. because they all derive from 'Shape'. At runtime if I 
> wish I can tell the type of the 'Shape' using various language specific 
> operators. In 'R' it seems that each 'function' in general returns an object 
> that is specific to that function. For example 'stl' returns an 'stl' object, 
> 'lm' returns an 'lm' object, 'kmeans' returns a 'kmeans' object, etc. If I 
> have an array of these objects, what can I use to tell the type of an object in 
> 'R' at runtime? Any examples would be helpful.
>
> Thank you.
>
> Kevin
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to write escape sequences into a file

2008-09-04 Thread Kurapati, Ravichandra (Ravichandra)
 

Hi,

   Str1 <- "hai "
   Str2 <- "hru"

I want to write these two strings to a file, separated by a newline.

How can I get this?

Thanks
K.Ravichandra
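A minimal sketch (the output file name is a placeholder): writeLines() writes each element of a character vector on its own line, and cat() with sep = "\n" does much the same.

writeLines(c(Str1, Str2), "out.txt")            # one string per line
# or:
cat(Str1, Str2, file = "out.txt", sep = "\n")   # "\n" between the two strings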

 

 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] algorithm to create unique identifiers

2008-09-04 Thread Christos Hatzis
It might be overkill for what you need, but take a look at package Ruuid.

-Christos 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Ralph S.
> Sent: Thursday, September 04, 2008 11:44 PM
> To: r-help@r-project.org
> Subject: [R] algorithm to create unique identifiers
> 
> 
> Hi all,
> 
> I am trying to create a unique identifier for each row, 
> combining numbers from three columns.
> 
> Do you know if there is a general formula to do this (or some 
> manual where I can read about this)?
> 
> I figure I can use the numeric entries of the columns as 
> "coordinates" and multiply them with different coefficients 
> (different magnitudes) to get the unique ID - but it would be 
> nice to read about such algorithms in general.
> 
> Any links/input would be great -
> 
> Ralph
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Trouble with R CMD check: I can't seem to get dependencies right (maybe I'm using R_LIBS incorrectly?)

2008-09-04 Thread Dylan Arena
Hi there,


I'm in the following directory:

~/Documents/Rstuff/diceFiles/dice_1.1

The directory "dice" is in this directory, with all the usual build
files (DESCRIPTION, NAMESPACE, etc).  I'm trying to run the following
command:

R CMD check dice

(where "dice" is the name of the package I'm checking), and I get the following:

* checking package dependencies ... ERROR
Packages required but not available:
  gtools

My DESCRIPTION file does (correctly) list gtools as a dependency, so
I'm happy this check is ensuring its availability.  What I can't
figure out is how to get the check command to see that gtools is in
fact installed.  I've created a file called .Renviron in my home
directory that has the following line in it:

R_LIBS=~/Documents/Rstuff/library

And in the directory specified by R_LIBS above is the gtools package.
(I've also got a file called .Rprofile in my home directory, with the
line .libPaths("~/Documents/Rstuff/library")
in it.)  My reading of the R-intro manual is that setting R_LIBS in
this way should allow the R CMD check command to search in the
directory specified by R_LIBS when looking for gtools.  I'm pretty
sure at this point, though, that my understanding is incorrect.

At this point I'm out of my depth.  I'm writing because I'm hopeful
that to someone on this list my problem is trivially solvable.  My
basic question is: what must I do to get R CMD check to find the
gtools package?


Please let me know any ideas you might have for me,
Dylan
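One quick diagnostic (a sketch, not a fix in itself): compare what R actually sees for the environment variable with the library search path, in an interactive session started the same way R CMD check is run.

Sys.getenv("R_LIBS")    # the raw environment variable, if any
.libPaths()             # the library trees this R session will search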

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] library/function that estimates parameters of well known distributions from empirical data?

2008-09-04 Thread Ted Byers

I found this a few months ago, but for the life of me I can't remember what
the function or package was, and I have had no luck finding it this week.

I have found, again, the functions for working with distributions like
Cauchy, F, normal, &c., and ks.test, but I have not found the functions for
estimating the distribution parameters given a vector of values.

What I need to do is estimate the distribution parameters for each candidate
distribution, and then test to see which gives the best fit to the data.

I want to examine the question: given this dataset (which may have thousands
of records), does the normal or Cauchy distribution fit the data best, and
with what parameters?  It will not be known a priori whether or not the
most appropriate distribution is non-central, though we do know that often
(not always) values of medium size in absolute value are more often positive
than negative and that very large values are more often negative than
positive.

Could someone please give me a gentle reminder of the package and
function(s) I ought to be examining?

Thanks

Ted
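For reference, the workflow described can be sketched with MASS::fitdistr() plus ks.test() (a sketch only; this may not be the function the poster is trying to recall, and a KS test against parameters estimated from the same data gives an optimistic p-value):

library(MASS)
x   <- rnorm(1000, mean = 1, sd = 2)            # stand-in data
fit <- fitdistr(x, "normal")                    # maximum-likelihood estimates of mean and sd
ks.test(x, "pnorm", fit$estimate["mean"], fit$estimate["sd"])
fitdistr(x, "cauchy")                           # same idea for a Cauchy candidate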
-- 
View this message in context: 
http://www.nabble.com/library-function-that-estimates-parameters-of-well-known-distributions-from-empirical-data--tp19323700p19323700.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Maximum likelihood estimation

2008-09-04 Thread toh

Yes, I'm trying to optimize the parameters a, b, p and lambda, where a > 0,
b > 0 and 0 < p < 1. I attached the error message that I got when I ran mle.


> t <- c(1:90)
> y <-
> c(5,10,15,20,26,34,36,43,47,49,80,84,108,157,171,183,191,200,204,211,217,226,230,
+
234,236,240,243,252,254,259,263,264,268,271,277,290,309,324,331,346,367,375,381,
+
401,411,414,417,425,430,431,433,435,437,444,446,446,448,451,453,460,463,463,464,
+
464,465,465,465,466,467,467,467,468,469,469,469,469,470,472,472,473,473,473,473,
+ 473,473,473,475,475,475,475)
> 
> library(stats4)
> fr <- function(a, b, p, lambda){
+ l <- 0.5*(lambda + b*p + (1-p)*(lambda-b))
+ l^2 > lambda*b*p
+ delta <- sqrt(abs(l^2 - b*p*lambda))
+ mt <- a/p*(1-exp(-l*t)*cosh(delta*t)-(l-b*p)*exp(-t)*sinh(delta*t)/delta)
+ logl <- sum(diff(y)*log(diff(mt))-diff(mt)-lfactorial(diff(y)))
+ return(-logl)
+ }
> 
> mle(start=list(a=12,b=0.01,p=0.5,lambda=0.01),fr, method="L-BFGS-B", 
+ lower = c(0.002, 0.002, 0.002, 0.002), upper = c(Inf, Inf, 0.999,
Inf),control=list(fnscale=-1))
Error in optim(start, f, method = method, hessian = TRUE, ...) : 
  non-finite finite-difference value [3]
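One common workaround for this particular optim error (a sketch, assuming the objective simply goes non-finite for some trial parameter values, e.g. when diff(mt) is not positive): wrap the negative log-likelihood so it returns a large finite penalty instead of NaN/Inf. Note that mle() already minimizes the supplied function, so no fnscale is needed.

frSafe <- function(a, b, p, lambda) {
  nll <- fr(a, b, p, lambda)              # negative log-likelihood defined above
  if (!is.finite(nll)) nll <- 1e10        # finite penalty keeps the finite-difference gradient defined
  nll
}
mle(frSafe, start = list(a = 12, b = 0.01, p = 0.5, lambda = 0.01),
    method = "L-BFGS-B",
    lower = c(0.002, 0.002, 0.002, 0.002), upper = c(Inf, Inf, 0.999, Inf))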












Prof Brian Ripley wrote:
> 
> From ?optim
> 
>fn: A function to be minimized (or maximized), with first
>argument the vector of parameters over which minimization is
>to take place.  It should return a scalar result.
> 
> I think you intended to optimize over c(a,b,p, lambda), so you need to 
> specify them as a single vector.
> 
> You may be making life unnecessarily hard for yourself: see function mle() 
> in package stats4.
> 
> Showing your code without a verbal description of what you are doing nor 
> the error message you got is less helpful than we need.
> 
> On Wed, 3 Sep 2008, toh wrote:
> 
>>
>> Hi R-experts,
>> I'm new to R in mle. I tried to do the following but just couldn't get it
>> right. Hope someone can point out the mistakes. thanks a lot.
>>
>> t <- c(1:90)
>> y <-
>> c(5,10,15,20,26,34,36,43,47,49,80,84,108,157,171,183,191,200,204,211,217,226,230,
>>
>> 234,236,240,243,252,254,259,263,264,268,271,277,290,309,324,331,346,367,375,381,
>>
>> 401,411,414,417,425,430,431,433,435,437,444,446,446,448,451,453,460,463,463,464,
>>
>> 464,465,465,465,466,467,467,467,468,469,469,469,469,470,472,472,473,473,473,473,
>>  473,473,473,475,475,475,475)
>> fr <- function(a, b, p, lambda){
>> l <- 0.5*(lambda + b*p + (1-p)*(lambda-b))
>> l^2 > lambda*b*p
>> delta <- sqrt(abs(l^2 - b*p*lambda))
>> mt <- a/p*(1-exp(-l*t)*cosh(delta*t)-(l-b*p)*exp(-t)*sinh(delta*t)/delta)
>> logl <- sum(diff(y)*log(diff(mt))-diff(mt)-lfactorial(diff(y)))
>> return(-logl)
>> }
>> optim(c(15,0.01,0.5,0.01),fr, method="L-BFGS-B",
>> lower = c(0.002, 0.002, 0.002, 0.002), upper = c(Inf, Inf, 0.999,
>> Inf),control=list(fnscale=-1))
>>
>> -- 
>> View this message in context:
>> http://www.nabble.com/Maximum-likelihood-estimation-tp19304249p19304249.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
> 
> -- 
> Brian D. Ripley,  [EMAIL PROTECTED]
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UKFax:  +44 1865 272595
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Maximum-likelihood-estimation-tp19304249p19323140.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] smoothing with the Gaussian kernel

2008-09-04 Thread dxc13

useR's,

Does anyone know what function or package allows one to specify use of the
Gaussian kernel to smooth a data set, or is this just the default for most
functions?

Thanks,
dxc13
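For reference, two base-R calls where the Gaussian kernel is requested explicitly rather than assumed (a sketch with made-up data):

x  <- sort(runif(200, 0, 10))
y  <- sin(x) + rnorm(200, sd = 0.3)
sm <- ksmooth(x, y, kernel = "normal", bandwidth = 1)   # kernel regression smoother, Gaussian kernel
d  <- density(y, kernel = "gaussian")                   # kernel density estimate, Gaussian kernel
plot(x, y); lines(sm, col = 2)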
-- 
View this message in context: 
http://www.nabble.com/smoothing-with-the-Gaussian-kernel-tp19323294p19323294.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] algorithm to create unique identifiers

2008-09-04 Thread Henrik Bengtsson
On Thu, Sep 4, 2008 at 8:44 PM, Ralph S. <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> I am trying to create a unique identifier for each row, combining numbers 
> from three columns.
>
> Do you know if there is a general formula to do this (or some manual where I 
> can read about this)?
>
> I figure I can use the numeric entries of the columns as "coordinates" and 
> multiply them with different coefficients (different magnitudes) to get the 
> unique ID - but it would be nice to read about such algorithms in general.

What are your numbers?  Are they in a fixed range?  Integers or reals?
If fixed-range integers, it is easy.  Think regular numerical
representation, e.g. binary, octal, decimal and hexadecimal.

For a more generic solution that works with any data types, see e.g.
MD5 [http://en.wikipedia.org/wiki/MD5].  It is not guaranteed to
generate unique codes, but it is extremely rare that two different
inputs give the same MD5 code.  MD5 (and others) are implemented in
the 'digest' package, e.g.

> library(digest)
> digest(list(a=1, b=list(1:10, c=letters)))
[1] "73e0ae066a97bfff7f79d41c65b55fde"

My $.02

/Henrik


>
> Any links/input would be great -
>
> Ralph
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Derived classes

2008-09-04 Thread rkevinburton
Coming from a C++ and C# background I would like to know how inheritance works 
with 'R'. The classical example is I can define an abstract class 'Shape' and 
have an array of 'Shape's but each instance could be a Circle, Square, 
Triangle, etc. because they all derive from 'Shape'. At runtime if I wish I can 
tell the type of the 'Shape' using various language specific operators. In 'R' 
it seems that each 'function' in general returns an object that is specific to 
that function. For example 'stl' returns an 'stl' object, 'lm' returns an 'lm' 
object, 'kmeans' returns a 'kmeans' object, etc. If I have an array of these 
objects, what can I use to tell the type of an object in 'R' at runtime? Any 
examples would be helpful.

Thank you.

Kevin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] algorithm to create unique identifiers

2008-09-04 Thread Ralph S.

Hi all,

I am trying to create a unique identifier for each row, combining numbers from 
three columns.

Do you know if there is a general formula to do this (or some manual where I 
can read about this)?

I figure I can use the numeric entries of the columns as "coordinates" and 
multiply them with different coefficients (different magnitudes) to get the 
unique ID - but it would be nice to read about such algorithms in general.

Any links/input would be great -

Ralph
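If the goal is only a key that is unique whenever the three columns differ, a low-tech alternative to weighted sums is to concatenate the values or code the combinations (a sketch; the data frame and column names are placeholders):

df <- data.frame(a = c(1, 1, 2), b = c(3, 4, 4), c = c(5, 5, 6))
df$id  <- paste(df$a, df$b, df$c, sep = "_")                     # e.g. "1_3_5"; unique iff the triple is
df$id2 <- as.integer(interaction(df$a, df$b, df$c, drop = TRUE)) # small integer code per combination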


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help for color parameter

2008-09-04 Thread Zheng Lu


Dear all: 

I attached the dataset with this email and post codes as  below.  

My question is about the 'col=temp.col' used for the lines and plotting
characters (pch) in my code. I have 4 IDs and 10 DIDs, and each ID includes
different DIDs. For example, the first ID has 3 DIDs, so the colors in the
first plot are the first three colors (black, red, green); but in the second
plot the color changes to pink, which is the 6th color, whereas I expected
blue, the 4th one, and the same problem occurs in the rest of the plots. I am
confused about the mechanism R uses to pick colors given my code. How can I
specify the colors I want in this situation? Basically, I would like 10
different colors of line and pch (line and pch in the same color), one for
each combination of ID and DID; I initially expected the first 10 colors in R
to be used for this plot.

I appreciate your instructions.

dogpbmc <- read.table('E:\\MPI\\4924\\PBMC-PKPD\\mpbmc.tab', skip=1, header=T)
dogpbmc <- dogpbmc[dogpbmc$MDV != 1, ]
par(mfrow=c(2,2), mar=c(5,5,4,4))
# E vs t for simultaneous modeling
temp.col <- 0
for (i in unique(dogpbmc$ID)) {
  j <- dogpbmc$ID == i
  plot(dogpbmc$TIME[j], dogpbmc$DV[j], type='n',
       xlim=c(0, max(dogpbmc$TIME[j & dogpbmc$CMT==4]) + 1), ylim=c(0,250),
       xlab='Time (hr)', ylab='PD', cex=0.9)
  for (k in unique(dogpbmc$DID)) {
    m <- dogpbmc$DID == k
    temp.col <- temp.col + 1
    # points(dogpbmc$TIME[j&dogpbmc$CMT==4, dogpbmc$PRED[j&dogpbmc$CMT==4],
    #        type='l', col=2, lwd=3, lty=1, cex=0.9)
    points(dogpbmc$TIME[j & m], dogpbmc$DV[j & m], type='p', col=temp.col,
           pch=16, lwd=3, lty=1, cex=0.9)
    points(dogpbmc$TIME[j & m], dogpbmc$IPRE[j & m], type='l',
           col=temp.col, lwd=3, lty=1, cex=0.9)
    title(paste('ID #', i, sep=''))
  }
}
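The colour index drifts because temp.col keeps counting up across panels (and across DIDs a given ID does not actually have). A small sketch of one way to pin each DID to a fixed colour, reusing the variable names above (not tested against the attached data):

did.levels <- unique(dogpbmc$DID)      # the 10 DIDs, in a fixed order
this.col   <- match(k, did.levels)     # inside the inner loop: DID k -> the same colour 1..10 in every panel
# then use col = this.col in both points() calls instead of col = temp.col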

Zheng 
TABLE NO.  1
  ID    DID    TIME    Y       EVID    MDV
  CMT   IPRE   DV      PRED    RES     WRES
  5.E+00  1.E+00  0.E+00  1.1166E+02  1.E+00  1.E+00  
4.E+00  0.E+00  0.E+00  1.1166E+02  0.E+00  0.E+00
  5.E+00  1.E+00  0.E+00  1.1166E+02  1.E+00  1.E+00  
1.E+00  0.E+00  0.E+00  1.1166E+02  0.E+00  0.E+00
  5.E+00  1.E+00  0.E+00  1.1166E+02  0.E+00  0.E+00  
4.E+00  1.1166E+02  7.8850E+01  1.1166E+02 -3.2806E+01 -6.7720E-01
  5.E+00  1.E+00  0.E+00  1.1166E+02  0.E+00  0.E+00  
4.E+00  1.1166E+02  1.0840E+02  1.1166E+02 -3.2556E+00 -6.7205E-02
  5.E+00  1.E+00  0.E+00  1.1166E+02  0.E+00  0.E+00  
4.E+00  1.1166E+02  1.1273E+02  1.1166E+02  1.0744E+00  2.2178E-02
  5.E+00  1.E+00  5.E-01  1.9009E+01  0.E+00  0.E+00  
4.E+00  1.9009E+01  2.4130E+01  3.4494E+01 -1.0364E+01  6.0315E-02
  5.E+00  1.E+00  5.E-01  1.9009E+01  0.E+00  0.E+00  
4.E+00  1.9009E+01  2.8860E+01  3.4494E+01 -5.6340E+00  3.7637E-01
  5.E+00  1.E+00  5.E-01  1.9009E+01  0.E+00  0.E+00  
4.E+00  1.9009E+01  2.9340E+01  3.4494E+01 -5.1540E+00  4.0845E-01
  5.E+00  1.E+00  1.E+00  3.0556E+01  0.E+00  0.E+00  
4.E+00  3.0556E+01  2.0630E+01  4.2776E+01 -2.2146E+01 -8.5963E-01
  5.E+00  1.E+00  1.E+00  3.0556E+01  0.E+00  0.E+00  
4.E+00  3.0556E+01  2.0690E+01  4.2776E+01 -2.2086E+01 -8.5639E-01
  5.E+00  1.E+00  2.E+00  6.5281E+01  0.E+00  0.E+00  
4.E+00  6.5281E+01  1.0266E+02  6.3106E+01  3.9554E+01  1.2369E+00
  5.E+00  1.E+00  2.E+00  6.5281E+01  0.E+00  0.E+00  
4.E+00  6.5281E+01  7.5430E+01  6.3106E+01  1.2324E+01  2.4235E-01
  5.E+00  1.E+00  2.E+00  6.5281E+01  0.E+00  0.E+00  
4.E+00  6.5281E+01  2.1432E+02  6.3106E+01  1.5121E+02  5.3152E+00
  5.E+00  1.E+00  4.E+00  1.0611E+02  0.E+00  0.E+00  
4.E+00  1.0611E+02  1.2050E+02  9.4971E+01  2.5529E+01  2.9548E-01
  5.E+00  1.E+00  4.E+00  1.0611E+02  0.E+00  0.E+00  
4.E+00  1.0611E+02  1.3167E+02  9.4971E+01  3.6699E+01  5.6657E-01
  5.E+00  1.E+00  8.E+00  1.1146E+02  0.E+00  0.E+00  
4.E+00  1.1146E+02  5.1100E+01  1.1057E+02 -5.9471E+01 -1.2881E+00
  5.E+00  1.E+00  8.E+00  1.1146E+02  0.E+00  0.E+00  
4.E+00  1.1146E+02  1.2913E+02  1.1057E+02  1.8559E+01  3.3847E-01
  5.E+00  1.E+00  8.E+00  1.1146E+02  0.E+00  0.E+00  
4.E+00  1.1146E+02  1.2341E+02  1.1057E+02  1.2839E+01  2.1923E-01
  5.E+00  1.E+00  2.4000E+01  1.1161E+02  0.E+00  0.E+00  
4.E+00  1.1161E+02  1.4031E+02  1.1163E+02  2.8676E+01  5.9197E-01
  5.E+00  1.E+00  2.4000E+01  1.1161E+02  0.E+00  0.E+00  
4.E+00  1.1161E+02  8.8570E+01  1.1163E+02 -2.3064E+01 -4.7629E-01
  5.E+00  1.E+00  2.4000E+01  1.1161E+02  0.E+00  0.E+00  
4.E+00  1.1161E+02  1.3479E+02  1.1163E+02  2.3156E+01  4.7800E-01
  5.E+00  2.E+00  0.E+00  0.E+00  3.E+00  1.E+00  
1.E+00  0.E+00  0.E

Re: [R] isoMDS and dist

2008-09-04 Thread stephen sefick
I don't know for sure about isoMDS, but package vegan is a good place
to start, with a very good vignette.

Stephen

On Thu, Sep 4, 2008 at 5:02 PM, Carolyn A. Pucko <[EMAIL PROTECTED]> wrote:
>
> I am starting with a matrix in which rows are vegetation plots and columns
> are various characteristics including ID# and elevation. I removed elevation
> and ID columns to avoid having those characteristics influence the distances
> between points which I calculated using the "dist" command. The resulting
> distance file was then used in isoMDS.
>
> What I want to know is whether I can reattach the ID and elevation onto the
> point location information given from isoMDS? or is it possible that the
> order given in the isoMDS output is not the same as the order of the
> original data used to calculate distances?
>
> Thanks,
> Carrie Pucko
> University of Vermont
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Stephen Sefick
Research Scientist
Southeastern Natural Sciences Academy

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to deal with NaN in "boot" object

2008-09-04 Thread Jinsong Zhao
Hi there,

I use boot() to do a bootstrap simulation of my statistic and get a boot
object named "obj". For certain reasons, there are some "NaN" values in obj$t.
Now, I hope to get confidence interval using boot.ci(), it give the
following error:

Error in if (const(t, min(1e-08, mean(t)/1e+06))) { :
  missing value where TRUE/FALSE needed

Right now, the only fix I can see is to edit obj by hand.

Any suggestions? Thanks in advance!

Best wishes,
Jinsong
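One way to avoid hand-editing (a sketch, under the assumption that the NaN replicates can simply be dropped rather than diagnosed): remove the incomplete rows of obj$t and keep obj$R consistent before calling boot.ci(); some interval types may still want the original statistic in obj$t0.

obj2   <- obj
keep   <- complete.cases(obj2$t)          # bootstrap replicates with no NaN
obj2$t <- obj2$t[keep, , drop = FALSE]
obj2$R <- sum(keep)                       # boot.ci() uses R, so keep it in step with t
boot.ci(obj2, type = c("norm", "perc", "basic"))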

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] (with subject)

2008-09-04 Thread Gabor Grothendieck
I assume the problem is that you want the axis to have all 12
months but your data is much shorter.  Try this:

mos <- seq(as.Date("2008-01-01"), length = 12, by = "month")
plot(range(mos), range(rawData$y), type = "n", xaxt = "n")
lines(rawData$Date, rawData$y)
axis(1, mos, month.abb)


On Thu, Sep 4, 2008 at 3:15 PM, Dr Eberhard W Lisse <[EMAIL PROTECTED]> wrote:
> Dirk, Gabor,
>
> thanks for your advice, I have now tried to study these
> vignettes, but I must say, as an aging Gynecologist,
> I am facing an enormous learning curve :-)-O.
>
> This works nicely:
>
> rawData <- data.frame(x = c("2008-03-01", "2008-03-21", "2008-03-23",
> "2008-04-08", "2008-04-20", "2008-05-10", "2008-06-20"), y = c(4, 6, 8, 5,
> 7, 2 ,1))
> rawData[,"x"] <- as.Date(rawData[,"x"])
> with(rawData, plot(x, y, main="Some text", type='l'))
>
> How do I make the X-Axis start on 2008-01-01, end on 2008-12-31,
> with tick marks for every month (ie month.abb[1:12] or something
> similar)?
>
> A pointer is enough, weekend is coming up and I am off :-)-O
>
> I also need to add 2 lines statements, but I reckon I can figure that out
> by myself :-)-O
>
>
> greetings, el
> --
> Dr. Eberhard W. Lisse  \/ Obstetrician & Gynaecologist (Saar)
> [EMAIL PROTECTED] el108-ARIN /   *   |   Telephone: +264 81 124 6733 (cell)
> PO Box 8421 \ / Please send DNS/NA-NiC related e-mail
> Bachbrecht, Namibia ;/ to [EMAIL PROTECTED]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] modeling interval data, a.k.a. irregular timeseries

2008-09-04 Thread Rolf Turner


It seems to me that you have a sequence (``series'') of random times,
rather than a sequence of values of a random variable observed at
irregularly spaced times.  Hence I would say that point process
modelling, rather than time series modelling, would be more appropriate.

You could consider yourself to have two related point processes --- the
process of starting times and the process of stopping times.  Or you
could consider the process of starting times only as a marked point
process, with the marks being the interval lengths (i.e. tj_2 - tj_1).

How you would go about analyzing such data, I don't know.  In the  
point process context
``regularity'' would amount to having a homogeneous Poisson process  
(with the marks,
i.e. the interval lengths, being independent of the [starting]  
points).  There may be
tests for this sort of null hypothesis, against an unspecified  
alternative, out there.
I would start by having a look at the book (2 volumes) by Daley and  
Vere-Jones (second ed.).


One way to proceed might be to fit some sort of conditional intensity
function (conditional on the past, including the past marks) and test
this model against the null model by a likelihood ratio test.  The
problem, it seems to me, is to specify an appropriate and sufficiently
general alternative conditional intensity function.
The fitting could then be
done using the Berman-Turner device (see ``Approximating point  
process likelihoods using GLIM'',
M. Berman and T. R. Turner, Applied Statistics vol. 41, 1992, pp. 31  
-- 38.  See also

the paper by Ogata cited therein.)

HTH.

cheers,

Rolf Turner

On 4/09/2008, at 4:44 PM, Alexy Khrabrov wrote:


Greetings -- I've got some sensor data of the form

t1_1, t1_2
t2_1, t2_2
...
tN_1,tN_2

-- time intervals measuring starts and stops of sensor activity.   
I'd like to see whether there's any regularity in it.  Seems  
natural to consider these data timeseries -- except most of the  
timeseries packages and models assume regular ones, with a fixed  
frequency. 
I wonder what's a good way to apply existing regular timeseries  
packages to these data, and perhaps try some others?  I like David  
Stoffer's book a lot, yet he uses R's own ts methods (with some  
extras).  I also like the zoo package, which allows for irregular  
timeseries, yet I'm not sure how to apply the "usual" models to zoo  
objects -- even though zoo strives to be compatible with ts...  Is  
zoo directly usable for ts-like time domain and spectral analysis  
as per Stoffer?


Another way I was pondering is to map the above to an artificial
index 1:n and consider it a multivariate timeseries.  Is this something
done in irregular timeseries analysis?


Cheers,
Alexy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting- 
guide.html

and provide commented, minimal, self-contained, reproducible code.



##
Attention:\ This e-mail message is privileged and confid...{{dropped:9}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] using complete.cases() with nested factors

2008-09-04 Thread Gabor Grothendieck
See ?subset and ?ave and try this:

subset(DF, ave(year, year, FUN = length) == 12)


On Thu, Sep 4, 2008 at 5:04 PM, Andrew Barr <[EMAIL PROTECTED]> wrote:
> Hello,
>
> This maybe a newbie question.  I have a dataframe that looks like the sample
> at the bottom of the email.  I have monthly precipitation data from several
> sites over several years.  For each site, I need to extract years that have
> a complete series of 12 monthly precipitation values, while excluding that
> year for sites with incomplete data.  I can't figure out how to do this
> gracefully (i.e. without a silly for loop).  Any help will be appreciate,
> thanks!
>
> Andrew
>
> SiteIDyearmonthprecip(mm)
> 6700901941jan2998
> 6700901941feb1299
> 6700901941mar1007
> 6700901941apr354
> 6700901941may88
> 6700901941jun156
> 6700901941jul8
> 6700901941aug4
> 6700901941sep8
> 6700901941oct58
> 6700901941nov397
> 6700901941dec248
> 6700901942janNA
> 6700901942feb380
> 6700901942mar797
> 6700901942apr142
> 6700901942may43
> 6700901942jun14
> 6700901942jul70
> 6700901942aug51
> 6700901942sep0
> 6700901942oct10
> 6700901942nov235
> 6700901942dec405
>
> --
> Andrew
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] using complete.cases() with nested factors

2008-09-04 Thread hadley wickham
On Thu, Sep 4, 2008 at 4:19 PM, Ken Knoblauch <[EMAIL PROTECTED]> wrote:
> Andrew Barr  gmail.com> writes:
>> This maybe a newbie question.  I have a dataframe that looks like the sample
>> at the bottom of the email.  I have monthly precipitation data from several
>> sites over several years.  For each site, I need to extract years that have
>> a complete series of 12 monthly precipitation values, while excluding that
>> year for sites with incomplete data.  I can't figure out how to do this
>> gracefully (i.e. without a silly for loop).  Any help will be appreciate,
>> thanks!
>> SiteIDyearmonthprecip(mm)
>> 6700901941jan2998
>> 6700901941feb1299
>> 6700901941mar1007
>> 6700901941apr354
>> 6700901941may88
>> 6700901941jun156
>> 6700901941jul8
>> 6700901941aug4
>> 6700901941sep8
>> 6700901941oct58
>> 6700901941nov397
>> 6700901941dec248
>> 6700901942janNA
>> 6700901942feb380
>> 6700901942mar797
>> 6700901942apr142
>> 6700901942may43
>> 6700901942jun14
>> 6700901942jul70
>> 6700901942aug51
>> 6700901942sep0
>> 6700901942oct10
>> 6700901942nov235
>> 6700901942dec405
>>
> There are likely more elegant solutions but this seems to work.
> If the data frame is in a variable named dd
>
> lapply(unique(dd$year), function(x) {s <- subset(dd, year == x)
>  if (nrow(s) == 12) s})

I think this is slightly more elegant, and follows the
split-apply-combine strategy:

years <- split(dd, dd$year)
full_years <- Filter(function(df) nrow(df) == 12, years)
do.call("rbind", full_years)

Hadley

-- 
http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Building a time series.

2008-09-04 Thread Achim Zeileis

On Thu, 4 Sep 2008, [EMAIL PROTECTED] wrote:


I have a need to build a time series and there are a couple of aspects about 
the time series object that are confusing me. First it seems that ts.union is 
not doing what I would expect. For example:

x0 <- rep(0,10)
x1 <- rep(1,10)

xt0 <- ts(x0, frequency=10)
xt1 <- ts(x1, frequency=10)

st2 <- ts.union(xt0, xt1)

xt2

Time Series:
Start = c(1, 1)
End = c(1, 10)
Frequency = 10
   xt0 xt1
1.0   0   1
1.1   0   1
1.2   0   1
1.3   0   1
1.4   0   1
1.5   0   1
1.6   0   1
1.7   0   1
1.8   0   1
1.9   0   1


spectrum(xt2)

Error in plot.window(...) : need finite 'ylim' values
In addition: Warning messages:
1: In xy.coords(x, y, xlabel, ylabel, log = log) :
 10 y values <= 0 omitted from logarithmic plot
2: In min(x) : no non-missing arguments to min; returning Inf
3: In max(x) : no non-missing arguments to max; returning -Inf
4: In xy.coords(x, y, xlabel, ylabel, log) :
 5 y values <= 0 omitted from logarithmic plot

I would expect that ts.union would concatenate the time series.


You expect wrong. ts.union() is more like cbind() for all combined time 
indexes, ts.intersect() for the intersection of time indexes.



I would expect xt2 from above to be a time series from 1:20. If I do


This would never make sense. You have defined both xt0 and xt1 to be time 
series starting at 1(1) and ending at 1(10). Why should xt1 be shifted to 
start at 2(1) and end at 2(10)?



xt2 <- c(xt0, xt1)


xt2

[1] 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1




This seems to get rid of the time series nature of each of the objects.

I can do:


xt2 <- ts(c(xt0,xt1), frequency=10)
xt2

Time Series:
Start = c(1, 1)
End = c(2, 10)
Frequency = 10
[1] 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1




But what am I to interpret the start and end (c(1,1) and c(2,10)) to be?

Is this the best way to concatenate two time series?


Package "zoo" has a c() command for zoo series. For your example this 
complains


R> c(as.zoo(xt0), as.zoo(xt1))
Error in rbind.zoo(...) : indexes overlap

But if a concatenation makes sense, this works

R> xt1 <- ts(x1, start = 2, frequency=10)

R> c(as.zoo(xt0), as.zoo(xt1))
 1(1)  1(2)  1(3)  1(4)  1(5)  1(6)  1(7)  1(8)  1(9) 1(10)  2(1)  2(2)  2(3)
0 0 0 0 0 0 0 0 0 0 1 1 1
 2(4)  2(5)  2(6)  2(7)  2(8)  2(9) 2(10)
1 1 1 1 1 1 1

R> as.ts(c(as.zoo(xt0), as.zoo(xt1)))
Time Series:
Start = c(1, 1)
End = c(2, 10)
Frequency = 10
 [1] 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1

hth,
Z


Thank you.

Kevin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting the complex fft in 3D?

2008-09-04 Thread Oliver Bandel
Quoting Duncan Murdoch <[EMAIL PROTECTED]>:

> On 04/09/2008 4:44 PM, Oliver Bandel wrote:
> > Zitat von Duncan Murdoch <[EMAIL PROTECTED]>:
> >
> >> Oliver Bandel wrote:
> >>> Hello,
> >>>
> >>>
[...]


> plot3d doesn't support that directly, but you could plot with
> type='n',
> then use segments3d to add the lines.
>
> >
> > BTW: how to change the perspective? I did not found an
> > angle-parameter for the plot3d()-function.
>
> Just grab it with your mouse and drag.

Wow, coool! :-)

Well, rgl: I think the "gl" stands for OpenGL.

Fine. :-)


>  Alternatively,
> play3d(spin3d())
> will spin it, or par3d(userMatrix=rotationMatrix(...)) for a fixed
> setting.

Ok, some things to play with.

Thank you.


Ciao,
   Oliver

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] showing the image

2008-09-04 Thread Greg Snow
Here is one approach:

> tmp <- scan(what='')
1:  "#444444" "#444444" "#434343" "#404040" "#3D3D3D" "#3D3D3D" "#3E3E3E"
8:  "#414141" "#414141" "#414141" "#404040" "#3F3F3F" "#3F3F3F" "#3F3F3F"
15:  "#3E3E3E" "#3E3E3E" "#3F3F3F" "#404040" "#404040" "#404040" "#404040"
22:  "#3E3E3E" "#3D3D3D" "#3E3E3E" "#404040" "#414141" "#414141" "#404040"
29:  "#3E3E3E" "#3E3E3E" "#3F3F3F" "#3F3F3F" "#3F3F3F" "#3F3F3F" "#3E3E3E"
36:  "#3E3E3E" "#3F3F3F" "#3E3E3E" "#3D3D3D" "#3B3B3B" "#3C3C3C" "#3B3B3B"
43:
Read 42 items
> tmp <- factor(tmp)
>
> tmp2 <- matrix(as.numeric(tmp), nrow=6, byrow=TRUE)
>
> image(tmp2, col=levels(tmp))
>

But the image is pretty gray.

Hope this helps,

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
(801) 408-8111



> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of rostam shahname
> Sent: Thursday, September 04, 2008 2:28 PM
> To: r-help@r-project.org
> Subject: [R] showing the image
>
> Hi everyone,
>
> I have a matrix containing color values of an image same as following,
>
> [1,] "#444444" "#444444" "#434343" "#404040" "#3D3D3D"
> "#3D3D3D" "#3E3E3E"
> [2,] "#414141" "#414141" "#414141" "#404040" "#3F3F3F"
> "#3F3F3F" "#3F3F3F"
> [3,] "#3E3E3E" "#3E3E3E" "#3F3F3F" "#404040" "#404040"
> "#404040" "#404040"
> [4,] "#3E3E3E" "#3D3D3D" "#3E3E3E" "#404040" "#414141"
> "#414141" "#404040"
> [5,] "#3E3E3E" "#3E3E3E" "#3F3F3F" "#3F3F3F" "#3F3F3F"
> "#3F3F3F" "#3E3E3E"
> [6,] "#3E3E3E" "#3F3F3F" "#3E3E3E" "#3D3D3D" "#3B3B3B"
> "#3C3C3C" "#3B3B3B"
>
>
> I wonder if there is any function that helps me to show the image.
>
> Thanks,
> Rostam
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] using complete.cases() with nested factors

2008-09-04 Thread Ken Knoblauch
Andrew Barr  gmail.com> writes:
> This maybe a newbie question.  I have a dataframe that looks like the sample
> at the bottom of the email.  I have monthly precipitation data from several
> sites over several years.  For each site, I need to extract years that have
> a complete series of 12 monthly precipitation values, while excluding that
> year for sites with incomplete data.  I can't figure out how to do this
> gracefully (i.e. without a silly for loop).  Any help will be appreciate,
> thanks!
> SiteIDyearmonthprecip(mm)
> 6700901941jan2998
> 6700901941feb1299
> 6700901941mar1007
> 6700901941apr354
> 6700901941may88
> 6700901941jun156
> 6700901941jul8
> 6700901941aug4
> 6700901941sep8
> 6700901941oct58
> 6700901941nov397
> 6700901941dec248
> 6700901942janNA
> 6700901942feb380
> 6700901942mar797
> 6700901942apr142
> 6700901942may43
> 6700901942jun14
> 6700901942jul70
> 6700901942aug51
> 6700901942sep0
> 6700901942oct10
> 6700901942nov235
> 6700901942dec405
> 
There are likely more elegant solutions but this seems to work.
If the data frame is in a variable named dd

lapply(unique(dd$year), function(x) {s <- subset(dd, year == x)
  if (nrow(s) == 12) s})

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Building a time series.

2008-09-04 Thread rkevinburton
I have a need to build a time series and there are a couple of aspects about 
the time series object that are confusing me. First it seems that ts.union is 
not doing what I would expect. For example:

x0 <- rep(0,10)
x1 <- rep(1,10)

xt0 <- ts(x0, frequency=10)
xt1 <- ts(x1, frequency=10)

st2 <- ts.union(xt0, xt1)
> xt2
Time Series:
Start = c(1, 1) 
End = c(1, 10) 
Frequency = 10 
xt0 xt1
1.0   0   1
1.1   0   1
1.2   0   1
1.3   0   1
1.4   0   1
1.5   0   1
1.6   0   1
1.7   0   1
1.8   0   1
1.9   0   1

> spectrum(xt2)
Error in plot.window(...) : need finite 'ylim' values
In addition: Warning messages:
1: In xy.coords(x, y, xlabel, ylabel, log = log) :
  10 y values <= 0 omitted from logarithmic plot
2: In min(x) : no non-missing arguments to min; returning Inf
3: In max(x) : no non-missing arguments to max; returning -Inf
4: In xy.coords(x, y, xlabel, ylabel, log) :
  5 y values <= 0 omitted from logarithmic plot

I would expect that ts.union would concatenate the time series. I would expect 
xt2 from above to be a time series from 1:20. If I do

xt2 <- c(xt0, xt1)

> xt2
 [1] 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
> 

This seems to get rid of the time series nature of each of the objects.

I can do:

> xt2 <- ts(c(xt0,xt1), frequency=10)
> xt2
Time Series:
Start = c(1, 1) 
End = c(2, 10) 
Frequency = 10 
 [1] 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
> 

But what am I to interpret the start and end (c(1,1) and c(2,10)) to be?

Is this the best way to concatenate two time series?

Thank you.

Kevin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting the complex fft in 3D?

2008-09-04 Thread Duncan Murdoch

On 04/09/2008 4:44 PM, Oliver Bandel wrote:

Quoting Duncan Murdoch <[EMAIL PROTECTED]>:


Oliver Bandel wrote:

Hello,


I'm new to R (using it since about two weeks),
but absolutely a fan of it from the beginning on. :-)

Best tool for working with data I found. :-)

I tried using the fft() and other functions for
analysing time series.

What I would be glad to have would be a
convenient way to display the complex result
of an fft in a way that the real and imaginary parts
each use an axis for themselves, and the index of the
resulting values uses the third axis.

When displaying this as a 3D->2D picture,
it also would be nice, to change the view,
like it can be done with persp().

Is there already a package or script for preparing the data
of an fft to be displayed in this way?

I don't find this very enlightening, but here you go:

x <- rnorm(1000)
f <- fft(x)

library(rgl)
plot3d(1:length(f), Re(f), Im(f))

[...]

Ok, this is a starting point. :-)

It would be enlightening, if you have a timeseries that is not
noise only, and if the plot would not use dots.
So, using a timeseries that is derived from some  data
could be very interesting. But rnorm creates noise only,
and not a deterministic signal. So the resulting fft values
look quite boring ;-)

Instead of dots, a line from the index-axis at Re=0, Im=0
to the value of the fft at that index should be drawn.


plot3d doesn't support that directly, but you could plot with type='n', 
then use segments3d to add the lines.
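A small sketch of that suggestion, continuing the f <- fft(x) example above (only the reshaping into point pairs is new): segments3d() joins successive pairs of points, so each index gets a stem from Re = 0, Im = 0 out to its fft value.

n  <- length(f)
plot3d(1:n, Re(f), Im(f), type = 'n')        # set up the axes only
xs <- rep(1:n, each = 2)                     # each index twice: stem base and tip
ys <- as.vector(rbind(0, Re(f)))             # base at Re = 0, tip at Re(f)
zs <- as.vector(rbind(0, Im(f)))             # base at Im = 0, tip at Im(f)
segments3d(xs, ys, zs)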




BTW: how do I change the perspective? I did not find an
angle parameter for the plot3d() function.


Just grab it with your mouse and drag.  Alternatively, play3d(spin3d()) 
will spin it, or par3d(userMatrix=rotationMatrix(...)) for a fixed setting.


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] using complete.cases() with nested factors

2008-09-04 Thread Andrew Barr
Hello,

This may be a newbie question.  I have a dataframe that looks like the sample
at the bottom of the email.  I have monthly precipitation data from several
sites over several years.  For each site, I need to extract years that have
a complete series of 12 monthly precipitation values, while excluding that
year for sites with incomplete data.  I can't figure out how to do this
gracefully (i.e. without a silly for loop).  Any help will be appreciated,
thanks!

Andrew

SiteID  year  month  precip(mm)
670090  1941  jan    2998
670090  1941  feb    1299
670090  1941  mar    1007
670090  1941  apr    354
670090  1941  may    88
670090  1941  jun    156
670090  1941  jul    8
670090  1941  aug    4
670090  1941  sep    8
670090  1941  oct    58
670090  1941  nov    397
670090  1941  dec    248
670090  1942  jan    NA
670090  1942  feb    380
670090  1942  mar    797
670090  1942  apr    142
670090  1942  may    43
670090  1942  jun    14
670090  1942  jul    70
670090  1942  aug    51
670090  1942  sep    0
670090  1942  oct    10
670090  1942  nov    235
670090  1942  dec    405

-- 
Andrew

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] isoMDS and dist

2008-09-04 Thread Carolyn A. Pucko


I am starting with a matrix in which rows are vegetation plots and  
columns are various characteristics including ID# and elevation. I  
removed elevation and ID columns to avoid having those characteristics  
influence the distances between points which I calculated using the  
"dist" command. The resulting distance file was then used in isoMDS.


What I want to know is whether I can reattach the ID and elevation  
onto the point location information given from isoMDS? or is it  
possible that the order given in the isoMDS output is not the same as  
the order of the original data used to calculate distances?


Thanks,
Carrie Pucko
University of Vermont
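For what it's worth, a sketch of the reattachment (assuming, as is the usual behaviour, that dist() and isoMDS() keep rows in the same order as the input data frame; the data frame name veg and the column names are placeholders):

library(MASS)
d   <- dist(veg[, setdiff(names(veg), c("ID", "elevation"))])   # distances without ID/elevation
fit <- isoMDS(d)
out <- data.frame(ID = veg$ID, elevation = veg$elevation, fit$points)  # rows line up by position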

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting the complex fft in 3D?

2008-09-04 Thread Oliver Bandel
Quoting Duncan Murdoch <[EMAIL PROTECTED]>:

> Oliver Bandel wrote:
> > Hello,
> >
> >
> > I'm new to R (using it since about two weeks),
> > but absolutely a fan of it from the beginning on. :-)
> >
> > Best tool for working with data I found. :-)
> >
> > I tried using the fft() and other funcitons for
> > analysing time series.
> >
> > What I would be glad to have, would be a
> > convenient way to display the complex result
> > of a fft in a way, that real and imaginary parts
> > each use an axis for themselves, and the index of the
> > resulting values use the third axe.
> >
> > When displaying this as a 3D->2D picture,
> > it also would be nice, to change the view,
> > like it can be done with persp().
> >
> > Is there already a package or script for preparing the data
> > of an fft to be displayed in this way?
>
> I don't find this very enlightening, but here you go:
>
> x <- rnorm(1000)
> f <- fft(x)
>
> library(rgl)
> plot3d(1:length(f), Re(f), Im(f))
[...]

Ok, this is a starting point. :-)

It would be enlightening, if you have a timeseries that is not
noise only, and if the plot would not use dots.
So, using a timeseries that is derived from some  data
could be very interesting. But rnorm creates noise only,
and not a deterministic signal. So the resulting fft values
look quite boring ;-)

Instead of dots, a line from the index-axis at Re=0, Im=0
to the value of the fft at that index should be drawn.

BTW: how do I change the perspective? I did not find an
angle parameter for the plot3d() function.

Ciao,
   Oliver

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Possible R graphics devices

2008-09-04 Thread stephen sefick
I think being able to, say, label your points with a third column of
data would be cool (which I am sure is already do-able), and also to
be able to hover over a point and find out the value.  On a huge
wishlist that I don't have a clue how to implement: a zoom window
function close to KaleidaGraph's would make me very happy.

On Thu, Sep 4, 2008 at 2:41 PM, Jeffrey Horner
<[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I've been working on a new R Graphics device that targets Adobe's Shockwave
> Flash format (SWF for short). It uses http://www.libming.org/ on the
> backend. Here are some example outputs so far:
>
> http://160.129.129.41/~hornerj/plots/
>
> Once you click on the above, choose a directory like 'smooth' and then click
> on test.html to see a side-by-side comparison of the swf file with the png
> equivalent. Fonts aren't working yet (but they will), and libming doesn't
> support dashed lines either, so there are some areas upon which to improve.
>
> My goal is to support limited interactivity in the form of popup labels for
> key data points. Maybe someone has some other ideas?
>
> Also, there's a fascinating new project that aims to create interactive
> graphics within the HTML canvas tag using the Processing.js javascript
> library. It's quite eye-opening:
>
> http://ejohn.org/blog/processingjs
>
> I believe an R graphics device could target javascript as well!
>
> Thoughts?
>
> Jeff
> --
> http://biostat.mc.vanderbilt.edu/JeffreyHorner
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Stephen Sefick
Research Scientist
Southeastern Natural Sciences Academy

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] showing the image

2008-09-04 Thread rostam shahname
Hi everyone,

I have a matrix containing the color values of an image, as follows:

[1,] "#444444" "#444444" "#434343" "#404040" "#3D3D3D" "#3D3D3D" "#3E3E3E"
[2,] "#414141" "#414141" "#414141" "#404040" "#3F3F3F" "#3F3F3F" "#3F3F3F"
[3,] "#3E3E3E" "#3E3E3E" "#3F3F3F" "#404040" "#404040" "#404040" "#404040"
[4,] "#3E3E3E" "#3D3D3D" "#3E3E3E" "#404040" "#414141" "#414141" "#404040"
[5,] "#3E3E3E" "#3E3E3E" "#3F3F3F" "#3F3F3F" "#3F3F3F" "#3F3F3F" "#3E3E3E"
[6,] "#3E3E3E" "#3F3F3F" "#3E3E3E" "#3D3D3D" "#3B3B3B" "#3C3C3C" "#3B3B3B"


I wonder if there is any function that helps me to show the image.

Thanks,
Rostam

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] (with subject)

2008-09-04 Thread Felipe Carrillo
Is this what you want?

library(ggplot2)
rawData <- data.frame(Date = c("2008-03-01", "2008-03-21", "2008-03-23",
                               "2008-04-08", "2008-04-20", "2008-05-10", "2008-06-20"),
                      y = c(4, 6, 8, 5, 7, 2, 1))
rawData$Date <- as.Date(rawData$Date)
qplot(Date, y, data = rawData, geom = "line",
      xlim = c(as.Date("2008-1-1"), as.Date("2008-12-1")))

--- On Thu, 9/4/08, Dr Eberhard W Lisse <[EMAIL PROTECTED]> wrote:

> From: Dr Eberhard W Lisse <[EMAIL PROTECTED]>
> Subject: Re: [R] (with subject)
> To: r-help@r-project.org
> Cc: "Dr Eberhard W Lisse" <[EMAIL PROTECTED]>
> Date: Thursday, September 4, 2008, 12:15 PM
> Dirk, Gabor,
> 
> thanks for your advice, I have now tried to study these
> vignettes, but I must say, as an aging Gynecologist,
> I am facing an enormous learning curve :-)-O.
> 
> This works nicely:
> 
> rawData <- data.frame(x = c("2008-03-01",
> "2008-03-21", "2008-03-23",  
> "2008-04-08", "2008-04-20",
> "2008-05-10", "2008-06-20"), y = c(4, 6,
>  
> 8, 5, 7, 2 ,1))
> rawData[,"x"] <-
> as.Date(rawData[,"x"])
> with(rawData, plot(x, y, main="Some text",
> type='l'))
> 
> How do I make the X-Axis start on 2008-01-01, end on
> 2008-12-31,
> with tick marks for every month (ie month.abb[1:12] or
> something
> similar)?
> 
> A pointer is enough, weekend is coming up and I am off
> :-)-O
> 
> I also need to add 2 lines statements, but I reckon I can
> figure that  
> out
> by myself :-)-O
> 
> 
> greetings, el
> --
> Dr. Eberhard W. Lisse  \/ Obstetrician &
> Gynaecologist (Saar)
> [EMAIL PROTECTED] el108-ARIN /   *   |   Telephone: +264 81 124
> 6733 (cell)
> PO Box 8421 \ / Please send DNS/NA-NiC
> related e-mail
> Bachbrecht, Namibia ;/ to
> [EMAIL PROTECTED]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained,
> reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adding NA to "subset"

2008-09-04 Thread Sherri Heck

Fantastic!  Each suggestion worked beautifully.  Thanks so much.

Henrique Dallazuanna wrote:

Try this:


is.na (spdco2[,2]) <- which(spdco2[,2] < 4.7)
spdco2

On Thu, Sep 4, 2008 at 4:21 PM, Sherri Heck <[EMAIL PROTECTED] 
> wrote:


Hi All-

I have a data set (spdco2)

> spdco2
[,1] [,2] [,3]
[1,]1  5.4 382.4212
[2,]2  5.1 383.0315
[3,]3  4.8 383.9520
[4,]4  4.7 384.4376
[5,]5  4.7 384.5929
[6,]6  4.4 384.8864
[7,]7  4.1 385.2156
[8,]8  3.8 385.2919
[9,]9  3.7 385.5925
[10,]   10  3.9 385.6801


I am subsetting it to output when [,2] is >= 4.7 .
x<-subset(spdco2,spdco2[,2]>=4.7)

This works, but I would like to add 'NA' to the data that it does
exclude based on my subset criteria.  I have searched through the
archives and two R books, but I cannot figure out how to add the NA.
Any suggestions would be appreciated-

thanks!

sherri





x<-subset(spdco2,spdco2[,2]>=4.7)

__
R-help@r-project.org  mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




--
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adding NA to "subset"

2008-09-04 Thread Henrique Dallazuanna
Try this:


is.na(spdco2[,2]) <- which(spdco2[,2] < 4.7)
spdco2

On Thu, Sep 4, 2008 at 4:21 PM, Sherri Heck <[EMAIL PROTECTED]> wrote:

> Hi All-
>
> I have a data set (spdco2)
>
> > spdco2
> [,1] [,2] [,3]
> [1,]1  5.4 382.4212
> [2,]2  5.1 383.0315
> [3,]3  4.8 383.9520
> [4,]4  4.7 384.4376
> [5,]5  4.7 384.5929
> [6,]6  4.4 384.8864
> [7,]7  4.1 385.2156
> [8,]8  3.8 385.2919
> [9,]9  3.7 385.5925
> [10,]   10  3.9 385.6801
>
>
> I am subsetting it to output when [,2] is >= 4.7 .
> x<-subset(spdco2,spdco2[,2]>=4.7)
>
> This works, but I would like to add 'NA' to the data that it does exclude
> based on my subset criteria.  I have searched through the archives and two R
> books, but I cannot figure out how to add the NA.
> Any suggestions would be appreciated-
>
> thanks!
>
> sherri
>
>
>
>
>
> x<-subset(spdco2,spdco2[,2]>=4.7)
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adding NA to "subset"

2008-09-04 Thread Duncan Murdoch

Sherri Heck wrote:

Hi All-

I have a data set (spdco2)

 > spdco2
  [,1] [,2] [,3]
 [1,]1  5.4 382.4212
 [2,]2  5.1 383.0315
 [3,]3  4.8 383.9520
 [4,]4  4.7 384.4376
 [5,]5  4.7 384.5929
 [6,]6  4.4 384.8864
 [7,]7  4.1 385.2156
 [8,]8  3.8 385.2919
 [9,]9  3.7 385.5925
[10,]   10  3.9 385.6801


I am subsetting it to output when [,2] is >= 4.7 . 


x<-subset(spdco2,spdco2[,2]>=4.7)

This works, but I would like to add 'NA' to the data that it does 
exclude based on my subset criteria.  I have searched through the 
archives and two R books, but I cannot figure out how to add the NA.

Any suggestions would be appreciated-

Use is.na():

x <- subset(spdco2, spdco2[,2] >= 4.7 | is.na(spdco2[,2]))

Make sure that the inclusion or exclusion of the row can be computed:  
TRUE | NA is TRUE, but FALSE | NA is NA.
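
A quick illustration of that point with toy data (not the poster's):

c(TRUE, FALSE, NA) | NA              # TRUE NA NA -- only the TRUE survives the OR
d <- data.frame(a = c(5, 3, NA))
subset(d, a >= 4 | is.na(a))         # keeps row 1 (TRUE) and row 3 (the NA row)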


Duncan Murdoch


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adding NA to "subset"

2008-09-04 Thread Jorge Ivan Velez
Dear Sherri,
Perhaps:

spdco2=matrix(scan(),ncol=3,byrow=TRUE)
  1  5.4 382.4212
  2  5.1 383.0315
  3  4.8 383.9520
  4  4.7 384.4376
  5  4.7 384.5929
  6  4.4 384.8864
  7  4.1 385.2156
  8  3.8 385.2919
  9  3.7 385.5925
 10  3.9 385.6801

spdco2[,2]<-ifelse(spdco2[,2]>=4.7,spdco2[,2],NA)
spdco2


HTH,

Jorge


On Thu, Sep 4, 2008 at 3:21 PM, Sherri Heck <[EMAIL PROTECTED]> wrote:

> Hi All-
>
> I have a data set (spdco2)
>
> > spdco2
> [,1] [,2] [,3]
> [1,]1  5.4 382.4212
> [2,]2  5.1 383.0315
> [3,]3  4.8 383.9520
> [4,]4  4.7 384.4376
> [5,]5  4.7 384.5929
> [6,]6  4.4 384.8864
> [7,]7  4.1 385.2156
> [8,]8  3.8 385.2919
> [9,]9  3.7 385.5925
> [10,]   10  3.9 385.6801
>
>
> I am subsetting it to output when [,2] is >= 4.7 .
> x<-subset(spdco2,spdco2[,2]>=4.7)
>
> This works, but I would like to add 'NA' to the data that it does exclude
> based on my subset criteria.  I have searched through the archives and two R
> books, but I cannot figure out how to add the NA.
> Any suggestions would be appreciated-
>
> thanks!
>
> sherri
>
>
>
>
>
> x<-subset(spdco2,spdco2[,2]>=4.7)
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [R-SIG-Mac] Error with update.packages()

2008-09-04 Thread Brant Inman
Thank you for your suggestion.  I did as you suggested and switched  
from USA (NC) to USA (IA).


Again, the packages seemed to download OK but this time a single error  
message appeared:


--

..

==
downloaded 168 Kb

trying URL 
'http://streaming.stat.iastate.edu/CRAN/bin/macosx/universal/contrib/2.7/zoo_1.5-4.tgz'
Content type 'application/x-gzip' length 749912 bytes (732 Kb)
opened URL
==
downloaded 732 Kb

Error: cannot remove prior installation of package ‘fSeries’
>


--

This error appears to have stopped the package updating, since when I  
type update.packages() again, the whole list of packages is generated,  
implying that they have not been updated.


Any other suggestions?  Would uninstalling R and then reinstalling it  
help?



Brant

-


On Sep 4, 2008, at 2:59 PM, Prof Brian Ripley wrote:

Try another mirror: this is not the first time we've seen problems  
at www.ibiblio.org.


On Thu, 4 Sep 2008, Brant Inman wrote:


R-helpers:

I just updated from R 7.0 to R 7.2.2 today.  I am using MAC OS X
version 10.5.4 on a Macbook to run R.



> sessionInfo()
R version 2.7.2 (2008-08-25)
i386-apple-darwin8.11.1

locale:
en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_2.7.2



When I tried to update my packages, I got the following messages:



> update.packages()
Amelia :
Version 1.1-30 installed in /Library/Frameworks/R.framework/
Resources/library
Version 1.1-33 available at http://www.ibiblio.org/pub/languages/R/
CRAN
Update (y/N/c)?  y
empiricalBayes :
Version 2.0 installed in /Library/Frameworks/R.framework/Resources/
library
Version 2.1 available at http://www.ibiblio.org/pub/languages/R/CRAN
Update (y/N/c)?  y
Matrix :
Version 0.999375-9 installed in /Library/Frameworks/R.framework/
Resources/library
Version 0.999375-11 available at http://www.ibiblio.org/pub/languages/R/CRAN
Update (y/N/c)?  y

...

also installing the dependencies ‘sandwich’, ‘strucchange’, ‘vcd’,
‘rrcov’, ‘spam’

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/sandwich_2.1-0.tgz'
Content type 'application/x-gzip' length 678518 bytes (662 Kb)
opened URL
==
downloaded 662 Kb

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/strucchange_1.3-3.tgz'
Content type 'application/x-gzip' length 777150 bytes (758 Kb)
opened URL
==
downloaded 758 Kb

...

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/rrcov_0.4-06.tgz'
Content type 'application/x-gzip' length 365965 bytes (357 Kb)
opened URL
==
downloaded 304 Kb

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/spam_0.15-0.tgz'
Error in download.file(url, destfile, method, mode = "wb", ...) :
 cannot open URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/spam_0.15-0.tgz'
In addition: Warning messages:
1: In download.file(url, destfile, method, mode = "wb", ...) :
 downloaded length 312279 != reported length 365965
2: In download.file(url, destfile, method, mode = "wb", ...) :
 unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =
available,  :
 download of package 'spam' failed
trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/Amelia_1.1-33.tgz'
Error in download.file(url, destfile, method, mode = "wb", ...) :
 cannot open URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/Amelia_1.1-33.tgz'
In addition: Warning message:
In download.file(url, destfile, method, mode = "wb", ...) :
 unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =
available,  :
 download of package 'Amelia' failed

...

In addition: Warning message:
In download.file(url, destfile, method, mode = "wb", ...) :
 unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =
available,  :
 download of package 'zoo' failed

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
Error in sprintf(gettext(fmt, domain = domain), ...) :
 argument is missing, with no default



The first part of the procedure seemed to work, then came page after
page of error messages.  Can someone tell me what is going on here  
and

how I can fix the problem?  I found a similar report on R-help that
was never resolved and also involved a MAC
(http://tolstoy.newcastle.edu.au/R/e4/help/08/06/15556.html
).  Could this be a MAC issue?

[R] adding NA to "subset"

2008-09-04 Thread Sherri Heck

Hi All-

I have a data set (spdco2)

> spdco2
      [,1] [,2]     [,3]
 [1,]    1  5.4 382.4212
 [2,]    2  5.1 383.0315
 [3,]    3  4.8 383.9520
 [4,]    4  4.7 384.4376
 [5,]    5  4.7 384.5929
 [6,]    6  4.4 384.8864
 [7,]    7  4.1 385.2156
 [8,]    8  3.8 385.2919
 [9,]    9  3.7 385.5925
[10,]   10  3.9 385.6801


I am subsetting it to output when [,2] is >= 4.7 . 


x<-subset(spdco2,spdco2[,2]>=4.7)

This works, but I would like to add 'NA' to the data that it does 
exclude based on my subset criteria.  I have searched through the 
archives and two R books, but I cannot figure out how to add the NA.

Any suggestions would be appreciated-

thanks!

sherri





x<-subset(spdco2,spdco2[,2]>=4.7)

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting using 'if' statements

2008-09-04 Thread Steve Murray

Thanks again Henrique - that's really useful to know!


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting the complex fft in 3D?

2008-09-04 Thread Duncan Murdoch

Oliver Bandel wrote:

Hello,


I'm new to R (using it since about two weeks),
but absolutely a fan of it from the beginning on. :-)

Best tool for working with data I found. :-)

I tried using the fft() and other funcitons for
analysing time series.

What I would be glad to have, would be a
convenient way to display the complex result
of a fft in a way, that real and imaginary parts
each use an axis for themselves, and the index of the
resulting values use the third axe.

When displaying this as a 3D->2D picture,
it also would be nice, to change the view,
like it can be done with persp().

Is there already a package or script for preparing the data
of an fft to be displayed in this way?


I don't find this very enlightening, but here you go:

x <- rnorm(1000)
f <- fft(x)

library(rgl)
plot3d(1:length(f), Re(f), Im(f))


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] (with subject)

2008-09-04 Thread Dr Eberhard W Lisse

Dirk, Gabor,

thanks for your advice, I have now tried to study these
vignettes, but I must say, as an aging Gynecologist,
I am facing an enormous learning curve :-)-O.

This works nicely:

rawData <- data.frame(x = c("2008-03-01", "2008-03-21", "2008-03-23",  
"2008-04-08", "2008-04-20", "2008-05-10", "2008-06-20"), y = c(4, 6,  
8, 5, 7, 2 ,1))

rawData[,"x"] <- as.Date(rawData[,"x"])
with(rawData, plot(x, y, main="Some text", type='l'))

How do I make the X-Axis start on 2008-01-01, end on 2008-12-31,
with tick marks for every month (ie month.abb[1:12] or something
similar)?
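
One way to do that, as a sketch building only on the rawData already created
above (the key pieces are xlim, xaxt = "n" to suppress the default axis, and
axis.Date() to draw the monthly ticks):

lims  <- as.Date(c("2008-01-01", "2008-12-31"))
ticks <- seq(lims[1], lims[2], by = "month")          # one tick per month
plot(rawData$x, rawData$y, main = "Some text", type = "l",
     xlim = lims, xaxt = "n")                         # fix the range, no default axis
axis.Date(1, rawData$x, at = ticks, format = "%b")    # label the ticks Jan ... Dec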

A pointer is enough, weekend is coming up and I am off :-)-O

I also need to add 2 lines statements, but I reckon I can figure that  
out

by myself :-)-O


greetings, el
--
Dr. Eberhard W. Lisse  \/ Obstetrician & Gynaecologist (Saar)
[EMAIL PROTECTED] el108-ARIN /   *   |   Telephone: +264 81 124 6733 (cell)
PO Box 8421 \ / Please send DNS/NA-NiC related e-mail
Bachbrecht, Namibia ;/ to [EMAIL PROTECTED]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [R-SIG-Mac] Error with update.packages()

2008-09-04 Thread Prof Brian Ripley
Try another mirror: this is not the first time we've seen problems at 
www.ibiblio.org.


On Thu, 4 Sep 2008, Brant Inman wrote:


R-helpers:

I just updated from R 7.0 to R 7.2.2 today.  I am using MAC OS X
version 10.5.4 on a Macbook to run R.



> sessionInfo()
R version 2.7.2 (2008-08-25)
i386-apple-darwin8.11.1

locale:
en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_2.7.2



When I tried to update my packages, I got the following messages:



> update.packages()
Amelia :
 Version 1.1-30 installed in /Library/Frameworks/R.framework/
Resources/library
 Version 1.1-33 available at http://www.ibiblio.org/pub/languages/R/
CRAN
Update (y/N/c)?  y
empiricalBayes :
 Version 2.0 installed in /Library/Frameworks/R.framework/Resources/
library
 Version 2.1 available at http://www.ibiblio.org/pub/languages/R/CRAN
Update (y/N/c)?  y
Matrix :
 Version 0.999375-9 installed in /Library/Frameworks/R.framework/
Resources/library
 Version 0.999375-11 available at http://www.ibiblio.org/pub/languages/R/CRAN
Update (y/N/c)?  y

...

also installing the dependencies ‘sandwich’, ‘strucchange’, ‘vcd’,
‘rrcov’, ‘spam’

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/sandwich_2.1-0.tgz'
Content type 'application/x-gzip' length 678518 bytes (662 Kb)
opened URL
==
downloaded 662 Kb

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/strucchange_1.3-3.tgz'
Content type 'application/x-gzip' length 777150 bytes (758 Kb)
opened URL
==
downloaded 758 Kb

...

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/rrcov_0.4-06.tgz'
Content type 'application/x-gzip' length 365965 bytes (357 Kb)
opened URL
==
downloaded 304 Kb

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/spam_0.15-0.tgz'
Error in download.file(url, destfile, method, mode = "wb", ...) :
  cannot open URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/spam_0.15-0.tgz'
In addition: Warning messages:
1: In download.file(url, destfile, method, mode = "wb", ...) :
  downloaded length 312279 != reported length 365965
2: In download.file(url, destfile, method, mode = "wb", ...) :
  unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =
available,  :
  download of package 'spam' failed
trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/Amelia_1.1-33.tgz'
Error in download.file(url, destfile, method, mode = "wb", ...) :
  cannot open URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/Amelia_1.1-33.tgz'
In addition: Warning message:
In download.file(url, destfile, method, mode = "wb", ...) :
  unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =
available,  :
  download of package 'Amelia' failed

...

In addition: Warning message:
In download.file(url, destfile, method, mode = "wb", ...) :
  unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =
available,  :
  download of package 'zoo' failed

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
Error in sprintf(gettext(fmt, domain = domain), ...) :
  argument is missing, with no default



The first part of the procedure seemed to work, then came page after
page of error messages.  Can someone tell me what is going on here and
how I can fix the problem?  I found a similar report on R-help that
was never resolved and also involved a MAC 
(http://tolstoy.newcastle.edu.au/R/e4/help/08/06/15556.html
).  Could this be a MAC issue?

Thanks very much for your time.

Brant Inman
[[alternative HTML version deleted]]




--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Plotting the complex fft in 3D?

2008-09-04 Thread Oliver Bandel
Hello,


I'm new to R (I have been using it for about two weeks),
but absolutely a fan of it from the beginning. :-)

Best tool for working with data I have found. :-)

I tried using fft() and other functions for
analysing time series.

What I would be glad to have is a convenient way to display
the complex result of an fft so that the real and imaginary
parts each use an axis of their own, and the index of the
resulting values uses the third axis.

When displaying this as a 3D->2D picture, it would also be
nice to change the view, as can be done with persp().

Is there already a package or script for preparing the data
of an fft to be displayed in this way?

Ciao,
   Oliver

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Possible R graphics devices

2008-09-04 Thread Jeffrey Horner

Hello all,

I've been working on a new R Graphics device that targets Adobe's 
Shockwave Flash format (SWF for short). It uses http://www.libming.org/ 
on the backend. Here are some example outputs so far:


http://160.129.129.41/~hornerj/plots/

Once you click on the above, choose a directory like 'smooth' and then 
click on test.html to see a side-by-side comparison of the swf file with 
the png equivalent. Fonts aren't working yet (but they will), and 
libming doesn't support dashed lines either, so there are some areas 
upon which to improve.


My goal is to support limited interactivity in the form of popup labels 
for key data points. Maybe someone has some other ideas?


Also, there's a fascinating new project that aims to create interactive 
graphics within the HTML canvas tag using the Processing.js javascript 
library. It's quite eye-opening:


http://ejohn.org/blog/processingjs

I believe an R graphics device could target javascript as well!

Thoughts?

Jeff
--
http://biostat.mc.vanderbilt.edu/JeffreyHorner

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] combning rows

2008-09-04 Thread jim holtman
Read your data in, parse off the ID, split the records (using 'split')
and then combine the like values.  Since you did not read the posting
guide, it is hard to give explicit help.
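
A sketch of that approach, assuming a whitespace-separated file called
"records.txt" whose first field is the id (both the file name and the
delimiter are guesses, since the post does not say):

lines  <- readLines("records.txt")
fields <- strsplit(lines, "[[:space:]]+")          # split each record into fields
ids    <- sapply(fields, `[`, 1)                   # first element = id
rest   <- lapply(fields, `[`, -1)                  # remaining values of each record
merged <- lapply(split(rest, ids), unlist)         # combine records with the same id
out    <- sapply(names(merged),
                 function(id) paste(c(id, merged[[id]]), collapse = " "))
writeLines(out, "records_combined.txt")            # rows may end up with unequal lengths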


On Thu, Sep 4, 2008 at 1:42 PM, kayj <[EMAIL PROTECTED]> wrote:
>
> I do have a txt  file where each row is a record and the first element of
> each record is an id for an individual. I am looking into combing the
> records into one row if the id is the same and save as a txt file. Notice
> that the rows may not have the same length in the result file that I want to
> create .  Can I do this using R. I appreciate your help.
>
> Thanks
>
> --
> View this message in context: 
> http://www.nabble.com/combning-rows-tp19316079p19316079.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Error in table lookup

2008-09-04 Thread jim holtman
Your IF statement (if (x >= 3:6)) does not work as you probably expect
it since it does not test for the range.  You probably want to use
"cut" to convert a value to a bucket in a range:

> x <- 3.5
> cut(x, breaks=seq(-3,5,.2), labels=FALSE)
[1] 33
> seq(-3,5,.2)
 [1] -3.0 -2.8 -2.6 -2.4 -2.2 -2.0 -1.8 -1.6 -1.4 -1.2 -1.0 -0.8 -0.6
-0.4 -0.2  0.0  0.2
[18]  0.4  0.6  0.8  1.0  1.2  1.4  1.6  1.8  2.0  2.2  2.4  2.6  2.8
3.0  3.2  3.4  3.6
[35]  3.8  4.0  4.2  4.4  4.6  4.8  5.0
> # it is in the 33rd group
>
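
For the binning itself, findInterval() avoids the if() chain entirely; below
is a sketch using the bin edges implied by the original code, plus a lookup
that returns the probability for the smallest acreage value at or above a
target (the target here is made up for illustration, since the value in the
post is truncated):

breaks <- c(0, 4, 10, 17, 28, 50, 91, 151, 341, 651)   # lower edge of each bin
TAZDetermine <- 24
findInterval(TAZDetermine, breaks)                     # [1] 4, i.e. Bin 4

acres   <- c(0.000442, 0.000453, 0.000583, 0.000635, 0.000756,
             0.000774, 0.000813, 0.000970, 0.001022, 0.001066)
parprob <- c(0.978, 0.954, 0.925, 0.893, 0.854, 0.815, 0.773, 0.724, 0.672, 0.618)
target  <- 0.001                                       # illustrative target value
hit <- which(acres >= target)                          # candidates at or above target
if (length(hit)) parprob[hit[which.min(acres[hit])]] else NA   # [1] 0.672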

On Thu, Sep 4, 2008 at 1:52 PM, PDXRugger <[EMAIL PROTECTED]> wrote:
>
> I am trying to lookup a value in 1 of 10 loaded two column-data sets (Bins)
> by displaying the value of the second column based on the value of the
> first. For instance in
>
>  Bin1_Acres   Bin1_parprobBin1_TAZ
>  [1,]   0.0004420.978  356
>  [2,]   0.0004530.954  356
>  [3,]   0.0005830.925  366
>  [4,]   0.0006350.893  403
>  [5,]   0.0007560.854  358
>  [6,]   0.0007740.815  530
>  [7,]   0.0008130.773  405
>  [8,]   0.0009700.724  576
>  [9,]   0.0010220.672  569
> [10,]   0.0010660.618  620
>
> I would like to display the column value on Bin1_parprob based on the
> closest match to Bin1_acres.  So if the value i am referenceing in column 1
> (Bin1_Acres) is .00 the value outputted would be .672.   I keep
> getting a numeric(0) error with the code i am using(see below).  I think the
> issue is i usually dont have an exact match and i am using a >= sign which
> may be causing the problem.   I need the closest match and it needs to be
> larger, not smaller if not exact match is found( which is very common).
>
> #Test value for Vacant acres in TAZ
> TAZDetermine = 24
>
> #Test value for and development size
> Dev_Size= 3.5
>
> #Determines Bin number based on vacant acres in TAZ
> BinSize=function(Dev_Size,BinNumer){
> if (TAZDetermine<=3.999)
>BinNumber=1
> if (TAZDetermine>=4:6.999)
>(BinNumber=2)
> if (TAZDetermine >=10:16.999)
>(BinNumber=3)
> if (TAZDetermine>=17:27.999)
>(BinNumber=4)
> if (TAZDetermine>=28:49.999)
>(BinNumber=5)
> if (TAZDetermine>=50:90.999)
>(BinNumber=6)
> if (TAZDetermine>=91:150.999)
>(BinNumber=7)
> if (TAZDetermine>=151:340.999)
>(BinNumber=8)
> if (TAZDetermine>=341:650.999)
>(BinNumber=9)
> if (TAZDetermine>=651:3000)
>(BinNumber=10)
> BinNumber
> }
>
> #so in this case Bin 4 is selected
>
> #Based on previously selected bin, display second column
> value(Bin1_parprob).  Selected value in column 1 may be slightly larger but
> closest match is desirable.
>
> if (BinNumber==1)
>(Loc_Prop=Bin1Main.data[Bin1Main.data$Bin1_Acres >=Dev_Size,1])
> if (BinNumber==2)
>(Loc_Prop=Bin2Main.data[Bin2Main.data$Bin2_Acres>=Dev_Size,1])
> if (BinNumber==3)
>(Loc_Prop=Bin3Main.data[Bin3Main.data$Bin3_Acres>=Dev_Size,1])
> if (BinNumber==4)
>(Loc_Prop=Bin4Main.data[Bin4Main.data$Bin4_Acres>=Dev_Size,1])
> if (BinNumber==5)
>(Loc_Prop=Bin5Main.data[Bin5Main.data$Bin5_Acres>=Dev_Size,1])
> if (BinNumber==6)
>(Loc_Prop=Bin6Main.data[Bin6Main.data$Bin6_Acres>=Dev_Size,1])
> if (BinNumber==7)
>(Loc_Prop=Bin7Main.data[Bin7Main.data$Bin7_Acres>=Dev_Size,1])
> if (BinNumber==8)
>(Loc_Prop=Bin8Main.data[Bin8Main.data$Bin8_Acres>=Dev_Size,1])
> if (BinNumber==9)
>(Loc_Prop=Bin9Main.data[Bin9Main.data$Bin9_Acres>=Dev_Size,1])
> if (BinNumber==10)
>(Loc_Prop=Bin10Main.data[Bin10Main.data$Bin10_Acres>=Dev_Size,1])
>
> I hope this question is clear.  I have tried a number of different lines of
> code but get the same error (Numeric (0))
> Cheers,
> JR
> --
> View this message in context: 
> http://www.nabble.com/Error-in-table-lookup-tp19316307p19316307.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Stepwise

2008-09-04 Thread Frank E Harrell Jr
Also consider the redun function in the Hmisc package, which does not 
use the response variable but uses flexible nonlinear additive models to 
predict each predictor variable from all the others, using a stepwise 
procedure in a formal redundancy analysis.
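
A sketch of the kind of call meant here, on simulated predictors; the
argument names are from memory of the Hmisc documentation, so check ?redun
before relying on them:

library(Hmisc)
set.seed(2)
mydata <- data.frame(x1 = rnorm(100), x3 = rnorm(100), x4 = rnorm(100))
mydata$x2 <- mydata$x1 + rnorm(100, sd = 0.1)          # x2 is nearly redundant with x1
redun(~ x1 + x2 + x3 + x4, data = mydata, r2 = 0.9)    # should flag x1 or x2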


Frank


Ben Bolker wrote:

Peter Flom  brainscope.com> writes:


Robin Williams wrote

Is there any facility in R to perform a stepwise process on a model,
which will remove any highly-correlated explanatory variables? I am told
there is in SPSS. I have a large number of variables (some correlated),
which I would like to just chuck in to a model and perform stepwise and
see what comes out the other end, to give me an idea perhaps as to which
variables I should focus on.
Thanks for any help / suggestions.  
Stepwise is a bad method of selecting variables.  Far better methods are LASSO
and LAR (least angle regression), available in the LARS package and the
LASSO2 package.

However, while both these methods are good, neither is a substitute for
substantive knowledge.

Also, the key thing is not so much whether variables are correlated, but
whether they are co-linear, which is different.  If you have a great many
variables, then you can have a high degree of colinearity even with no
high pairwise correlations.  I've not done this in R, but

RSiteSearch("collinearity", restrict = 'functions') yields 34 hits.

HTH

Peter



  Another suggestion would be to do PCA on the predictor variables.
And to read Frank Harrell's book on _Regression modeling strategies_.

   cheers
 Ben Bolker


Frank E Harrell Jr   Professor and Chair   School of Medicine
 Department of Biostatistics   Vanderbilt University

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Error in table lookup

2008-09-04 Thread PDXRugger

I am trying to lookup a value in 1 of 10 loaded two column-data sets (Bins)
by displaying the value of the second column based on the value of the
first. For instance in 

      Bin1_Acres Bin1_parprob Bin1_TAZ
 [1,]   0.000442        0.978      356
 [2,]   0.000453        0.954      356
 [3,]   0.000583        0.925      366
 [4,]   0.000635        0.893      403
 [5,]   0.000756        0.854      358
 [6,]   0.000774        0.815      530
 [7,]   0.000813        0.773      405
 [8,]   0.000970        0.724      576
 [9,]   0.001022        0.672      569
[10,]   0.001066        0.618      620

I would like to display the column value in Bin1_parprob based on the
closest match to Bin1_Acres.  So if the value I am referencing in column 1
(Bin1_Acres) is .00, the value output would be .672.  I keep
getting a numeric(0) error with the code I am using (see below).  I think the
issue is that I usually don't have an exact match and I am using a >= sign,
which may be causing the problem.  I need the closest match, and it needs to
be larger, not smaller, if no exact match is found (which is very common).

#Test value for Vacant acres in TAZ 
TAZDetermine = 24

#Test value for and development size
Dev_Size= 3.5

#Determines Bin number based on vacant acres in TAZ
BinSize=function(Dev_Size,BinNumer){
if (TAZDetermine<=3.999)
BinNumber=1
if (TAZDetermine>=4:6.999)
(BinNumber=2)
if (TAZDetermine >=10:16.999)
(BinNumber=3)
if (TAZDetermine>=17:27.999)
(BinNumber=4)
if (TAZDetermine>=28:49.999)
(BinNumber=5)
if (TAZDetermine>=50:90.999)
(BinNumber=6)
if (TAZDetermine>=91:150.999)
(BinNumber=7)
if (TAZDetermine>=151:340.999)
(BinNumber=8)
if (TAZDetermine>=341:650.999)
(BinNumber=9)
if (TAZDetermine>=651:3000)
(BinNumber=10)
BinNumber
}

#so in this case Bin 4 is selected

#Based on previously selected bin, display second column
value(Bin1_parprob).  Selected value in column 1 may be slightly larger but
closest match is desirable.   

if (BinNumber==1)
(Loc_Prop=Bin1Main.data[Bin1Main.data$Bin1_Acres >=Dev_Size,1])
if (BinNumber==2)
(Loc_Prop=Bin2Main.data[Bin2Main.data$Bin2_Acres>=Dev_Size,1])
if (BinNumber==3)
(Loc_Prop=Bin3Main.data[Bin3Main.data$Bin3_Acres>=Dev_Size,1])
if (BinNumber==4)
(Loc_Prop=Bin4Main.data[Bin4Main.data$Bin4_Acres>=Dev_Size,1])
if (BinNumber==5)
(Loc_Prop=Bin5Main.data[Bin5Main.data$Bin5_Acres>=Dev_Size,1])
if (BinNumber==6)
(Loc_Prop=Bin6Main.data[Bin6Main.data$Bin6_Acres>=Dev_Size,1])
if (BinNumber==7)
(Loc_Prop=Bin7Main.data[Bin7Main.data$Bin7_Acres>=Dev_Size,1])
if (BinNumber==8)
(Loc_Prop=Bin8Main.data[Bin8Main.data$Bin8_Acres>=Dev_Size,1])
if (BinNumber==9)
(Loc_Prop=Bin9Main.data[Bin9Main.data$Bin9_Acres>=Dev_Size,1])
if (BinNumber==10)
(Loc_Prop=Bin10Main.data[Bin10Main.data$Bin10_Acres>=Dev_Size,1])

I hope this question is clear.  I have tried a number of different lines of
code but get the same error (numeric(0)).
Cheers, 
JR
-- 
View this message in context: 
http://www.nabble.com/Error-in-table-lookup-tp19316307p19316307.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] combning rows

2008-09-04 Thread kayj

I do have a txt file where each row is a record and the first element of
each record is an id for an individual. I am looking into combining the
records into one row if the id is the same and saving the result as a txt
file. Note that the rows may not all have the same length in the result file
that I want to create.  Can I do this using R?  I appreciate your help.

Thanks   

-- 
View this message in context: 
http://www.nabble.com/combning-rows-tp19316079p19316079.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Stepwise

2008-09-04 Thread Ben Bolker
Peter Flom  brainscope.com> writes:

> 
> Robin Williams wrote
> 
> Is there any facility in R to perform a stepwise process on a model,
> which will remove any highly-correlated explanatory variables? I am told
> there is in SPSS. I have a large number of variables (some correlated),
> which I would like to just chuck in to a model and perform stepwise and
> see what comes out the other end, to give me an idea perhaps as to which
> variables I should focus on.
> Thanks for any help / suggestions.  
> >>>
> 
> Stepwise is a bad method of selecting variables.  Far better methods are LASSO
> and LAR (least angle regression), available in the LARS package and the
> LASSO2 package.
> 
> However, while both these methods are good, neither is a substitute for
> substantive knowledge.
> 
> Also, the key thing is not so much whether variables are correlated, but
> whether they are co-linear, which is different.  If you have a great many
> variables, then you can have a high degree of colinearity even with no
> high pairwise correlations.  I've not done this in R, but 
> 
> RSiteSearch("collinearity", restrict = 'functions') yields 34 hits.
> 
> HTH
> 
> Peter
> 

  Another suggestion would be to do PCA on the predictor variables.
And to read Frank Harrell's book on _Regression modeling strategies_.
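
A minimal sketch of the PCA idea, with simulated predictors (prcomp() is base R):

set.seed(1)
X  <- matrix(rnorm(100 * 5), ncol = 5)
X  <- cbind(X, X[, 1] + rnorm(100, sd = 0.01))    # add a nearly collinear column
pc <- prcomp(X, scale. = TRUE)
summary(pc)    # one component with near-zero variance signals the collinearity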

   cheers
 Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Error with update.packages()

2008-09-04 Thread Brant Inman
R-helpers:

I just updated from R 2.7.0 to R 2.7.2 today.  I am using Mac OS X  
version 10.5.4 on a MacBook to run R.



 > sessionInfo()
R version 2.7.2 (2008-08-25)
i386-apple-darwin8.11.1

locale:
en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_2.7.2



When I tried to update my packages, I got the following messages:



 > update.packages()
Amelia :
  Version 1.1-30 installed in /Library/Frameworks/R.framework/ 
Resources/library
  Version 1.1-33 available at http://www.ibiblio.org/pub/languages/R/ 
CRAN
Update (y/N/c)?  y
empiricalBayes :
  Version 2.0 installed in /Library/Frameworks/R.framework/Resources/ 
library
  Version 2.1 available at http://www.ibiblio.org/pub/languages/R/CRAN
Update (y/N/c)?  y
Matrix :
  Version 0.999375-9 installed in /Library/Frameworks/R.framework/ 
Resources/library
  Version 0.999375-11 available at http://www.ibiblio.org/pub/languages/R/CRAN
Update (y/N/c)?  y

...

also installing the dependencies ‘sandwich’, ‘strucchange’, ‘vcd’,  
‘rrcov’, ‘spam’

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/sandwich_2.1-0.tgz'
Content type 'application/x-gzip' length 678518 bytes (662 Kb)
opened URL
==
downloaded 662 Kb

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/strucchange_1.3-3.tgz'
Content type 'application/x-gzip' length 777150 bytes (758 Kb)
opened URL
==
downloaded 758 Kb

...

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/rrcov_0.4-06.tgz'
Content type 'application/x-gzip' length 365965 bytes (357 Kb)
opened URL
==
downloaded 304 Kb

trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/spam_0.15-0.tgz'
Error in download.file(url, destfile, method, mode = "wb", ...) :
   cannot open URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/spam_0.15-0.tgz'
In addition: Warning messages:
1: In download.file(url, destfile, method, mode = "wb", ...) :
   downloaded length 312279 != reported length 365965
2: In download.file(url, destfile, method, mode = "wb", ...) :
   unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =  
available,  :
   download of package 'spam' failed
trying URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/Amelia_1.1-33.tgz'
Error in download.file(url, destfile, method, mode = "wb", ...) :
   cannot open URL 
'http://www.ibiblio.org/pub/languages/R/CRAN/bin/macosx/universal/contrib/2.7/Amelia_1.1-33.tgz'
In addition: Warning message:
In download.file(url, destfile, method, mode = "wb", ...) :
   unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =  
available,  :
   download of package 'Amelia' failed

...

In addition: Warning message:
In download.file(url, destfile, method, mode = "wb", ...) :
   unable to connect to 'mirrors.ibiblio.org' on port 80.
Warning in download.packages(p0, destdir = tmpd, available =  
available,  :
   download of package 'zoo' failed

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
Error in sprintf(gettext(fmt, domain = domain), ...) :
   argument is missing, with no default



The first part of the procedure seemed to work, then came page after  
page of error messages.  Can someone tell me what is going on here and  
how I can fix the problem?  I found a similar report on R-help that  
was never resolved and also involved a Mac 
(http://tolstoy.newcastle.edu.au/R/e4/help/08/06/15556.html 
).  Could this be a Mac issue?

Thanks very much for your time.

Brant Inman
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help on jarque test

2008-09-04 Thread Ben Bolker
giov  libero.it> writes:

> 
> 
> Hi all,
> I used the function jarque.test (in the moments package) on my data set and
> I obtained something like this: 
> 
>  Jarque-Bera Normality Test
> 
> data:  x 
> JB = 4.8381, p-value = 0.089
> alternative hypothesis: greater 
> 
> or 
> 
>  Jarque-Bera Normality Test
> 
> data:  x 
> JB = 2.6018, p-value = 0.2723
> alternative hypothesis: greater 
> 
> I cannot understand this. Please, someone can help me?

  You need to give us more information. What don't you
understand? Depending on your precise question,
you may need to look up information about
the Jarque-Bera test (a reference is given in the help
file), or ask a local consultant more about the statistical
issues involved, or perhaps read more about statistical
hypothesis testing in general ... ?

Generally speaking, neither of the examples you show above
gives strong indication of non-normality.  In the first case,
there is a 9% chance under the null hypothesis that the
test statistic would be as great as or greater than that
observed in the data; in the second case that probability
is 27%.
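
A small simulated illustration of what those p-values look like in the two
extreme cases (using the same moments package as the original post):

library(moments)
set.seed(1)
jarque.test(rnorm(200))   # normal data: expect a large p-value, no evidence against normality
jarque.test(rexp(200))    # strongly skewed data: expect a tiny p-value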

   Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Hazard Rate Confidence Intervals

2008-09-04 Thread Alan Cox
Hello, 
Is it possible to create confidence intervals for hazard rates?  I'm creating 
two muhaz objects: 


haz1 <- muhaz(NumDaysCustomer[cRV=="true"],status[cRV=="true"]) 
haz2 <- muhaz(NumDaysCustomer[cRV=="false"],status[cRV=="false"]) 


and plotting them.  There are many, many more observations in the cohort 
cRV=="false" than =="true".  And, there are many more observations with 
lifetimes in the middle of the range than at the ends.  I suspect that this is 
a common phenomenon. 


When I plot the two hazard rate curves, haz1 looks very different than haz2, 
but I'd like to see if it's different enough.  How can I go about creating 
confidence intervals and plotting them? 
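
As far as I know muhaz() does not return confidence bands itself, so one
common route is a pointwise bootstrap: resample subjects within a cohort,
re-estimate the smoothed hazard on a fixed grid, and take quantiles.  A
sketch follows; the est.grid/haz.est components and the min.time/max.time
arguments are written from memory of the muhaz documentation (check ?muhaz),
and the grid is made up:

library(muhaz)
boot.haz <- function(time, status, grid, B = 200) {
  sapply(seq_len(B), function(b) {
    i   <- sample(length(time), replace = TRUE)          # resample subjects
    fit <- muhaz(time[i], status[i],
                 min.time = min(grid), max.time = max(grid))
    approx(fit$est.grid, fit$haz.est, xout = grid)$y     # hazard on the common grid
  })
}
## e.g. for the cRV == "true" cohort:
## grid <- seq(0, 365, length = 101)                     # illustrative grid
## H    <- boot.haz(NumDaysCustomer[cRV == "true"], status[cRV == "true"], grid)
## ci   <- apply(H, 1, quantile, probs = c(0.025, 0.975), na.rm = TRUE)
## lines(grid, ci[1, ], lty = 2); lines(grid, ci[2, ], lty = 2)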


Thanks, 
Alan 

-- 
Alan Cox 
Director, User Experience 
iContact, Corp. 
p 919.459.1038 f 919.287.2475 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interpolation Function f(y)

2008-09-04 Thread ermimi

Thank you very much, you have helped me to resolve the problem. 
Thank you!!

Greetings, Luismi




Henrique Dallazuanna wrote:
> 
> I think that you can use the splinefun function:
> 
> f <- splinefun(x, y)
> 
> f(15)
> 
> On Thu, Sep 4, 2008 at 1:52 PM, ermimi <[EMAIL PROTECTED]> wrote:
> 
>>
>> Hello friends!!!
>>
>> I have a list of values called y.
>> The list is y=c(221.0, 212.0, 206.0, 202.7, 198.4, 195.1, 192.2, 189.7,
>> 187.6, 185.8);
>> y is f(x) and x=c(10,20,30,40,50,60,70,80,90,100).
>>
>> I only have x and y. I don´t know f(x). I would like interpolate f(x) to
>> obtain other values as f(15), f(25), f(35) and if it was possible obtain
>> f(110), f(120).
>>
>> is there any function that allow me obtain this values??
>>
>> Thank you very much, Luismi
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Interpolation-Function-f%28y%29-tp19314985p19314985.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
> 
> 
> 
> -- 
> Henrique Dallazuanna
> Curitiba-Paraná-Brasil
> 25° 25' 40" S 49° 16' 22" O
> 
>   [[alternative HTML version deleted]]
> 
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Interpolation-Function-f%28y%29-tp19314985p19315377.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interpolation Function f(y)

2008-09-04 Thread Henrique Dallazuanna
I think that you can use the splinefun function:

f <- splinefun(x, y)

f(15)
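
With the x and y from the question this becomes, for example (splinefun()
also extrapolates, so f(110) and f(120) are possible too, though
extrapolated values should be treated with more caution):

x <- seq(10, 100, by = 10)
y <- c(221.0, 212.0, 206.0, 202.7, 198.4, 195.1, 192.2, 189.7, 187.6, 185.8)
f <- splinefun(x, y)
f(c(15, 25, 35))     # interpolated within the data range
f(c(110, 120))       # extrapolated beyond it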

On Thu, Sep 4, 2008 at 1:52 PM, ermimi <[EMAIL PROTECTED]> wrote:

>
> Hello friends!!!
>
> I have a list of values called y.
> The list is y=c(221.0, 212.0, 206.0, 202.7, 198.4, 195.1, 192.2, 189.7,
> 187.6, 185.8);
> y is f(x) and x=c(10,20,30,40,50,60,70,80,90,100).
>
> I only have x and y. I don´t know f(x). I would like interpolate f(x) to
> obtain other values as f(15), f(25), f(35) and if it was possible obtain
> f(110), f(120).
>
> is there any function that allow me obtain this values??
>
> Thank you very much, Luismi
>
>
>
>
> --
> View this message in context:
> http://www.nabble.com/Interpolation-Function-f%28y%29-tp19314985p19314985.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Interpolation Function f(y)

2008-09-04 Thread ermimi

Hello friends!!!

I have a list of values called y.
The list is y=c(221.0, 212.0, 206.0, 202.7, 198.4, 195.1, 192.2, 189.7,
187.6, 185.8);
y is f(x) and x=c(10,20,30,40,50,60,70,80,90,100). 

I only have x and y. I don't know f(x). I would like to interpolate f(x) to
obtain other values such as f(15), f(25), f(35), and if possible also
f(110), f(120).

Is there any function that allows me to obtain these values?

Thank you very much, Luismi




-- 
View this message in context: 
http://www.nabble.com/Interpolation-Function-f%28y%29-tp19314985p19314985.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R base 2.7.2 packaged for Mandriva 2008.1 x86_64: anyone interested?

2008-09-04 Thread Corrado
I have packaged R base 2.7.2 for Mandriva 2008.1 x86_64. Who should I send 
it to so that it can be made available to everybody? 

It is my first attempt and it works well on my computer, but it will need some 
testing.

Best,
-- 
Corrado Topi

Global Climate Change & Biodiversity Indicators
Area 18,Department of Biology
University of York, York, YO10 5YW, UK
Phone: + 44 (0) 1904 328645, E-mail: [EMAIL PROTECTED]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting using 'if' statements

2008-09-04 Thread Henrique Dallazuanna
Use '&':

plot(January[January[,3] > 0 & January[,3] < 2, 3:4])

On Thu, Sep 4, 2008 at 1:47 PM, Steve Murray <[EMAIL PROTECTED]> wrote:

>
> Ah that's great, thank you very much.
>
> As a follow-on, in the same format, how would I plot where column 3 is
> greater than 0 *but also less than 2*?
>
> Once again, any help is much appreciated.
>
>
> Thanks,
>
> Steve
>
> 
> Date: Thu, 4 Sep 2008 13:39:12 -0300
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: [R] Plotting using 'if' statements
> CC: r-help@r-project.org
>
> Try this:
>
> plot(January[January[,3]> 0, 3:4])
>
> On Thu, Sep 4, 2008 at 1:28 PM, Steve Murray  wrote:
>
>
>
>
>
> Dear all,
>
>
>
> I have a dataset of four columns, and I wish to plot (as a scatter graph)
> the values of the third column where the values are greater than zero, and
> the fourth column.
>
>
>
> I tried doing this via the plot command itself, but got into a bit of a
> mess (resulting in errors!). My dataframe is called 'January':
>
>
>
> > plot(January[3(>0):4])
>
> Error: unexpected '>' in "plot(January[3(>"
>
>
>
>
>
> After a few variations on this, I thought I'd try making a new object which
> includes all values from the third column of January>0 (to plot in a
> separate step) as follows:
>
>
>
> > JanFilter <- January[3]>0
>
>
>
> No error here. However, when I display the 'values' of JanFilter, it shows
> that instead of keeping the numerical values, the above operation simply
> displays the results of the logical test:
>
>
>
> > head(JanFilter)
>
>Value
>
> [1,]   FALSE
>
> [2,]TRUE
>
> [3,]TRUE
>
> [4,]TRUE
>
> [5,]TRUE
>
> [6,]TRUE
>
>
>
>
>
> This is obviously no good for plotting the numerical values on axes!
>
>
>
> So my question is, how do I perform 'if' statements in order to filter out
> various parts of a dataset, for plotting on a graph.
>
>
>
> Many thanks,
>
>
>
> Steve
>
>
>
> __
>
> R-help@r-project.org mailing list
>
> https://stat.ethz.ch/mailman/listinfo/r-help
>
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>
> and provide commented, minimal, self-contained, reproducible code.
>
>
>
>
> --
> Henrique Dallazuanna
> Curitiba-Paraná-Brasil
> 25° 25' 40" S 49° 16' 22" O
>
>
>




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting using 'if' statements

2008-09-04 Thread Steve Murray

Ah that's great, thank you very much.

As a follow-on, in the same format, how would I plot where column 3 is greater 
than 0 *but also less than 2*?

Once again, any help is much appreciated.


Thanks,

Steve


Date: Thu, 4 Sep 2008 13:39:12 -0300
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [R] Plotting using 'if' statements
CC: r-help@r-project.org

Try this:

plot(January[January[,3]> 0, 3:4])

On Thu, Sep 4, 2008 at 1:28 PM, Steve Murray  wrote:





Dear all,



I have a dataset of four columns, and I wish to plot (as a scatter graph) the 
values of the third column where the values are greater than zero, and the 
fourth column.



I tried doing this via the plot command itself, but got into a bit of a mess 
(resulting in errors!). My dataframe is called 'January':



> plot(January[3(>0):4])

Error: unexpected '>' in "plot(January[3(>"





After a few variations on this, I thought I'd try making a new object which 
includes all values from the third column of January>0 (to plot in a separate 
step) as follows:



> JanFilter <- January[3]>0



No error here. However, when I display the 'values' of JanFilter, it shows that 
instead of keeping the numerical values, the above operation simply displays 
the results of the logical test:



> head(JanFilter)

Value

[1,]   FALSE

[2,]TRUE

[3,]TRUE

[4,]TRUE

[5,]TRUE

[6,]TRUE





This is obviously no good for plotting the numerical values on axes!



So my question is, how do I perform 'if' statements in order to filter out 
various parts of a dataset, for plotting on a graph.



Many thanks,



Steve



__

R-help@r-project.org mailing list



PLEASE do read the posting guide http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.




--
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] saving an object

2008-09-04 Thread Henrik Bengtsson
Alternatively you can use saveObject() and loadObject() of R.utils -
that will not hardwire the name of the loaded object avoiding name
conflicts, e.g.

library("R.utils");
foo <- 1:10;
saveObject(foo, file="foo.RData");
bar <- loadObject("foo.RData");

/HB
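
The plain base-R route mentioned in the quoted reply (?save) looks like
this; unlike loadObject() above, load() restores the object under its
original name (the object here is a stand-in for the large dataset):

big <- data.frame(id = 1:5, value = rnorm(5))   # stand-in for the real data
save(big, file = "big.RData")
rm(big)
load("big.RData")     # 'big' is back in the workspace under its own name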

On Thu, Sep 4, 2008 at 8:44 AM, Williams, Robin
<[EMAIL PROTECTED]> wrote:
> ?save
>  I assume you can connect to it from within the script, presumably by
> supplying the path of the object to the appropriate argument in your
> script. I'm no expert though.
> HTH,
>
>
> Robin Williams
> Met Office summer intern - Health Forecasting
> [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Rajasekaramya
> Sent: Thursday, September 04, 2008 4:36 PM
> To: r-help@r-project.org
> Subject: [R] saving an object
>
>
> Hi there,
>
> I have a dataset stored in an object which has a very large number of
> rows. I want to reuse it for comparing with other datasets. I don't want
> it to reload every time I run the script. Is there a way of saving a
> particular loaded object in the workspace and reusing it?
>
> Kindly help me.
>
> Ramya Thulasingam
> --
> View this message in context:
> http://www.nabble.com/saving-an-object-tp19313276p19313276.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting using 'if' statements

2008-09-04 Thread Henrique Dallazuanna
Try this:

plot(January[January[,3] > 0, 3:4])

On Thu, Sep 4, 2008 at 1:28 PM, Steve Murray <[EMAIL PROTECTED]> wrote:

>
>
> Dear all,
>
> I have a dataset of four columns, and I wish to plot (as a scatter graph)
> the values of the third column where the values are greater than zero, and
> the fourth column.
>
> I tried doing this via the plot command itself, but got into a bit of a
> mess (resulting in errors!). My dataframe is called 'January':
>
> > plot(January[3(>0):4])
> Error: unexpected '>' in "plot(January[3(>"
>
>
> After a few variations on this, I thought I'd try making a new object which
> includes all values from the third column of January>0 (to plot in a
> separate step) as follows:
>
> > JanFilter <- January[3]>0
>
> No error here. However, when I display the 'values' of JanFilter, it shows
> that instead of keeping the numerical values, the above operation simply
> displays the results of the logical test:
>
> > head(JanFilter)
> Value
> [1,]   FALSE
> [2,]TRUE
> [3,]TRUE
> [4,]TRUE
> [5,]TRUE
> [6,]TRUE
>
>
> This is obviously no good for plotting the numerical values on axes!
>
> So my question is, how do I perform 'if' statements in order to filter out
> various parts of a dataset, for plotting on a graph.
>
> Many thanks,
>
> Steve
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] table and colnames

2008-09-04 Thread Gabor Grothendieck
See http://gsubfn.googlecode.com for more on the gsubfn package.
Below we use strapply to pick out every string of digits and decimal points
converting each to numeric returning a list of pairs and rbinding that list
into a matrix.

> x <- c("[10.839,10.841]",  "(10.841,10.843]", "(10.843,10.846]", 
> "(10.846,10.848]", "(10.848,10.85]")
> library(gsubfn)
> strapply(x, "[0-9.]+", as.numeric, simplify = rbind)
   [,1]   [,2]
[1,] 10.839 10.841
[2,] 10.841 10.843
[3,] 10.843 10.846
[4,] 10.846 10.848
[5,] 10.848 10.850


On Thu, Sep 4, 2008 at 9:20 AM, Trubisz, Joseph
<[EMAIL PROTECTED]> wrote:
> I have a table statement that returns the following:
>
>
>
> [10.839,10.841] (10.841,10.843] (10.843,10.846] (10.846,10.848]
> (10.848,10.85]
>
>  0   0   0   0
> 1
>
>  (10.85,10.852] (10.852,10.854] (10.854,10.857] (10.857,10.859]
> (10.859,10.861]
>
>  0   0   0   0
> 0
>
>
>
> What I want to do is get the upper bound value into a vector for each of
> the 10-buckets.
>
> (e.g. v<-c(10.841,10.843...)
>
> I'm totally clueless on how to proceed.
>
> Any help appreciated.
>
>
>
> TIA,
>
> Joe
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Plotting using 'if' statements

2008-09-04 Thread Steve Murray


Dear all,

I have a dataset of four columns, and I wish to plot (as a scatter graph) the 
values of the third column where the values are greater than zero, and the 
fourth column. 

I tried doing this via the plot command itself, but got into a bit of a mess 
(resulting in errors!). My dataframe is called 'January':

> plot(January[3(>0):4])
Error: unexpected '>' in "plot(January[3(>"


After a few variations on this, I thought I'd try making a new object which 
includes all values from the third column of January>0 (to plot in a separate 
step) as follows:

> JanFilter <- January[3]>0

No error here. However, when I display the 'values' of JanFilter, it shows that 
instead of keeping the numerical values, the above operation simply displays 
the results of the logical test:

> head(JanFilter)
 Value
[1,]   FALSE
[2,]TRUE
[3,]TRUE
[4,]TRUE
[5,]TRUE
[6,]TRUE


This is obviously no good for plotting the numerical values on axes!

So my question is, how do I perform 'if' statements in order to filter out 
various parts of a dataset, for plotting on a graph.

Many thanks,

Steve

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A ternary graph's scales

2008-09-04 Thread Marcin Kozak
Thank you, Greg. After all I noticed that in ternaryplot it can be
easily made by setting a 'scale' parameter.

Thanks,
Marcin

On Thu, Sep 4, 2008 at 6:00 PM, Greg Snow <[EMAIL PROTECTED]> wrote:
> There is also the tri function in the cwhtool package, triax.plot in the 
> plotrix package, and triplot in the TeachingDemos package (I think it is 
> between this plot and progress bars as to what functionality is reproduced in 
> the most packages).  One of the others may do what you want, or be modifiable 
> to do what you want.
>
> The triplot (TeachingDemos) does not put scales on by default, but does had 
> an 'add' argument that could possibly be used to add scales after the fact, 
> or the code is fairly short and simple to see what the transformation is to 
> add scale manually.
>
> Hope this helps,
>
> --
> Gregory (Greg) L. Snow Ph.D.
> Statistical Data Center
> Intermountain Healthcare
> [EMAIL PROTECTED]
> (801) 408-8111
>
>
>
>> -Original Message-
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Marcin Kozak
>> Sent: Thursday, September 04, 2008 8:34 AM
>> To: r-help@r-project.org
>> Subject: [R] A ternary graph's scales
>>
>> Hi all,
>>
>> I am drawing a ternary graph. Everything is fine with both
>> ternaryplot (package vcd) and triangle.plot (package ade4),
>> but I want to present scales in neither percents nor from 0
>> to 1 (this is actually the only option I found in both
>> functions). I want the scales to be in a natural scale (from
>> 0 to k, k being the number of objects). Is it at all
>> possible? (Descriptions of both functions do not mention this
>> possibility.)
>>
>> Thanks,
>> Marcin
>>
>> --
>> "Build up your weaknesses until they become your strong
>> points" -- Knute Rockne
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] table and colnames

2008-09-04 Thread jim holtman
Here is one way:

> y <- levels(cut(x,5))
> y
[1] "(0.0124,0.209]" "(0.209,0.405]"  "(0.405,0.601]"  "(0.601,0.797]"
 "(0.797,0.993]"
> sapply(y, function(x) sub(".*,(.*)]", "\\1", x))
(0.0124,0.209]  (0.209,0.405]  (0.405,0.601]  (0.601,0.797]  (0.797,0.993]
       "0.209"        "0.405"        "0.601"        "0.797"        "0.993"
> as.numeric(sapply(y, function(x) sub(".*,(.*)]", "\\1", x)))
[1] 0.209 0.405 0.601 0.797 0.993
>


On Thu, Sep 4, 2008 at 9:20 AM, Trubisz, Joseph
<[EMAIL PROTECTED]> wrote:
> I have a table statement that returns the following:
>
>
>
> [10.839,10.841] (10.841,10.843] (10.843,10.846] (10.846,10.848]
> (10.848,10.85]
>
>  0   0   0   0
> 1
>
>  (10.85,10.852] (10.852,10.854] (10.854,10.857] (10.857,10.859]
> (10.859,10.861]
>
>  0   0   0   0
> 0
>
>
>
> What I want to do is get the upper bound value into a vector for each of
> the 10-buckets.
>
> (e.g. v<-c(10.841,10.843...)
>
> I'm totally clueless on how to proceed.
>
> Any help appreciated.
>
>
>
> TIA,
>
> Joe
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] table and colnames

2008-09-04 Thread Henrique Dallazuanna
Try this:

v <- as.numeric(gsub("]|\\)", "", unlist(lapply(strsplit(names(tb), ","),
"[", 2

Where tb is your table

On Thu, Sep 4, 2008 at 10:20 AM, Trubisz, Joseph <
[EMAIL PROTECTED]> wrote:

> I have a table statement that returns the following:
>
>
>
> [10.839,10.841] (10.841,10.843] (10.843,10.846] (10.846,10.848]
> (10.848,10.85]
>
>  0   0   0   0
> 1
>
>  (10.85,10.852] (10.852,10.854] (10.854,10.857] (10.857,10.859]
> (10.859,10.861]
>
>  0   0   0   0
> 0
>
>
>
> What I want to do is get the upper bound value into a vector for each of
> the 10-buckets.
>
> (e.g. v<-c(10.841,10.843...)
>
> I'm totally clueless on how to proceed.
>
> Any help appreciated.
>
>
>
> TIA,
>
> Joe
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A ternary graph's scales

2008-09-04 Thread Greg Snow
There is also the tri function in the cwhtool package, triax.plot in the 
plotrix package, and triplot in the TeachingDemos package (I think it is 
between this plot and progress bars as to what functionality is reproduced in 
the most packages).  One of the others may do what you want, or be modifiable 
to do what you want.

The triplot (TeachingDemos) does not put scales on by default, but does have an 
'add' argument that could possibly be used to add scales after the fact, or the 
code is fairly short and simple to see what the transformation is to add scale 
manually.

Hope this helps,

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
(801) 408-8111



> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Marcin Kozak
> Sent: Thursday, September 04, 2008 8:34 AM
> To: r-help@r-project.org
> Subject: [R] A ternary graph's scales
>
> Hi all,
>
> I am drawing a ternary graph. Everything is fine with both
> ternaryplot (package vcd) and triangle.plot (package ade4),
> but I want to present scales in neither percents nor from 0
> to 1 (this is actually the only option I found in both
> functions). I want the scales to be in a natural scale (from
> 0 to k, k being the number of objects). Is it at all
> possible? (Descriptions of both functions do not mention this
> possibility.)
>
> Thanks,
> Marcin
>
> --
> "Build up your weaknesses until they become your strong
> points" -- Knute Rockne
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] table and colnames

2008-09-04 Thread Trubisz, Joseph
I have a table statement that returns the following:

 

[10.839,10.841] (10.841,10.843] (10.843,10.846] (10.846,10.848]  (10.848,10.85]
              0               0               0               0               1
 (10.85,10.852] (10.852,10.854] (10.854,10.857] (10.857,10.859] (10.859,10.861]
              0               0               0               0               0

 

What I want to do is get the upper bound value into a vector for each of
the 10-buckets.

(e.g. v <- c(10.841, 10.843, ...))

I'm totally clueless on how to proceed.

Any help appreciated.

 

TIA,

Joe


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] for 1:length and removing rows

2008-09-04 Thread Ted Harding
On 04-Sep-08 15:33:06, Markus Mühlbacher wrote:
> Hello everyone,
> I have a problem using a for-loop to go through a matrix. I want to
> remove all rows with a sum of 0.
> What I do is basically:
> 
> for(i in 1:length(data))
> {
>if(sum(add(data[i,]) == 0)
>{
>   data <- data[-i,]
>}
> }
> 
> I get a error message: "Error in `[.data.frame`(data, i) : undefined
> columns selected", because the length is reduced when removing in the
> if-clause.
> 
> How do I make this?

The function rowSums() gives you the row-sums of a matrix, which
you can test for "==0", and carry on from there. Example:

  M<-matrix(c(1,2,3,0,-1,-2,-3,1,-2,3,0,1,2,-1),ncol=2)

  M
#      [,1] [,2]
# [1,]    1    1
# [2,]    2   -2
# [3,]    3    3
# [4,]    0    0
# [5,]   -1    1
# [6,]   -2    2
# [7,]   -3   -1

  ix<-which(rowSums(M)==0)

  M[-ix,]
#      [,1] [,2]
# [1,]    1    1
# [2,]    3    3
# [3,]   -3   -1

Hoping this helps!
Ted.


E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 04-Sep-08   Time: 16:49:46
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Stepwise

2008-09-04 Thread Peter Flom
Robin Williams wrote

Is there any facility in R to perform a stepwise process on a model,
which will remove any highly-correlated explanatory variables? I am told
there is in SPSS. I have a large number of variables (some correlated),
which I would like to just chuck in to a model and perform stepwise and
see what comes out the other end, to give me an idea perhaps as to which
variables I should focus on.
Thanks for any help / suggestions.  
>>>

Stepwise is a bad method of selecting variables.  Far better methods are the 
LASSO and LAR (least angle regression), available in the 'lars' and 'lasso2' 
packages.
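
For what it's worth, a minimal illustrative lars sketch (X is a numeric
predictor matrix and y the response; neither comes from the original post):

library(lars)
fit <- lars(X, y, type = "lasso")        # full LASSO coefficient path
plot(fit)                                # inspect how coefficients enter
coef(fit, s = 0.5, mode = "fraction")    # coefficients half-way along the path
                                         # (see ?cv.lars for choosing s by CV)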

However, while both these methods are good, neither is a substitute for 
substantive knowledge.

Also, the key thing is not so much whether variables are correlated, but 
whether they are collinear, which is different.  If you have a great many 
variables, then you can have a high degree of collinearity even with no high 
pairwise correlations.  I've not done this in R, but 

RSiteSearch("collinearity", restrict = 'functions') yields 34 hits.


HTH

Peter

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] saving an object

2008-09-04 Thread Williams, Robin
?save
  I assume you can connect to it from within the script, presumably by
supplying the path of the object to the appropriate argument in your
script. I'm no expert though.
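
A minimal save()/load() sketch (object and file names are made up):

save(bigdata, file = "bigdata.RData")   # write the object to disk once
# ... then, at the top of any later script:
load("bigdata.RData")                   # restores 'bigdata' into the workspace
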
HTH,  


Robin Williams 
Met Office summer intern - Health Forecasting 
[EMAIL PROTECTED] 
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Rajasekaramya
Sent: Thursday, September 04, 2008 4:36 PM
To: r-help@r-project.org
Subject: [R] saving an object


Hi there,

I have a dataset stored in an object which has very huge volume of
rows.I want to  reuse it for comparing with other datasets.I dont want
it to reload every time i run the script.Is there a way of saving a
particular loaded object in the workspace and reusing it.

Kindly help me.

Ramya Thulasingam
--
View this message in context:
http://www.nabble.com/saving-an-object-tp19313276p19313276.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] saving an object

2008-09-04 Thread Rajasekaramya

Hi there,

I have a dataset stored in an object which has a very large number of rows. I
want to reuse it for comparing with other datasets. I don't want to reload it
every time I run the script. Is there a way of saving a particular loaded
object in the workspace and reusing it?

Kindly help me.

Ramya Thulasingam
-- 
View this message in context: 
http://www.nabble.com/saving-an-object-tp19313276p19313276.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] for 1:length and removing rows

2008-09-04 Thread Markus Mühlbacher
Hello everyone,

I have a problem using a for-loop to go through a matrix. I want to remove all 
rows with a sum of 0.
What I do is basically:

for(i in 1:length(data))
{
   if(sum(add(data[i,])) == 0)
   {
  data <- data[-i,]
   }
}

I get an error message: "Error in `[.data.frame`(data, i) : undefined columns 
selected", because the length is reduced when rows are removed in the if-clause.

How can I do this correctly?

Best wishes,
Markus





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help on jarque test

2008-09-04 Thread giov

Hi all,
I used the function jarque.test (in the moments package) on my data set and
I obtained something like this: 


 Jarque-Bera Normality Test

data:  x 
JB = 4.8381, p-value = 0.089
alternative hypothesis: greater 



or 



 Jarque-Bera Normality Test

data:  x 
JB = 2.6018, p-value = 0.2723
alternative hypothesis: greater 


I cannot understand this. Please, can someone help me?


thank you 

giov


-- 
View this message in context: 
http://www.nabble.com/help-on-jarque-test-tp19312270p19312270.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Stepwise

2008-09-04 Thread Williams, Robin
Hi,
Is there any facility in R to perform a stepwise process on a model,
which will remove any highly-correlated explanatory variables? I am told
there is in SPSS. I have a large number of variables (some correlated),
which I would like to just chuck in to a model and perform stepwise and
see what comes out the other end, to give me an idea perhaps as to which
variables I should focus on.
Thanks for any help / suggestions.  
Robin Williams
Met Office summer intern - Health Forecasting
[EMAIL PROTECTED] 
 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] pass data to log-likelihood function

2008-09-04 Thread Duncan Murdoch

On 9/4/2008 10:54 AM, Jinsong Zhao wrote:

Hi there,

When I do bootstrap on a maximum likelihood estimation, I try the
following code, however, I get error:

Error in minuslogl(alpha = 0, beta = 0) : object "x" not found

It seems that mle() only get data from workspace, other than the
boot.fun().


This is the way R does scoping.  Your ll function was defined in the 
global environment, so that's where it will look for x, n and r.  If you 
want it to look at the local variables in boot.fun, then you should 
define it in boot.fun.
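
For illustration, one (untested) way to arrange that, using the code from the 
question, is to define ll inside boot.fun so that it sees the local x, r and n:

boot.fun <- function(data, index) {
    boot.data <- data[index, ]
    x <- boot.data$x
    r <- boot.data$r
    n <- boot.data$n
    ll <- function(alpha, beta) {
        lx <- log10(x)
        P <- pmax(pmin(pnorm(alpha + beta * lx), 1), 0)
        -(sum(r * log(P)) + sum((n - r) * log(1 - P)))
    }
    fit <- mle(ll, start = list(alpha = 0, beta = 0), method = "BFGS")
    boot.coef <- coef(fit)
    -boot.coef[1] / boot.coef[2]
}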


Duncan Murdoch


My question is how to pass the data to mle() in my case.

I really appreciated to any suggestions.

Best wishes,
Jinsong

#---code start here---
x <- c(32, 16, 8, 4, 2, 1)
r <- c(20, 12, 10, 8, 6, 0)
n <- c(20, 20, 20, 20, 20, 20)

mydata <- data.frame(x = x, r = r, n = n)
rm(x, r, n) #if not rmed, it will affect the final result.


ll <- function(alpha, beta) { #how to pass the data to this function?
  x <- log10(x)
  P <- pnorm(alpha + beta * x)
  P <- pmax(pmin(P,1),0)
  -(sum(r * log(P)) + sum((n - r)* log(1-P)))
}

boot.fun <- function(data, index) {
boot.data <- data[index, ]
# it seems that the following three line dose nothing with the mle()
x <- boot.data$x
r <- boot.data$r
n <- boot.data$n
fit <- mle(ll, start = list(alpha = 0, beta = 0), method = "BFGS")
boot.coef <- coef(fit)
stats <- -boot.coef[1] / boot.coef[2]
}

library(stats4)
library(boot)

myboot <- boot(mydata, boot.fun, R = 199)
# give the error message:
# Error in minuslogl(alpha = 0, beta = 0) : object "x" not found
#---code end here---

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] pass data to log-likelihood function

2008-09-04 Thread Jinsong Zhao
Hi there,

When I do bootstrap on a maximum likelihood estimation, I try the
following code, however, I get error:

Error in minuslogl(alpha = 0, beta = 0) : object "x" not found

It seems that mle() only gets data from the workspace, rather than from
boot.fun().

My question is how to pass the data to mle() in my case.

I would really appreciate any suggestions.

Best wishes,
Jinsong

#---code start here---
x <- c(32, 16, 8, 4, 2, 1)
r <- c(20, 12, 10, 8, 6, 0)
n <- c(20, 20, 20, 20, 20, 20)

mydata <- data.frame(x = x, r = r, n = n)
rm(x, r, n) # if not removed, they will affect the final result.


ll <- function(alpha, beta) { #how to pass the data to this function?
  x <- log10(x)
  P <- pnorm(alpha + beta * x)
  P <- pmax(pmin(P,1),0)
  -(sum(r * log(P)) + sum((n - r)* log(1-P)))
}

boot.fun <- function(data, index) {
boot.data <- data[index, ]
# it seems that the following three lines do nothing for the mle()
x <- boot.data$x
r <- boot.data$r
n <- boot.data$n
fit <- mle(ll, start = list(alpha = 0, beta = 0), method = "BFGS")
boot.coef <- coef(fit)
stats <- -boot.coef[1] / boot.coef[2]
}

library(stats4)
library(boot)

myboot <- boot(mydata, boot.fun, R = 199)
# give the error message:
# Error in minuslogl(alpha = 0, beta = 0) : object "x" not found
#---code end here---

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls convergence trouble

2008-09-04 Thread Gabor Grothendieck
> # 1. Supplying the derivatives results in convergence:

> lgmg <- function(a, b, C0, m, V) {
+ e <- expression((V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 *
+ C0 * b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m *
+ a * b + (b * m * a)^2)^(1/2))/(2 * b * m))
+ val <- eval(e)
+ attr(val, "gradient") <- cbind(a = eval(D(e, "a")), b = eval(D(e, "b")))
+ val
+ }
> nls(Qe ~ lgmg(a, b, C0, m, V), bois.DATA, start = c(a = 300, b = 1))
Nonlinear regression model
  model:  Qe ~ lgmg(a, b, C0, m, V)
   data:  bois.DATA
a b
337.74912   0.03864
 residual sum-of-squares: 15473

Number of iterations to convergence: 9
Achieved convergence tolerance: 3.16e-06

> # 2. As mentioned before squaring both sides results in convergence:
> # even without derivatives
> # though admittedly that is a slightly different objective.
>
> lgm <- function(a, b, C0, m, V) {
+ (V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 *
+ C0 * b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m *
+ a * b + (b * m * a)^2)^(1/2))/(2 * b * m)
+ }
> nls(Qe^2 ~ lgm(a, b, C0, m, V)^2, bois.DATA, start = c(a = 300, b = 1))
Nonlinear regression model
  model:  Qe^2 ~ lgm(a, b, C0, m, V)^2
   data:  bois.DATA
   ab
225.6474   0.3568
 residual sum-of-squares: 9.98e+10

Number of iterations to convergence: 16
Achieved convergence tolerance: 6.096e-06
>
> # 3. Also using the reciprocal it converges
> # without derivatives
>
> nls(Qe ~ lgm(a, 1/b, C0, m, V), bois.DATA, start = c(a = 300, b = 1))
Nonlinear regression model
  model:  Qe ~ lgm(a, 1/b, C0, m, V)
   data:  bois.DATA
 a  b
337.75  25.88
 residual sum-of-squares: 15473

Number of iterations to convergence: 12
Achieved convergence tolerance: 1.722e-06
> transform(as.list(coef(.Last.value)), b = 1/b)
 a  b
1 337.7492 0.03863738
>


On Wed, Sep 3, 2008 at 10:36 AM, Gabor Grothendieck
<[EMAIL PROTECTED]> wrote:
> Try squaring both sides of the formula.
>
> On Wed, Sep 3, 2008 at 10:01 AM, Benoit Boulinguiez
> <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> Parameters assessment in R with nls doesn't work, though it works fine with
>> MS Excel with the internal solver :(
>>
>>
>> I use nls in R to determine two parameters (a,b) from experimental data.
>>
>>          m         V        C0         Ce        Qe
>> 1  0.0911 0.0021740  3987.581   27.11637  94.51206
>> 2  0.0911 0.0021740  3987.581   27.41915  94.50484
>> 3  0.0911 0.0021740  3987.581   27.89362  94.49352
>> 4  0.0906 0.0021740  5981.370   82.98477 189.37739
>> 5  0.0906 0.0021740  5981.370   84.46435 189.34188
>> 6  0.0906 0.0021740  5981.370   85.33213 189.32106
>> 7  0.0911 0.0021740  7975.161  192.54276 233.30310
>> 8  0.0911 0.0021740  7975.161  196.52891 233.20797
>> 9  0.0911 0.0021740  7975.161  203.07467 233.05176
>> 10 0.0906 0.0021872  9968.951  357.49157 328.29824
>> 11 0.0906 0.0021872  9968.951  368.47609 328.03306
>> 12 0.0906 0.0021872  9968.951  379.18904 327.77444
>> 13 0.0904 0.0021740 13956.532 1382.61955 350.33391
>> 14 0.0904 0.0021740 13956.532 1389.64915 350.16485
>> 15 0.0904 0.0021740 13956.532 1411.87726 349.63030
>> 16 0.0902 0.0021740 15950.322 2592.90486 367.38460
>> 17 0.0902 0.0021740 15950.322 2606.34599 367.06064
>> 18 0.0902 0.0021740 15950.322 2639.54301 366.26053
>> 19 0.0906 0.0021872 17835.817 3894.12224 336.57036
>> 20 0.0906 0.0021872 17835.817 3950.35273 335.21289
>> 21 0.0906 0.0021872 17835.817 3972.29367 334.68320
>>
>> the model "LgmAltformula" is
>>
>> Qe ~ (V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 * C0 *
>>b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m * a *
>>b + (b * m * a)^2)^(1/2))/(2 * b * m)
>>
>> the command in R is
>>
>>
>> nls(formula=LgmAltFormula,data=bois.DATA,start=list(a=300,b=0.01),trace=TRUE
>> ,control=nls.control(minFactor=0.9))
>>
>> R has difficulties to converge and stops after the maximum of iterations
>>
>> 64650.47 :  2.945876e+02 3.837609e+08
>> 64650.45 :  2.945876e+02 4.022722e+09
>> 64650.45 :  2.945876e+02 1.695669e+09
>> 64650.45 :  2.945876e+02 5.103971e+08
>> 64650.44 :  2.945876e+02 8.497431e+08
>> 64650.41 :  2.945876e+02 1.515243e+09
>> 64650.36 :  2.945877e+02 5.482744e+09
>> 64650.36 :  2.945877e+02 2.152294e+09
>> 64650.36 :  2.945877e+02 7.953167e+08
>> 64650.35 :  2.945877e+02 7.62e+07
>> Erreur dans nls(formula = LgmAltFormula, data = bois.DATA, start = list(a =
>> 300,  :
>>  le nombre d'itérations a dépassé le maximum de 50 [the number of iterations exceeded the maximum of 50]
>>
>>
>> The parameters "a" and "b" are estimated to be 364 and 0.0126 with Excel
>> with the same data set.
>> I tried with the algorithm="port" with under and upper limits. One of the
>> parameter reaches the limit and the regression stops.
>>
>> How can I succeed with R to make this regression?
>>
>>
>> Regards/Cordialement
>>
>> -
>> Benoit Boulinguiez
>> Ph.D
>> Ecole de Chimie de Rennes (ENSCR) Bureau 1.20
>> Equipe CIP UMR CNRS 6226 "Sciences Chimiques de Rennes"
>> Campus de Beaulieu, 263 Avenue du Général Leclerc
>> 35700 Rennes, France
>>

[R] 3 Courses*** R/Splus Fundamentals and Programming Techniques: September 2008 at 3 locations by XLSolutions Corp

2008-09-04 Thread [EMAIL PROTECTED]
XLSolutions Corporation (www.xlsolutions-corp.com) is proud to announce
our ***R/Splus Fundamentals and Programming Techniques*** course at 3 USA
locations for September 2008.


 (1) R/Splus Fundamentals and Programming Techniques
http://www.xlsolutions-corp.com/Rfund.htm


* San Francisco  ** September 22-23, 2008
* Princeton, NJ  ** September 25-26, 2008
* Washington, DC ** September 29-30, 2008

Looking for R/Splus Advanced Programming?  Please email Sue Turner: 
[EMAIL PROTECTED]


Ask for group discount and reserve your seat Now - Earlybird Rates.
Payment due after the class! Email Sue Turner:  [EMAIL PROTECTED]

 (1) R/Splus Fundamentals and Programming Techniques
http://www.xlsolutions-corp.com/Rfund.htm


(2) R/Splus Advanced Programming  
http://www.xlsolutions-corp.com/Radv.htm  

Email us for group discounts.
Email Sue Turner: [EMAIL PROTECTED]
Phone: 206-686-1578


Please let us know if you and your colleagues are interested in this
class to take advantage of group discount. Register now to secure your
seat!

Cheers,
Elvis Miller, PhD
Manager Training.
XLSolutions Corporation
206 686 1578
www.xlsolutions-corp.com
[EMAIL PROTECTED]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Correct for heteroscedasticity using car package

2008-09-04 Thread Achim Zeileis

On Thu, 4 Sep 2008, Carrasco-Torrecilla, Roman R wrote:


Dear all,
Sorry if this is too obvious.
I am trying to fit my multiple regression model using lm()
Before starting model simplification using step() I checked whether the
model presented heteroscedasticity with ncv.test() from the CAR package.
It presents it.

I want to correct for it, I used hccm() from the CAR package as well and
got the Heteroscedasticity-Corrected Covariance Matrix.

I am not sure what am I supposed to do with the matrix. I guess I should
run my model again telling it to use that matrix but I don't really find
the parameter in lm() to tell R so. I guess it should be somewhere in
weights?


If you have a reasonable approximation of the pattern of 
heteroskedasticity, you can supply it in the "weights" argument to lm() 
and perform WLS.


hccm() on the other hand does not assume a particular pattern of 
heteroskedasticity (with the obvious advantages and disadvantages). You 
can easily employ it for inference based on Wald statistics. The 
"car" package provides linear.hypothesis() for this and the package 
"lmtest" provides functions coeftest() and waldtest().



I would really appracite if you could show me how I would do it or
recommend a text on how to correct heteroscedasticity with R.


The "sandwich" package which provides more flexible implementations of 
the estimators underlying hccm() as well as other estimators has

  vignette("sandwich", package = "sandwich")
with some background information and hands-on examples.
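
For example, a minimal illustrative call (with 'fit' standing for the fitted 
lm object; not code from the original question):

library(lmtest)
library(car)
coeftest(fit, hccm(fit))                  # t tests with HC-corrected std. errors
library(sandwich)
coeftest(fit, vcovHC(fit, type = "HC3"))  # same idea via the sandwich package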

hth,
Z

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] A ternary graph's scales

2008-09-04 Thread Marcin Kozak
Hi all,

I am drawing a ternary graph. Everything is fine with both ternaryplot
(package vcd) and triangle.plot (package ade4), but I want to present
scales in neither percents nor from 0 to 1 (this is actually the only
option I found in both functions). I want the scales to be in a
natural scale (from 0 to k, k being the number of objects). Is it at
all possible? (Descriptions of both functions do not mention this
possibility.)

Thanks,
Marcin

-- 
"Build up your weaknesses until they become your strong points" -- Knute Rockne

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Correct for heteroscedasticity using car package

2008-09-04 Thread John Fox
Dear Roman,

You can use the coefficient-covariance matrix returned by hccm() for
calculating "corrected" standard errors for the coefficients. Alternatively,
if you know the pattern of heteroscedasticity [as you probably do if you
used ncv.test()], you could try to correct for it by a transformation of the
response variable or by weighted-least-squares estimation.
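
For illustration, a minimal WLS sketch (variable names are made up; the
weights should be proportional to the inverse of the error variance):

fit.ols <- lm(y ~ x1 + x2, data = mydata)
fit.wls <- lm(y ~ x1 + x2, data = mydata, weights = 1 / x1)  # if Var(e) grows with x1
summary(fit.wls)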

I hope this helps,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On
> Behalf Of Carrasco-Torrecilla, Roman R
> Sent: September-04-08 9:03 AM
> To: r-help@r-project.org
> Subject: [R] Correct for heteroscedasticity using car package
> 
> Dear all,
> Sorry if this is too obvious.
> I am trying to fit my multiple regression model using lm()
> Before starting model simplification using step() I checked whether the
> model presented heteroscedasticity with ncv.test() from the CAR package.
> It presents it.
> 
> I want to correct for it, I used hccm() from the CAR package as well and
> got the Heteroscedasticity-Corrected Covariance Matrix.
> 
> I am not sure what am I supposed to do with the matrix. I guess I should
> run my model again telling it to use that matrix but I don't really find
> the parameter in lm() to tell R so. I guess it should be somewhere in
> weights?
> 
> I would really appracite if you could show me how I would do it or
> recommend a text on how to correct heteroscedasticity with R.
> 
> Many thanks.
> 
> Roman Carrasco.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to reduce stress value in isoMDS?

2008-09-04 Thread stephen sefick
No, I mean you might try a higher-dimensional solution.  The other thing you
may wish to check out is the vegan package, which has an MDS function and
very good vignettes, and is my favorite package for doing ordination
analysis (metaMDS uses isoMDS).
hope this helps
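
For instance, a quick (untested) check of the stress across dimensions,
assuming cm is the dissimilarity matrix from your earlier code:

library(MASS)
sapply(1:4, function(k) isoMDS(cm, k = k, trace = FALSE)$stress)
# stress usually drops as k increases; see also ?metaMDS in vegan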

On Wed, Sep 3, 2008 at 9:03 PM, 陈武 <[EMAIL PROTECTED]> wrote:
> No, the two dimensions are the same. It's a dissimilarity matrix in the txt
> file.
> data[x][x]=0 data[x][y]=data[y][x] and 0<=data[x][y]<=1.
>
>
> 2008/9/3, stephen sefick <[EMAIL PROTECTED]>:
>>
>> different dimensions?
>>
>> On Wed, Sep 3, 2008 at 8:13 AM, 陈武 <[EMAIL PROTECTED]> wrote:
>> > haha...wrong code again, it's isoMDS not sammon in the 5th line.
>> > Thanks for Victor Lemes Landeiro's and Brian D. Ripley's advice.
>> >
>> >
>> >> On 08-9-3, Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
>> >>
>> >> There is still no call to isoMDS in your code, and nothing we can
>> >> reproduce.
>> >>
>> >> It all depends on the dissimilarity matrix you have not given us: maybe
>> >> there is no good 2D representation of it.
>> >>
>> >> Looks like you need to ask a local expert about what you are doing, for
>> >> this is a statistical and not an R question.
>> >>
>> >>
>> >> On Wed, 3 Sep 2008, 陈武 wrote:
>> >>
>> >> Sorry, wrong code. The right one here:
>> >>>
>> >>> library(MASS)
>> >>> cl<-read.table("e:/data.txt",header=T,sep=",")
>> >>> row.names(cl)<-colnames(cl)
>> >>> cm<-as.matrix(cl)
>> >>> loc<-sammon(cm)
>> >>> jpeg(filename="e:/plot.gif",width = 480, height = 480, units = "px",
>> >>> pointsize = 12, quality = 75, bg = "white", res = NA, restoreConsole =
>> >>> TRUE)
>> >>> plot(loc$points,type="p")
>> >>> text(loc$points,rownames(cl),cex=1,pos=1,offset=1)
>> >>> dev.off()
>> >>>
>> >>> And "e:/data.txt" contains a 40*40 dissimilarity matrix. Thanks for
>> >>> you
>> >>> advices!
>> >>>
>> >>>
>> >>> 2008/9/3,  <[EMAIL PROTECTED]>:
>> >>>
>> 
>>  I apply isoMDS to my data, but the result turns out to be bad as the
>>  stress
>>  value stays around 31! Yeah, 31 ,not 3.1... I don't know if I ignore
>>  something before recall isoMDS.
>>  My code as follow:
>> 
>>  m <- read.table("e:/tsdata.txt",header=T,sep=",")
>>  article_number <- ts(m, start = 2004,end=2008, frequency = 1
>>  ,names=colnames(m))
>>  jpeg(filename="e:/tsmap.gif",width = 480, height = 480, units = "px",
>>  pointsize = 12, quality = 75, bg = "white", res = NA, restoreConsole
>>  =
>>  TRUE)
>>  plot(article_number, plot.type="single",
>>  lty=c(1,1,1,1,1),col=c(1,2,3,4,5),las=1)
>>  max<-range(m)
>>  x<-c(2004,2004,2004,2004,2004)
>> 
>> 
>> 
>>  y<-c(max[2]*0.96,max[2]*0.9199,max[2]*0.88,max[2]*0.84,max[2]*0.7999)
>>  points(x,y,col=c(1,2,3,4,5),pch=15)
>>  text(x,y,colnames(m),pos=4,offset=0.4)
>>  dev.off()
>> 
>>  A 40*40 matrix in "e:/tsdata.txt". How should I do to improve the
>>  effect?
>>  Thank you!
>> 
>> 
>> >>>[[alternative HTML version deleted]]
>> >>>
>> >>
>> >> --
>> >> Brian D. Ripley,  [EMAIL PROTECTED]
>> >> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
>> >> University of Oxford, Tel:  +44 1865 272861 (self)
>> >> 1 South Parks Road, +44 1865 272866 (PA)
>> >> Oxford OX1 3TG, UK                Fax:  +44 1865 272595
>> >
>> >[[alternative HTML version deleted]]
>> >
>> >
>> > __
>> > R-help@r-project.org mailing list
>> > https://stat.ethz.ch/mailman/listinfo/r-help
>> > PLEASE do read the posting guide
>> > http://www.R-project.org/posting-guide.html
>> > and provide commented, minimal, self-contained, reproducible code.
>> >
>> >
>>
>>
>>
>> --
>> Stephen Sefick
>> Research Scientist
>> Southeastern Natural Sciences Academy
>>
>> Let's not spend our time and resources thinking about things that are
>> so little or so large that all they really do for us is puff us up and
>> make us feel like gods. We are mammals, and have not exhausted the
>> annoying little problems of being mammals.
>>
>>-K. Mullis
>
>



-- 
Stephen Sefick
Research Scientist
Southeastern Natural Sciences Academy

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Installing rgl

2008-09-04 Thread Kevin E. Thorpe
Duncan Murdoch wrote:
> On 03/09/2008 4:34 PM, Kevin E. Thorpe wrote:
>> Hello.
>>
>> I'm having trouble installing rgl.  I have a theory as to the problem.
>> First, the error message and session info.
>>
>>> install.packages("rgl")
>> trying URL 'http://probability.ca/cran/src/contrib/rgl_0.81.tar.gz'
>> Content type 'application/x-gzip' length 1636939 bytes (1.6 Mb)
>> opened URL
>> ==
>> downloaded 1.6 Mb
>>
>> * Installing *source* package 'rgl' ...
>> checking for gcc... gcc -std=gnu99
>> checking for C compiler default output file name... a.out
>> checking whether the C compiler works... yes
>> checking whether we are cross compiling... no
>> checking for suffix of executables...
>> checking for suffix of object files... o
>> checking whether we are using the GNU C compiler... yes
>> checking whether gcc -std=gnu99 accepts -g... yes
>> checking for gcc -std=gnu99 option to accept ISO C89... none needed
>> checking how to run the C preprocessor... gcc -std=gnu99 -E
>> checking for gcc... (cached) gcc -std=gnu99
>> checking whether we are using the GNU C compiler... (cached) yes
>> checking whether gcc -std=gnu99 accepts -g... (cached) yes
>> checking for gcc -std=gnu99 option to accept ISO C89... (cached) none
>> needed
>> checking for libpng-config... yes
>> configure: using libpng-config
>> configure: using libpng dynamic linkage
>> checking for X... libraries , headers
>> checking GL/gl.h usability... yes
>> checking GL/gl.h presence... yes
>> checking for GL/gl.h... yes
>> checking GL/glu.h usability... yes
>> checking GL/glu.h presence... yes
>> checking for GL/glu.h... yes
>> checking for glEnd in -lGL... no
>> configure: error: missing required library GL
>> ERROR: configuration failed for package 'rgl'
>> ** Removing '/usr/local/lib/R/library/rgl'
>>
>> The downloaded packages are in
>> /tmp/RtmpZAeU9W/downloaded_packages
>> Updating HTML index of packages in '.Library'
>> Warning message:
>> In install.packages("rgl") :
>>   installation of package 'rgl' had non-zero exit status
>>> sessionInfo()
>> R version 2.7.2 Patched (2008-09-02 r46486)
>> i686-pc-linux-gnu
>>
>> locale:
>> C
>>
>> attached base packages:
>> [1] stats graphics  grDevices utils datasets  methods   base
>>
>> loaded via a namespace (and not attached):
>> [1] tools_2.7.2
>>
>> So, the error appears to be related to libGL (or libGLU?).  I am running
>> OpenSUSE 10.2.  I have the OpenGL packages installed.  I have the MESA
>> and MESA-devel packages installed which provide libGL.so.1 and
>> libGLU.so.1.  I also have the fglrx (proprietary ATI driver package
>> installed) which also provides libGL.so.1 (but no libGLU.so.1).  I
>> suspect that an incompatible libGL is being found (both versions appear
>> to be on the system in different directories).
>>
>> How do I confirm if this is the case and force a particular lib file
>> during package installation?
> 
> rgl is now hosted on R-forge, at
> https://r-forge.r-project.org/projects/rgl/.  You could post questions
> there, and be one of the first to do so :-).
> 
> But the general idea is probably to run the configure script manually,
> and look at the output.  So you would download and untar rgl (or just
> check it out using svn), then in the main directory, run ./configure.
> I think this will produce a config.log file that you can check for all
> the gory details.
> 
> Duncan Murdoch
> 

Thanks Duncan.  I followed your advice and ran the configure script
manually.  I looked at config.log which showed me where to look.  I
discovered that /usr/lib/libGL.so.1 was not there.  I think I know how
that happened, but I don't want to talk about it. :-)

I re-installed my MESA packages and everything worked.

Kevin

-- 
Kevin E. Thorpe
Biostatistician/Trialist, Knowledge Translation Program
Assistant Professor, Dalla Lana School of Public Health
University of Toronto
email: [EMAIL PROTECTED]  Tel: 416.864.5776  Fax: 416.864.6057

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Projecting Survival Curve into the Future

2008-09-04 Thread Prof Brian Ripley

On Thu, 4 Sep 2008, [EMAIL PROTECTED] wrote:


I have a survivor curve that shows account cancellations during the
past 3.5 months.  Fortunately for our business, but unfortunately
for my analysis, the survivor curve doesn't yet pass through 50%.
 Is there a safe way to extend the survivor curve and estimate at
what time we'll hit 50%?


Without any example code it's hard to say, but take a look at
?predict.coxph and ?predict.survreg in the survival package.


You will not be able to do this with coxph: there will be no events to 
estimate the baseline hazard from.


Whether using a parametric accelerated life model (survreg) is 'safe' 
depends on what you are prepared to assume.  I'd say it was pretty dubious 
unless you have theoretical reasons to suppose that e.g. an exponential is 
a good approximation and it fits the data you do have.
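
For illustration only, and subject to exactly that caveat, a minimal
parametric sketch (hypothetical data frame 'accounts' with columns time and
status) might look like:

library(survival)
fit <- survreg(Surv(time, status) ~ 1, data = accounts, dist = "weibull")
# time by which the fitted model predicts 50% of accounts have cancelled
predict(fit, newdata = data.frame(1), type = "quantile", p = 0.5)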



Regards,
Richie.


--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Adding a legend to R graph device with several plots (no to individual plots!)

2008-09-04 Thread Duncan Murdoch

On 9/4/2008 9:35 AM, Nelson Villoria wrote:

Dear Users, I already posted this question: it either went unnoticed,
or it is to basic (if this is so, please sent me a hint).
I would like to know if there is a way to add a common
legend to an arrangement of plots. In the example below, I get four
plots in my device. each one has a density for 1995 and one for 2006.
I have found that using legend or smartlegend I can add a legend to
each plot, but I am looking for something in the spirit of mtext. That
is, putting the legend anywhere I want on the device. In my situation
I have:

par(mfrow=c(2,2), ann = FALSE)
p.names <- c("1","2","3","4")
lapply(p.names, function(x) {
 d1 <- density(rnorm(100,0,1))
 d2 <- density(rnorm(100,.3,1.2))
 plot(range(d1$x, d2$x), range(d1$y, d2$y), type = "n", xlab = NULL,
ylab = NULL)
  lines(d1, col = "black", lty = 2)
  lines(d2, col = "black", lty = 1)
})
mtext("Learning R the hard way", side = 1, line=-1.5, outer=TRUE)
smartlegend(x= "center", y="top", c("1995","2006"), lty=2:1)

the last line puts the legend on the lower-right plot. I'd like to put
it in the middle - or top of the device. Any suggestion?


The likely reason you didn't get a reply is because you used 
smartlegend(), which I believe comes from gplots.  This means only 
people who know gplots well will answer.


To do this using legend(), try this:

par(xpd=NA)
legend(locator(), c("1995","2006"), lty=2:1)

This will allow you to place the legend manually whereever you like on 
the page.  If you want to code a particular position for re-use later, 
run locator() on its own, and save the location.  (You will probably 
want to use grconvertX and grconvertY to change these into ndc 
coordinates.  For example, this looks okay to me:


par(xpd=NA)
legend(grconvertX(0.44, "ndc", "user"),
   grconvertY(0.55, "ndc", "user"),
   c("1995","2006"), lty=2:1)

You might want to wrap this in a nice "mlegend()" function to save some 
typing.


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Projecting Survival Curve into the Future

2008-09-04 Thread Robert A LaBudde

You might consider a probit analysis using ln(Time) as the "dose".

At 09:24 AM 9/4/2008, [EMAIL PROTECTED] wrote:

> I have a survivor curve that shows account cancellations during the
> past 3.5 months.  Fortunately for our business, but unfortunately
> for my analysis, the survivor curve doesn't yet pass through 50%.
> Is there a safe way to extend the survivor curve and estimate at
> what time we'll hit 50%?

Without any example code it's hard to say, but take a look at
?predict.coxph and ?predict.survreg in the survival package.

Regards,
Richie.

Mathematical Sciences Unit
HSL



Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: [EMAIL PROTECTED]
Least Cost Formulations, Ltd.            URL: http://lcfltd.com/
824 Timberlake Drive                     Tel: 757-467-0954
Virginia Beach, VA 23464-3239            Fax: 757-467-2947

"Vere scire est per causas scire"


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Adding a legend to R graph device with several plots (no to individual plots!)

2008-09-04 Thread Nelson Villoria
Dear Users, I already posted this question: it either went unnoticed,
or it is too basic (if this is so, please send me a hint).
I would like to know if there is a way to add a common
legend to an arrangement of plots. In the example below, I get four
plots in my device. Each one has a density for 1995 and one for 2006.
I have found that using legend or smartlegend I can add a legend to
each plot, but I am looking for something in the spirit of mtext. That
is, putting the legend anywhere I want on the device. In my situation
I have:

par(mfrow=c(2,2), ann = FALSE)
p.names <- c("1","2","3","4")
lapply(p.names, function(x) {
 d1 <- density(rnorm(100,0,1))
 d2 <- density(rnorm(100,.3,1.2))
 plot(range(d1$x, d2$x), range(d1$y, d2$y), type = "n", xlab = NULL,
ylab = NULL)
  lines(d1, col = "black", lty = 2)
  lines(d2, col = "black", lty = 1)
})
mtext("Learning R the hard way", side = 1, line=-1.5, outer=TRUE)
smartlegend(x= "center", y="top", c("1995","2006"), lty=2:1)

the last line puts the legend on the lower-right plot. I'd like to put
it in the middle - or top of the device. Any suggestion?

Thanks!

Nelson

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] printing name of object inside lapply

2008-09-04 Thread Steve Powell

Thanks Prof Ripley! How obvious in retrospect!

Prof Brian Ripley wrote:

On Thu, 4 Sep 2008, Steve Powell wrote:


Dear list members,
I am trying, within a lapply command, to print the name of the objects
in list or data frame. This is so that I can use odfWeave to print out a
report with a section for each object, including the object names.

I tried e.g.
a=b=c=1:5
lis=data.frame(a,b,c)
lapply(
lis, function (z) {
obj.nam <- deparse(substitute(z))
cat("some other text",obj.nam,"and so on","\n")
}
)


But instead of getting "a" "b" etc. I get X[[1L]] etc.

Any ideas?


Use a for() loop on the names: lapply is overkill here.  But you could 
use


lapply(names(lis), function (z) {
   cat("some other text", z, "and so on","\n")
   ## references to lis[[z]]
})





www.promente.org

proMENTE social research

Kran??evi??eva 35
71000 Sarajevo

mob. +387 61 215 997
tel. +387 556 865
fax. +387 556 866


[[alternative HTML version deleted]]






__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls convergence trouble

2008-09-04 Thread Christian Ritz
Hi Benoit,

another way of making Petr's point is by looking at the profile log likelihood 
function
for b; that is, only estimating the a parameter for a grid of b values:


## Defining mean function for fixed b
lgma <- function(b){

function(C0, m, V, a){ (V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 * 
C0 *
b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m * a *
b + (b * m * a)^2)^(1/2))/(2 * b * m)}

}

## Defining profile log likelihood function
logLikb <- function(b)
{
logLik(nls(Qe~(lgma(b))(C0, m, V, a),data = bois.DATA,start = list(a=300)))
}
logLikb2 <- Vectorize(logLikb, "b")  # vectorising the function

## Plotting the profile function
plot(x<-10^(seq(-4, 0, length.out=50)), logLikb2(x), type="l")


Essentially any b value from 0.2 upwards results in the same model fit.

Christian

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Projecting Survival Curve into the Future

2008-09-04 Thread Richard . Cotton
> I have a survivor curve that shows account cancellations during the 
> past 3.5 months.  Fortunately for our business, but unfortunately 
> for my analysis, the survivor curve doesn't yet pass through 50%. 
> Is there a safe way to extend the survivor curve and estimate at 
> what time we'll hit 50%? 

Without any example code it's hard to say, but take a look at 
?predict.coxph and ?predict.survreg in the survival package.

Regards,
Richie.

Mathematical Sciences Unit
HSL


ATTENTION:

This message contains privileged and confidential inform...{{dropped:20}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Correct for heteroscedasticity using car package

2008-09-04 Thread Carrasco-Torrecilla, Roman R
Dear all,
Sorry if this is too obvious.
I am trying to fit my multiple regression model using lm()
Before starting model simplification using step() I checked whether the
model presented heteroscedasticity with ncv.test() from the CAR package.
It presents it.

I want to correct for it, I used hccm() from the CAR package as well and
got the Heteroscedasticity-Corrected Covariance Matrix.

I am not sure what am I supposed to do with the matrix. I guess I should
run my model again telling it to use that matrix but I don't really find
the parameter in lm() to tell R so. I guess it should be somewhere in
weights?

I would really appreciate it if you could show me how to do it or
recommend a text on how to correct heteroscedasticity with R.

Many thanks.

Roman Carrasco.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] read.table error

2008-09-04 Thread Steve Murray

Thanks Prof. Ripley! I knew it would be something simple - I'd missed the "\t" 
from the read.table command! I won't be doing that again...!!

Thanks again,

Steve

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Projecting Survival Curve into the Future

2008-09-04 Thread Alan Cox
Hello, 
I have a survivor curve that shows account cancellations during the past 3.5 
months.  Fortunately for our business, but unfortunately for my analysis, the 
survivor curve doesn't yet pass through 50%.  Is there a safe way to extend 
the survivor curve and estimate at what time we'll hit 50%? 


We started a new program 3.5 months ago, and I believe that this set of 
accounts behaves differently than the rest of our company's accounts. 


Thanks very much, 
Alan 

-- 
Alan Cox 
Director, User Experience 
iContact, Corp. 
p 919.459.1038 f 919.287.2475 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] restricted bootstrap

2008-09-04 Thread Grant Gillis
Hello Professor Ripley,

Sorry for not being clear.  I posted after a long day of struggling.  Also
my toy distance matrix should have been symmetrical.

Simply put, I have spatially autocorrelated data collected from many points.
I would like to do a linear regression on these data.  To deal with the
autocorrelation I want to resample a subset of my data with replacement, but I
need to restrict subsets such that no two locations where data were collected
are closer than X m apart (i.e. further apart than the range of autocorrelation
in the data).

Thanks for having a look at this for me.  I will look up the hard-core
spatial point process.
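
For reference, here is the kind of rejection-sampling loop I have in mind
(untested) -- assuming 'dmat' is the full symmetric distance matrix in metres
and 'minsep' the autocorrelation threshold:

repeat {
    idx <- sample(nrow(maindata), 3)            # candidate rows
    d <- dmat[idx, idx]
    if (min(d[lower.tri(d)]) > minsep) break    # keep only well-separated sets
}
sub <- maindata[idx, ]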

Grant

2008/9/4 Prof Brian Ripley <[EMAIL PROTECTED]>

> I see nothing here to do with the 'bootstrap', which is sampling with
> replacement.
>
> Do you know what you mean exactly by 'randomly sample'?  In general the way
> to so this is to sample randomly (uniformly, whatever) and reject samples
> that do not meet your restriction.   For some restrictions there are more
> efficient algorithms, but I don't understand yours.  (What are the 'rows'?
>  Do you want to sample rows in space or xy locations?  How come 'dist' is
> not symmetric?)  For some restrictions, an MCMC sampling scheme is needed,
> the hard-core spatial point process being a related example.
>
>
> On Wed, 3 Sep 2008, Grant Gillis wrote:
>
>  Hello List,
>>
>> I am not sure that I have the correct terminology here (restricted
>> bootstrap) which may be hampering my archive searches.  I have quite a
>> large
>> spatially autocorrelated data set.  I have xy coordinates and the
>> corresponding pairwise distance matrix (metres) for each row.  I would
>> like
>> to randomly sample some number of rows but restricting samples such that
>> the
>> distance between them is larger than the threshold of autocorrelation.  I
>> have been been unsuccessfully trying to link the 'sample' function to
>> values
>> in the distance matrix.
>>
>> My end goal is to randomly sample M thousand rows of data N thousand times
>> calculating linear regression coefficients for each sample but am stuck on
>> taking the initial sample. I believe I can figure out the rest.
>>
>>
>> Example Question
>>
>> I would like to radomly sample 3 rows further but withe the restriction
>> that
>> they are greater than 100m apart
>>
>> example data:
>> main data:
>>
>> y<- c(1, 2, 9, 5, 6)
>> x<-c( 1, 3, 5, 7, 9)
>> z<-c(2, 4, 6, 8, 10)
>> a<-c(3, 9, 6, 4 ,4)
>>
>> maindata<-cbind(y, x, z, a)
>>
>>y x x a
>> [1,] 1 1 1 3
>> [2,] 2 3 3 9
>> [3,] 9 5 5 6
>> [4,] 5 7 7 4
>> [5,] 6 9 9 4
>>
>> distance matrix:
>> row1<-c(0, 123, 567, 89)
>> row2<-c(98, 0, 345, 543)
>> row3<-c(765, 90, 0, 987)
>> row4<-c(654, 8, 99, 0)
>>
>> dist<-rbind(row1, row2, row3, row4)
>>
>>[,1] [,2] [,3] [,4]
>> row10  123  567   89
>> row2   980  345  543
>> row3  765   900  987
>> row4  6548   990
>>
>> Thanks for all of the help in the past and now
>>
>> Cheers
>> Grant
>>
>>[[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
> --
> Brian D. Ripley,  [EMAIL PROTECTED]
> Professor of Applied Statistics,  
> http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UK                Fax:  +44 1865 272595
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] read.table error

2008-09-04 Thread Prof Brian Ripley

On Thu, 4 Sep 2008, Steve Murray wrote:



Dear all,

I have a tab-delimited text (.txt) file which I'm trying to read into R. 
This file is of column format - there are in fact 3 columns and 259201 
rows (including the column headers). I've been using the following 
commands, but receive an error each time which prevents the data from 
being read in:


How about telling R it is tab-delimited?  Use read.delim or at least 
sep="\t".


Also, see the footer to this message: we are not clairvoyant and cannot 
see the file unless you show us part of it.


With a file of that size you should study the 'R Data Import/Export 
Manual' and take some steps to read it in efficiently (e.g. specify 
nrows and colClasses).
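
For example, something along these lines (untested; the three columns are 
assumed numeric):

Jan <- read.delim("JanuaryAvBurntArea.txt", header = TRUE,
                  nrows = 259200,
                  colClasses = c("numeric", "numeric", "numeric"))
str(Jan)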



Jan <- read.table("JanuaryAvBurntArea.txt", header=TRUE)

Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,  :
 line 1 did not have 6 elements


I tried removing the 'header' argument, but receive a similar message:


Jan <- read.table("JanuaryAvBurntArea.txt")

Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,  :
 line 2 did not have 6 elements


What's more confusing about this is that I know that none of the lines 
have 6 elements! They're not supposed to! Each row only has 3 values 
(one per column)!


I suspect there is whitespace in the 'values'.



As a final resort I tried 'scan':

<- scan("JanuaryAvBurntArea.txt")
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,  :
 scan() expected 'a real', got 'Latitude'

...which is obviously something to do with there being a header as the first 
row, but the 'scan' command doesn't seem to have an equivalent of 'header=TRUE' 
like read.table...?


If anyone is able to shed some light on why I'm receiving these errors, and how 
I can get the data into R, then I'd be very grateful to hear them! I suspect 
I'm doing something very basic which is wrong!

Many thanks,

Steve





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] read.table error

2008-09-04 Thread Steve Murray

Dear all,

I have a tab-delimited text (.txt) file which I'm trying to read into R. This 
file is of column format - there are in fact 3 columns and 259201 rows 
(including the column headers). I've been using the following commands, but 
receive an error each time which prevents the data from being read in:


> Jan <- read.table("JanuaryAvBurntArea.txt", header=TRUE)
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,  : 
  line 1 did not have 6 elements


I tried removing the 'header' argument, but receive a similar message:

> Jan <- read.table("JanuaryAvBurntArea.txt")
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,  : 
  line 2 did not have 6 elements


What's more confusing about this is that I know that none of the lines have 6 
elements! They're not supposed to! Each row only has 3 values (one per column)!

As a final resort I tried 'scan':

 <- scan("JanuaryAvBurntArea.txt")
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,  : 
  scan() expected 'a real', got 'Latitude'

...which is obviously something to do with there being a header as the first 
row, but the 'scan' command doesn't seem to have an equivalent of 'header=TRUE' 
like read.table...?


If anyone is able to shed some light on why I'm receiving these errors, and how 
I can get the data into R, then I'd be very grateful! I suspect 
I'm doing something very basic wrong!

Many thanks,

Steve




_
Win New York holidays with Kellogg’s & Live Search

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls convergence trouble

2008-09-04 Thread Petr PIKAL
Well, the model you try to fit is almost independent of "b"

try

plot(Qe,fff1(364,0.0126))
points(Qe,fff1(364,1), col=2)

So you can choose b quite freely without any substantial improvement to the 
fit. I am not an expert in nonlinear fitting, but it seems to me that your 
data do not follow your model, so you need to change your model or 
change your data, depending on your preference.
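
One way to see this (a sketch, assuming your data frame is attached so that 
Qe, m, V and C0 are visible, and that fff1 is the model function from my 
previous mail) is to overlay the model values for several choices of b:

bb <- c(0.001, 0.0126, 0.1, 1)
matplot(Qe, sapply(bb, function(b) fff1(364, b)), pch = 1:4,
        xlab = "observed Qe", ylab = "model value")
abline(0, 1)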

Regards
Petr

[EMAIL PROTECTED] wrote on 04.09.2008 12:11:27:

> Hi,
> 
> I agree with you that Excel is not the best tool for fitting, which is why I
> am trying to learn R.
> 
> But I need to use this specific model ("LgmAltFormula") and not a polynomial
> expression in the different parameters, even though your method produced a
> correct fit.
> 
> The parameters "a" and "b" are the Langmuir parameters that describe the
> adsorption of a compound onto activated carbon. I need to estimate these
> parameters.
> 
> 
> Regards/Cordialement
> 
> 
> Benoit Boulinguiez 
> 
> 
> -Original Message-
> From: Petr PIKAL [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, 3 September 2008 17:58
> To: Benoit Boulinguiez
> Cc: r-help@r-project.org
> Subject: Odp: [R] nls convergence trouble
> 
> Hi
> 
> Excel fit is not exceptionally good. Try
> 
>  fff <- function(a, b) (V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 * C0 *
>    b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m * a *
>    b + (b * m * a)^2)^(1/2))/(2 * b * m)
> 
> and with attached data frame
> 
> plot(Qe,fff(364,0.0126))
> abline(0,1)
> 
> you clearly see a linear relationship for the smaller values but quite chaotic 
> behaviour for the bigger ones (i.e. a large deviation of the experimental points 
> from your model).
> 
> So it is up to you whether you want just any fit (as from Excel) or only a good 
> one (as from R). 
> 
> It seems to me that a simple linear model could be quite a good choice, although 
> there is some nonlinearity.
> 
> fit<-lm(Qe~Ce+C0+V+m)
> summary(fit)
> 
> Call:
> lm(formula = Qe ~ Ce + C0 + V + m)
> 
> Residuals:
> Min  1Q  Median  3Q Max 
> -16.654  -8.653   2.426   9.971  11.912 
> 
> Coefficients:
>   Estimate Std. Error t value Pr(>|t|) 
> (Intercept) -8.148e+02  1.330e+03  -0.613 0.549254 
> Ce  -6.894e-02  4.982e-03 -13.839 6.02e-10 ***
> C0   3.284e-02  1.676e-03  19.589 4.26e-12 ***
> V2.153e+06  4.607e+05   4.674 0.000300 ***
> m   -4.272e+04  1.218e+04  -3.509 0.003167 ** 
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 
> 
> Residual standard error: 10.87 on 15 degrees of freedom
> Multiple R-squared: 0.9903, Adjusted R-squared: 0.9877 
> F-statistic: 381.3 on 4 and 15 DF,  p-value: 6.91e-15 
> 
> plot(predict(fit), Qe)
> abline(0,1)
> 
> Regards
> Petr
> 
> 
[EMAIL PROTECTED] wrote on 03.09.2008 16:01:36:
> 
> > Hi,
> > 
> > Parameter estimation in R with nls doesn't work, though it works fine with
> > MS Excel with the internal solver :(
> > 
> > 
> > I use nls in R to determine two parameters (a,b) from experimental data. 
> 
> > 
> >         m         V        C0         Ce        Qe
> > 1  0.0911 0.0021740  3987.581   27.11637  94.51206
> > 2  0.0911 0.0021740  3987.581   27.41915  94.50484
> > 3  0.0911 0.0021740  3987.581   27.89362  94.49352
> > 4  0.0906 0.0021740  5981.370   82.98477 189.37739
> > 5  0.0906 0.0021740  5981.370   84.46435 189.34188
> > 6  0.0906 0.0021740  5981.370   85.33213 189.32106
> > 7  0.0911 0.0021740  7975.161  192.54276 233.30310
> > 8  0.0911 0.0021740  7975.161  196.52891 233.20797
> > 9  0.0911 0.0021740  7975.161  203.07467 233.05176
> > 10 0.0906 0.0021872  9968.951  357.49157 328.29824
> > 11 0.0906 0.0021872  9968.951  368.47609 328.03306
> > 12 0.0906 0.0021872  9968.951  379.18904 327.77444
> > 13 0.0904 0.0021740 13956.532 1382.61955 350.33391
> > 14 0.0904 0.0021740 13956.532 1389.64915 350.16485
> > 15 0.0904 0.0021740 13956.532 1411.87726 349.63030
> > 16 0.0902 0.0021740 15950.322 2592.90486 367.38460
> > 17 0.0902 0.0021740 15950.322 2606.34599 367.06064
> > 18 0.0902 0.0021740 15950.322 2639.54301 366.26053
> > 19 0.0906 0.0021872 17835.817 3894.12224 336.57036
> > 20 0.0906 0.0021872 17835.817 3950.35273 335.21289
> > 21 0.0906 0.0021872 17835.817 3972.29367 334.68320
> > 
> > the model "LgmAltFormula" is
> > 
> > Qe ~ (V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 * C0 * 
> > b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m * a * 
> > b + (b * m * a)^2)^(1/2))/(2 * b * m)
> > 
> > the command in R is
> > 
> > 
> > 
> > 
> > nls(formula=LgmAltFormula, data=bois.DATA, start=list(a=300, b=0.01),
> >     trace=TRUE, control=nls.control(minFactor=0.9))
> > 
> > R has difficulty converging and stops after the maximum number of iterations
> > 
> > 64650.47 :  2.945876e+02 3.837609e+08 
> > 64650.45 :  2.945876e+02 4.022722e+09 
> > 64650.45 :  2.945876e+02 1.695669e+09 
> > 64650.45 :  2.945876e+02 5.103971e+08 
> > 64650.44 :  2.945876e+02 8.497431e+08 
> > 64650.41 :  2.945876e+02 1.515243e+09 
> > 64650

Re: [R] printing name of object inside lapply

2008-09-04 Thread Prof Brian Ripley

On Thu, 4 Sep 2008, Steve Powell wrote:


Dear list members,
I am trying, within a lapply command, to print the names of the objects
in a list or data frame. This is so that I can use odfWeave to print out a
report with a section for each object, including the object names.

I tried e.g.
a=b=c=1:5
lis=data.frame(a,b,c)
lapply(
lis, function (z) {
obj.nam <- deparse(substitute(z))
cat("some other text",obj.nam,"and so on","\n")
}
)


But instead of getting "a" "b" etc. I get X[[1L]] etc.

Any ideas?


Use a for() loop on the names: lapply is overkill here.  But you could use

lapply(names(lis), function (z) {
   cat("some other text", z, "and so on","\n")
   ## references to lis[[z]]
})
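
The for() version (a sketch) is just

for (z in names(lis)) {
   cat("some other text", z, "and so on", "\n")
   ## work with lis[[z]] here
}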





www.promente.org

proMENTE social research

Krančevićeva 35
71000 Sarajevo

mob. +387 61 215 997
tel. +387 556 865
fax. +387 556 866


[[alternative HTML version deleted]]




--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] printing name of object inside lapply

2008-09-04 Thread Steve Powell
Dear list members,
I am trying, within a lapply command, to print the names of the objects 
in a list or data frame. This is so that I can use odfWeave to print out a 
report with a section for each object, including the object names.

I tried e.g.
a=b=c=1:5
lis=data.frame(a,b,c)
lapply(
lis, function (z) {
obj.nam <- deparse(substitute(z))
cat("some other text",obj.nam,"and so on","\n")
}
)


But instead of getting "a" "b" etc. I get X[[1L]] etc.

Any ideas?


www.promente.org

proMENTE social research

Krančevićeva 35
71000 Sarajevo

mob. +387 61 215 997
tel. +387 556 865
fax. +387 556 866


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls convergence trouble

2008-09-04 Thread Benoit Boulinguiez
Hi,

I agree with you that Excel is not the best tool for fitting, which is why I
am trying to learn R.

But I need to use this specific model ("LgmAltFormula") and not a polynomial
expression in the different parameters, even though your method produced a
correct fit.

The parameters "a" and "b" are the Langmuir parameters that describe the
adsorption of a compound onto activated carbon. I need to estimate these
parameters.
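
One possible workaround (only a sketch, untested on these data; "logb" below 
is just an illustrative name) is to keep the same Langmuir model but let nls 
search over log(b), since a and b differ by several orders of magnitude:

LgmAltFormulaLog <- Qe ~ (V + exp(logb) * m * a + C0 * V * exp(logb) -
    ((C0 * V * exp(logb))^2 + 2 * C0 * exp(logb) * V^2 -
     2 * C0 * V * m * a * exp(logb)^2 + V^2 + 2 * V * m * a * exp(logb) +
     (exp(logb) * m * a)^2)^(1/2)) / (2 * exp(logb) * m)

fitLog <- nls(LgmAltFormulaLog, data = bois.DATA,
              start = list(a = 300, logb = log(0.01)), trace = TRUE)
exp(coef(fitLog)["logb"])   ## back-transform to recover b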


Regards/Cordialement


Benoit Boulinguiez 


-Original Message-
From: Petr PIKAL [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 3 September 2008 17:58
To: Benoit Boulinguiez
Cc: r-help@r-project.org
Subject: Odp: [R] nls convergence trouble

Hi

Excel fit is not exceptionally good. Try

 fff <- function(a, b) (V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 * C0 *
   b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m * a *
   b + (b * m * a)^2)^(1/2))/(2 * b * m)

and with attached data frame

plot(Qe,fff(364,0.0126))
abline(0,1)

you clearly see a linear relationship for the smaller values but quite chaotic 
behaviour for the bigger ones (i.e. a large deviation of the experimental points 
from your model).

So it is up to you whether you want just any fit (as from Excel) or only a good 
one (as from R). 

It seems to me that a simple linear model could be quite a good choice, although 
there is some nonlinearity.

fit<-lm(Qe~Ce+C0+V+m)
summary(fit)

Call:
lm(formula = Qe ~ Ce + C0 + V + m)

Residuals:
Min  1Q  Median  3Q Max 
-16.654  -8.653   2.426   9.971  11.912 

Coefficients:
  Estimate Std. Error t value Pr(>|t|) 
(Intercept) -8.148e+02  1.330e+03  -0.613 0.549254 
Ce  -6.894e-02  4.982e-03 -13.839 6.02e-10 ***
C0   3.284e-02  1.676e-03  19.589 4.26e-12 ***
V2.153e+06  4.607e+05   4.674 0.000300 ***
m   -4.272e+04  1.218e+04  -3.509 0.003167 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 10.87 on 15 degrees of freedom
Multiple R-squared: 0.9903, Adjusted R-squared: 0.9877 
F-statistic: 381.3 on 4 and 15 DF,  p-value: 6.91e-15 

plot(predict(fit), Qe)
abline(0,1)

Regards
Petr


[EMAIL PROTECTED] wrote on 03.09.2008 16:01:36:

> Hi,
> 
> Parameter estimation in R with nls doesn't work, though it works fine with
> MS Excel with the internal solver :(
> 
> 
> I use nls in R to determine two parameters (a,b) from experimental data. 

> 
>         m         V        C0         Ce        Qe
> 1  0.0911 0.0021740  3987.581   27.11637  94.51206
> 2  0.0911 0.0021740  3987.581   27.41915  94.50484
> 3  0.0911 0.0021740  3987.581   27.89362  94.49352
> 4  0.0906 0.0021740  5981.370   82.98477 189.37739
> 5  0.0906 0.0021740  5981.370   84.46435 189.34188
> 6  0.0906 0.0021740  5981.370   85.33213 189.32106
> 7  0.0911 0.0021740  7975.161  192.54276 233.30310
> 8  0.0911 0.0021740  7975.161  196.52891 233.20797
> 9  0.0911 0.0021740  7975.161  203.07467 233.05176
> 10 0.0906 0.0021872  9968.951  357.49157 328.29824
> 11 0.0906 0.0021872  9968.951  368.47609 328.03306
> 12 0.0906 0.0021872  9968.951  379.18904 327.77444
> 13 0.0904 0.0021740 13956.532 1382.61955 350.33391
> 14 0.0904 0.0021740 13956.532 1389.64915 350.16485
> 15 0.0904 0.0021740 13956.532 1411.87726 349.63030
> 16 0.0902 0.0021740 15950.322 2592.90486 367.38460
> 17 0.0902 0.0021740 15950.322 2606.34599 367.06064
> 18 0.0902 0.0021740 15950.322 2639.54301 366.26053
> 19 0.0906 0.0021872 17835.817 3894.12224 336.57036
> 20 0.0906 0.0021872 17835.817 3950.35273 335.21289
> 21 0.0906 0.0021872 17835.817 3972.29367 334.68320
> 
> the model "LgmAltFormula" is
> 
> Qe ~ (V + b * m * a + C0 * V * b - ((C0 * V * b)^2 + 2 * C0 * 
> b * V^2 - 2 * C0 * V * m * a * b^2 + V^2 + 2 * V * m * a * 
> b + (b * m * a)^2)^(1/2))/(2 * b * m)
> 
> the command in R is
> 
> 
> 
> nls(formula=LgmAltFormula, data=bois.DATA, start=list(a=300, b=0.01),
>     trace=TRUE, control=nls.control(minFactor=0.9))
> 
> R has difficulty converging and stops after the maximum number of iterations
> 
> 64650.47 :  2.945876e+02 3.837609e+08 
> 64650.45 :  2.945876e+02 4.022722e+09 
> 64650.45 :  2.945876e+02 1.695669e+09 
> 64650.45 :  2.945876e+02 5.103971e+08 
> 64650.44 :  2.945876e+02 8.497431e+08 
> 64650.41 :  2.945876e+02 1.515243e+09 
> 64650.36 :  2.945877e+02 5.482744e+09 
> 64650.36 :  2.945877e+02 2.152294e+09 
> 64650.36 :  2.945877e+02 7.953167e+08 
> 64650.35 :  2.945877e+02 7.62e+07 
> Error in nls(formula = LgmAltFormula, data = bois.DATA, start = list(a = 300,  : 
>   number of iterations exceeded maximum of 50
> 
> 
> The parameters "a" and "b" are estimated to be 364 and 0.0126 with Excel
> with the same data set.
> I tried algorithm="port" with lower and upper limits. One of the
> parameters reaches its limit and the regression stops.
> 
> How can I get this regression to work in R? 
> 
> 
> Regards/Cordialement
> 
> -
> Benoit Boulinguiez
> Ph.D
> Ecole de Chimie de Rennes (ENSCR) Bureau 1.20
> Equipe CIP UMR CN
