Re: [R] How to increase memory for R on Solaris 10 with 16GB and 64bit R

2005-07-13 Thread Prof Brian Ripley
On Tue, 12 Jul 2005, Dongseok Choi wrote:

>  My machine is SUN Java Workstation 2100 with 2 AMD Opteron CPUs and 16GB RAM.
>  R is compiled as 64bit by using SUN compilers.
>  I am trying to fit a quantile smoothing spline to my data and I got the message below.
>
>> fit1<-rqss(z1~qss(cbind(x,y),lambda=la1),tau=t1)
> Error in as.matrix.csr(diag(n)) :
cannot allocate memory block of size 2496135168
>
>  The lengths of vector x and y are both 17664.
>  I tried and found that the same command ran with x[1:16008] and y[1:16008].
>  So, it looks to me like a memory-related problem, but I'm not sure how I can 
> allocate a larger memory block.
>   I read about the command-line options but am not sure what to do with them.
>   Could you help me with this?

It is trying to allocate a single memory block of more than 2^31-1 bytes. 
R internally uses ints for the sizes of vectors, and that is a limit (see 
help("Memory-limits")).  However, on 64-bit systems the intended limit 
here is 8*(2^31-1), but there was a typo.  Please change line 1534 of 
src/main/memory.c to

#if SIZEOF_LONG > 4

and re-compile.
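
The arithmetic, for reference (plain R; nothing here is specific to the patch):

  2496135168 > 2^31 - 1    # TRUE: the requested block exceeds 2147483647 bytes
  8 * (2^31 - 1)           # 17179869176: the intended single-block cap on 64-bit builds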

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] exact values for p-values

2005-07-13 Thread David Duffy
> This is obtained from F = 39540 with df1 = 1, df2 = 7025.
> Suppose I am interested in the exact value, such as
>

If it were really necessary, you would have to move to multiple
precision.  The gmp R package doesn't yet seem to cover this, but FMLIB
(TOMS 814, D.M. Smith) is a multiple-precision Fortran 90 library that does
include the incomplete beta -- it allows one to say that for F(1,7025)=39540,
P=6.31E-2886 (evaluated using 200-significant-digit arithmetic).  Results from
R's pf() agree quite closely with the FMLIB results for less extreme values,
e.g.
> print(pf(1500,1,7025,lower=FALSE), digits=20)
 [1] 1.3702710894887480597e-297

cf   1.37027108948832580215549799419452388134616261215463681945E-297
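
As an aside (a sketch, not part of the original exchange): when only the
magnitude is wanted, pf() can also return the log of the tail probability
directly, which avoids the underflow for extreme F values:

  pf(1500, 1, 7025, lower.tail=FALSE, log.p=TRUE)           # natural log of the p-value
  pf(1500, 1, 7025, lower.tail=FALSE, log.p=TRUE)/log(10)   # about -296.86, i.e. ~1.37e-297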


| David Duffy (MBBS PhD) ,-_|\
| email: [EMAIL PROTECTED]  ph: INT+61+7+3362-0217 fax: -0101  / *
| Epidemiology Unit, Queensland Institute of Medical Research   \_,-._/
| 300 Herston Rd, Brisbane, Queensland 4029, Australia  GPG 4D0B994A v

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Boxcox transformation / homogeneity of variances

2005-07-13 Thread Arnaud Dowkiw
Dear r-helpers,

Prior to analysis of variance, I ran the boxcox function (MASS package) to 
find the best power transformation of my data. However, reading the boxcox 
help file, I cannot figure out whether this function (through its associated 
log-likelihood function) corrects for * normality only * or whether it also 
induces * homogeneity of variances *. I found in Biometry (Sokal and Rohlf, 
p. 419) that the Box-Cox transformation can be extended to induce 
homogeneity of variances in conjunction with Bartlett's test of homogeneity 
of variances. Does the boxcox function implemented in R refer to this 
extension?
Thanks a lot,


- - - - - - - - - - - - - - - - - - - - - - -
Arnaud DOWKIW
INRA
Forest Research
Avenue de la Pomme de Pin
BP 20619 ARDON
45166 OLIVET CEDEX
FRANCE
Tel. + 33 2 38 41 78 00
Fax. + 33 2 38 41 48 09
- - - - - - - - - - - - - - - - - - - - - - -

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use the function "plot" as Matlab?

2005-07-13 Thread Ted Harding
On 13-Jul-05 klebyn wrote:
> Hello,
> 
> How can I use the function plot to produce graphs as in Matlab?
> example in Matlab:
> 
> a = [1,2,5,3,6,8,1,7];
> b = [1,7,2,9,2,3,4,5];
> plot(a,'b')
> hold
> plot(b,'r')
> 
> 
> How can I do the same in R?
> 
> I am trying something like this:
> 
> a <- c(1,2,5,3,6,8,1,7)
> c(1,7,2,9,2,3,4,5) -> b
> 
> a;b
> 
> plot(a,t="l",col="blue")
> plot(b,t="l",col="red")

Although this is an over-worked query -- for which an answer, given
that t="l" has been specified, is to use

  plot(a,t="l",col="blue",ylim=c(0,10))
  lines(b,t="l",col="red")

there is a more interesting issue associated with it (given that
Klebyn has come to it from a Matlab perspective).

It's a long time since I used real Matlab, but I'll illustrate
with octave which, in this respect, should be identical to Matlab.

Octave:

octave:1> x = 0.1*(0:20);
octave:2> plot(x,sin(x))

produces a graph of sin(x) with the y-axis scaled from 0 to 1.0
Next:

octave:3> hold on
octave:4> plot(x,1.5*cos(x))

superimposes a graph of 1.5*cos(x) with the y-axis automatically
re-scaled from -1 to 1.5.

This would not have happened in R with

  x = 0.1*(0:20);
  plot(x,sin(x))
  lines(x,1.5*cos(x))

where the 0 to 1.0 scaling of the first plot would be kept for
the second, in which therefore part of the additional graph of
1.5*cos(x) would be "outside the box".

No doubt like many others, I've been caught on the wrong foot
by this more than a few times. The solution, of course (as
illustrated in the reply to Klebyn above) is to anticipate
what scaling you will need for all the graphs you intend to
put on the same plot, and set up the scalings at the time
of the first one using the options "xlim" and "ylim", e.g.:

  x = 0.1*(0:20);
  plot(x,sin(x),ylim=c(-1,1.5))
  lines(x,1.5*cos(x))

This is not always feasible, and indeed should not be expected
to be feasible since part of the reason for using software
like R in the first place is to compute what you do not know!

Indeed, R will not allow you to use "xlim" or "ylim" once the
first plot has been drawn.

So in such cases I end up making a note (either on paper or,
when I do really serious planning, in auxiliary variables)
of the min's and max's for each graph, and then re-run the
plotting commands with appropriate "xlim" and "ylim" scaling
set up in the first plot so as to include all the subsequent
graphs in entirety. (Even this strategy can be defeated if
the successive graphs represent simulations of long-tailed
distributions. Unless of course I'm sufficiently alert to
set the RNG seed first as well ... )
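
A minimal sketch of that strategy in code, for the case where the curves
are already to hand as vectors (the sin/cos example again):

  x  <- 0.1*(0:20)
  y1 <- sin(x)
  y2 <- 1.5*cos(x)
  plot(x, y1, type="l", ylim=range(y1, y2))  # limits computed to cover both curves
  lines(x, y2)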

I'm not sufficiently acquainted with the internals of "plot"
and friends to anticipate the answer to this question; but,
anyway, the question is:

  Is it feasible to include, as a parameter to "plot", "lines"
  and "points",

rescale=FALSE

  where this default value would maintain the existing behaviour
  of these functions, while setting

rescale=TRUE

  would allow each succeeding plot, adding graphs using "points"
  or "lines", to be rescaled (as in Matlab/Octave) so as to
  include the entirety of each successive graph?

Best wishes to all,
Ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 13-Jul-05   Time: 09:12:34
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use the function "plot" as Matlab?

2005-07-13 Thread Robin Hankin
Hi

Ted makes a good point... Matlab can dynamically rescale a plot in response
to plot(..., add=TRUE) statements.

For some reason which I do not understand, the rescaling issue is only a
problem for me when working in "matlab mode".  It's not an issue when
working in "R mode".

Ted pointed out that the following does not behave as intended:


>   x = 0.1*(0:20);
>   plot(x,sin(x))
>   lines(x,1.5*cos(x))


and presented an alternative method in which ylim was set by hand.  I  
would suggest:

x <- 0.1*(0:20)
y1 <- sin(x)
y2 <- 1.5*cos(x)

plot(c(x,x),c(y1,y2),type="n")
lines(x,y1)
lines(x,y2)

because this way, the axes are set by the plot() statement, but  
nothing is plotted.


best wishes

rksh





On 13 Jul 2005, at 09:12, (Ted Harding) wrote:
>>
>
> Although this is an over-worked query -- for which an answer, given
> that t="l" has been specified, is to use
>
>   plot(a,t="l",col="blue",ylim=c(0,10))
>   lines(b,t="l",col="red")
>
> there is a more interesting issue associated with it (given that
> Klebyn has come to it from a Matlab perspective).
>
> It's a long time since I used real Matlab, but I'll illustrate
> with octave which, in this respect, should be identical to Matlab.
>
> Octave:
>
> octave:1> x = 0.1*(0:20);
> octave:2> plot(x,sin(x))
>
> produces a graph of sin(x) with the y-axis scaled from 0 to 1.0
> Next:
>
> octave:3> hold on
> octave:4> plot(x,1.5*cos(x))
>
> superimposes a graph of 1.5*cos(x) with the y-axis automatically
> re-scaled from -1 to 1.5.
>
> This would not have happened in R with
>
>   x = 0.1*(0:20);
>   plot(x,sin(x))
>   lines(x,1.5*cos(x))
>
> where the 0 to 1.0 scaling of the first plot would be kept for
> the second, in which therefore part of the additional graph of
> 1.5*cos(x) would be "outside the box".
>
> No doubt like many others, I've been caught on the wrong foot
> by this more than a few times. The solution, of course (as
> illustrated in the reply to Klebyn above) is to anticipate
> what scaling you will need for all the graphs you intend to
> put on the same plot, and set up the scalings at the time
> of the first one using the options "xlim" and "ylim", e.g.:
>
>   x = 0.1*(0:20);
>   plot(x,sin(x),ylim=c(-1,1.5))
>   lines(x,1.5*cos(x))
>
> This is not always feasible, and indeed should not be expected
> to be feasible since part of the reason for using software
> like R in the first place is to compute what you do not know!
>
> Indeed, R will not allow you to use "xlim" or "ylim" once the
> first plot has been drawn.
>
> So in such cases I end up making a note (either on paper or,
> when I do really serious planning, in auxiliary variables)
> of the min's and max's for each graph, and then re-run the
> plotting commands with appropriate "xlim" and "ylim" scaling
> set up in the first plot so as to include all the subsequent
> graphs in entirety. (Even this strategy can be defeated if
> the succesive graphs represent simulations of long-tailed
> distributions. Unless of course I'm sufficiently alert to
> set the RNG seed first as well ... )
>
> I'm not sufficiently acquainted with the internals of "plot"
> and friends to anticipate the answer to this question; but,
> anyway, the question is:
>
>   Is it feasible to include, as a parameter to "plot", "lines"
>   and "points",
>
> rescale=FALSE
>
>   where this default value would maintain the existing behaviour
>   of these functions, while setting
>
> rescale=TRUE
>
>   would allow each succeeding plot, adding graphs using "points"
>   or "lines", to be rescaled (as in Matlab/Octave) so as to
>   include the entirety of each successive graph?
>
> Best wishes to all,
> Ted.
>
>
> 
> E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
> Fax-to-email: +44 (0)870 094 0861
> Date: 13-Jul-05   Time: 09:12:34
> -- XFMail --
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting- 
> guide.html
>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Boxcox transformation / homogeneity of variances

2005-07-13 Thread Prof Brian Ripley
Please consult the reference on the help page of that function: it _is_ 
support software for a book.  It implements the Box-Cox procedure (as it 
says).  The original Box-Cox paper has three aims, two of which you have 
mentioned (but perhaps the most important one is the one you 
have not mentioned, additivity).

It is probably worth stressing that the Box-Cox procedure is about finding 
the best transformation within a specific family for fitting a particular 
_model_ to a set of data, not for the data per se.  There is a long 
history of people using an inappropriate model and finding an 
uninterpretable transformation.
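
For reference, a minimal sketch of the usual call (the data frame dat, the
response y and the factor group are invented here for illustration):

  library(MASS)
  bc <- boxcox(y ~ group, data = dat, lambda = seq(-2, 2, 0.1))  # profile log-likelihood over lambda
  lambda.hat <- bc$x[which.max(bc$y)]   # the power that maximises it, for this particular model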

On Wed, 13 Jul 2005, Arnaud Dowkiw wrote:

> Prior to analysis of variance, I ran the Boxcox function (MASS library) to
> find the best power transformation of my data. However, reading the Boxcox
> help file, I cannot figure out if this function (through its associated
> log-likelihood function) corrects for * normality only * or if it also
> induces * homogeneity of variances *. I found in Biometry (Sokal and Rohlf,
> p. 419) that the box-cox transformation can be extended to induce
> homogenity of variances in conjunction with Bartlett's test of homogeneity
> of variances. Does the Boxcox function implemented in R refer to this
> extension ?

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use the function "plot" as Matlab

2005-07-13 Thread Prof Brian Ripley
For most purposes it is easiest to use matplot() to plot superimposed 
plots like this.  E.g.

x <- 0.1*(0:20)
matplot(x, cbind(sin(x), cos(x)), "pl", pch=1)


On Wed, 13 Jul 2005, Robin Hankin wrote:

> Hi
>
> Ted makes a good point... matlab can dynamically rescale a plot in
> response
> to plot(...,add=TRUE) statements.
>
> For some reason which I do not understand, the rescaling issue is
> only a problem
> for me when working in "matlab mode".  It's not an issue when working
> in "R mode"
>
> Ted pointed out that the following does not behave as intended:
>
>
>>   x = 0.1*(0:20);
>>   plot(x,sin(x))
>>   lines(x,1.5*cos(x))
>
>
> and presented an alternative method in which ylim was set by hand.  I
> would suggest:
>
> x <- 0.1*(0:20)
> y1 <- sin(x)
> y2 <- 1.5*cos(x)
>
> plot(c(x,x),c(y1,y2),type="n")
> lines(x,y1)
> lines(x,y2)
>
> because this way, the axes are set by the plot() statement, but
> nothing is plotted.
>
>
> best wishes
>
> rksh
>
>
>
>
>
> On 13 Jul 2005, at 09:12, (Ted Harding) wrote:
>>>
>>
>> Although this is an over-worked query -- for which an answer, given
>> that t="l" has been specified, is to use
>>
>>   plot(a,t="l",col="blue",ylim=c(0,10))
>>   lines(b,t="l",col="red")
>>
>> there is a more interesting issue associated with it (given that
>> Klebyn has come to it from a Matlab perspective).
>>
>> It's a long time since I used real Matlab, but I'll illustrate
>> with octave which, in this respect, should be identical to Matlab.
>>
>> Octave:
>>
>> octave:1> x = 0.1*(0:20);
>> octave:2> plot(x,sin(x))
>>
>> produces a graph of sin(x) with the y-axis scaled from 0 to 1.0
>> Next:
>>
>> octave:3> hold on
>> octave:4> plot(x,1.5*cos(x))
>>
>> superimposes a graph of 1.5*cos(x) with the y-axis automatically
>> re-scaled from -1 to 1.5.
>>
>> This would not have happened in R with
>>
>>   x = 0.1*(0:20);
>>   plot(x,sin(x))
>>   lines(x,1.5*cos(x))
>>
>> where the 0 to 1.0 scaling of the first plot would be kept for
>> the second, in which therefore part of the additional graph of
>> 1.5*cos(x) would be "outside the box".
>>
>> No doubt like many others, I've been caught on the wrong foot
>> by this more than a few times. The solution, of course (as
>> illustrated in the reply to Klebyn above) is to anticipate
>> what scaling you will need for all the graphs you intend to
>> put on the same plot, and set up the scalings at the time
>> of the first one using the options "xlim" and "ylim", e.g.:
>>
>>   x = 0.1*(0:20);
>>   plot(x,sin(x),ylim=c(-1,1.5))
>>   lines(x,1.5*cos(x))
>>
>> This is not always feasible, and indeed should not be expected
>> to be feasible since part of the reason for using software
>> like R in the first place is to compute what you do not know!
>>
>> Indeed, R will not allow you to use "xlim" or "ylim" once the
>> first plot has been drawn.
>>
>> So in such cases I end up making a note (either on paper or,
>> when I do really serious planning, in auxiliary variables)
>> of the min's and max's for each graph, and then re-run the
>> plotting commands with appropriate "xlim" and "ylim" scaling
>> set up in the first plot so as to include all the subsequent
>> graphs in entirety. (Even this strategy can be defeated if
>> the succesive graphs represent simulations of long-tailed
>> distributions. Unless of course I'm sufficiently alert to
>> set the RNG seed first as well ... )
>>
>> I'm not sufficiently acquainted with the internals of "plot"
>> and friends to anticipate the answer to this question; but,
>> anyway, the question is:
>>
>>   Is it feasible to include, as a parameter to "plot", "lines"
>>   and "points",
>>
>> rescale=FALSE
>>
>>   where this default value would maintain the existing behaviour
>>   of these functions, while setting
>>
>> rescale=TRUE
>>
>>   would allow each succeeding plot, adding graphs using "points"
>>   or "lines", to be rescaled (as in Matlab/Octave) so as to
>>   include the entirety of each successive graph?
>>
>> Best wishes to all,
>> Ted.
>>
>>
>> 
>> E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
>> Fax-to-email: +44 (0)870 094 0861
>> Date: 13-Jul-05   Time: 09:12:34
>> -- XFMail --
>>
>> __
>> R-help@stat.math.ethz.ch mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide! http://www.R-project.org/posting-
>> guide.html
>>
>
> --
> Robin Hankin
> Uncertainty Analyst
> National Oceanography Centre, Southampton
> European Way, Southampton SO14 3ZH, UK
>  tel  023-8059-7743
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics

[R] exact values for p-values

2005-07-13 Thread S.O. Nyangoma
Hi David, since I am looking at very extreme values, it appears I will 
need FMLIB. Is it an R library? If so, which version? How/where can I 
download it?

Regards.


- Original Message -
From: David Duffy <[EMAIL PROTECTED]>
Date: Wednesday, July 13, 2005 9:46 am
Subject: [R]  exact values for p-values

> > This is obtained from F =39540 with df1 = 1, df2 = 7025.
> > Suppose am interested in exact value such as
> >
> 
> If it were really necessary, you would have to move to multiple
> precision.  The gmp R package doesn't seem to yet cover this, but 
> FMLIB(TOMS814, DM Smith) is a multiple precision f90 library that 
does
> include the incomplete beta -- it allows one to say for 
> F(1,7025)=39540,P=6.31E-2886 (evaluated using 200 sign. digit 
> arithmetic).  Results from
> R's pf() agree quite closely with the FMLIB results for less 
> extreme values
> eg
> > print(pf(1500,1,7025,lower=FALSE), digits=20)
> [1] 1.3702710894887480597e-297
> 
> cf   1.37027108948832580215549799419452388134616261215463681945E-297
> 
> 
> | David Duffy (MBBS PhD) ,-
_|\
> | email: [EMAIL PROTECTED]  ph: INT+61+7+3362-0217 fax: -0101  /  
>   *
> | Epidemiology Unit, Queensland Institute of Medical Research   
> \_,-._/
> | 300 Herston Rd, Brisbane, Queensland 4029, Australia  GPG 
> 4D0B994A v
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-
> guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Kronecker matrix product

2005-07-13 Thread Robin Hankin
Hi

I want to write a little function that takes a matrix X of size
m-by-n, and a list L of length  "m",  whose elements are matrices all  
of which have
the same number of columns but possibly a different number of rows.

I then want to get a sort of dumbed-down kronecker product in which
X[i,j] is replaced by X[i,j]*L[[j]]

where L[[j]] is the j-th of the "m" matrices.  For example, if

X = matrix(c(1,5,0,2),2,2)

and

L[[1]] = matrix(1:4,2,2)
L[[2]] = matrix(c(1,1,1,1,1,10),ncol=2)

I want


     [,1] [,2] [,3] [,4]
[1,]    1    3    0    0
[2,]    2    4    0    0
[3,]    5    5    2    2
[4,]    5    5    2    2
[5,]    5   50    2   20


see how, for example, out[3:5,1:2] == 5*L[[2]], the "5" coming from X[2,1].

[
I can bind L together into a single matrix with

do.call("rbind",L)

and calculate the number of rows with

sapply(L,nrow)

but I don't see how this can help.
]






--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] delete a row from a matrix

2005-07-13 Thread Navarre Sabine
Hi,
I would like to know if it's possible to delete a row from a matrix?
 
> fig
     [,1] [,2] [,3] [,4]
[1,]    0    1  0.0  0.2
[2,]    0    1  0.2  0.8
[3,]    0    1  0.8  1.0
[4,]    0    1   NA   NA
[5,]    0    1   NA   NA

I would like to delete the 2 rows with NA!

Thanks
 
Sabine




[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Kronecker matrix product

2005-07-13 Thread Berwin A Turlach
> "RH" == Robin Hankin <[EMAIL PROTECTED]> writes:

RH> I want to write a little function that takes a matrix X of
RH> size m-by-n, and a list L of length "m", whose elements are
RH> matrices all of which have the same number of columns but
RH> possibly a different number of rows.

RH> I then want to get a sort of dumbed-down kronecker product in which
RH> X[i,j] is replaced by X[i,j]*L[[j]]

RH> where L[[j]] is the j-th of the "m" matrices.  For example, if

RH> X = matrix(c(1,5,0,2),2,2)

RH> and

RH> L[[1]] = matrix(1:4,2,2)
RH> L[[2]] = matrix(c(1,1,1,1,1,10),ncol=2)

RH> I want


RH> [,1] [,2] [,3] [,4]
RH> [1,]1300
RH> [2,]2400
RH> [3,]5522
RH> [4,]5522
RH> [5,]5   502   20

> tmp <- sapply(1:length(L), function(j, mat, list) kronecker(X[j,,drop=FALSE], 
> L[[j]]), mat=X, list=L)
> do.call("rbind", tmp)
     [,1] [,2] [,3] [,4]
[1,]    1    3    0    0
[2,]    2    4    0    0
[3,]    5    5    2    2
[4,]    5    5    2    2
[5,]    5   50    2   20
> 

HTH.

Cheers,

Berwin

== Full address 
Berwin A Turlach  Tel.: +61 (8) 6488 3338 (secr)   
School of Mathematics and Statistics+61 (8) 6488 3383 (self)  
The University of Western Australia   FAX : +61 (8) 6488 1028
35 Stirling Highway   
Crawley WA 6009e-mail: [EMAIL PROTECTED]
Australiahttp://www.maths.uwa.edu.au/~berwin

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] delete a row from a matrix

2005-07-13 Thread Peter Dalgaard
Navarre Sabine <[EMAIL PROTECTED]> writes:

> Hi,
> I would like to know if it's possible to delete a rox from a matrix?
>  
> > fig
>      [,1] [,2] [,3] [,4]
> [1,]    0    1  0.0  0.2
> [2,]    0    1  0.2  0.8
> [3,]    0    1  0.8  1.0
> [4,]    0    1   NA   NA
> [5,]    0    1   NA   NA
> 
> I would like to delete the 2 rows with NA!

fig <- fig[-c(4,5),]

or, more generally

fig <- fig[complete.cases(fig),] 

or, even more generally

fig <- fig[!apply(is.na(fig), 1, any),]
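
and, noted here for reference, an equivalent via na.omit:

fig <- na.omit(fig)    # same effect; the dropped row numbers are recorded in the "na.action" attribute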

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] nlme, MASS and geoRglm for spatial autocorrelation?

2005-07-13 Thread Beale, Colin
Hi.

I'm trying to perform what should be a reasonably basic analysis of some
spatial presence/absence data but am somewhat overwhelmed by the options
available and could do with a helpful pointer. My researches so far
indicate that if my data were normal, I would simply use gls() (in nlme)
and one of the various corSpatial functions (e.g. corSpher(), to be
analogous to a similar analysis in SAS) with form = ~ x+y (and a nugget if
appropriate). However, my data are binomial, so I need a different
approach. Using various packages I could define a mixed model (eg using
glmmPQL() in MASS) with similar correlation structure, but I seem to
need to define a random effect to use glmmPQL(), and I don't have any.
Could this requirement be switched off and still use the mixed model
approach? Alternatively, it may be possible to define the variance
appropriately in gls and use logits directly, but I'm not quite sure how
and suspect there's a more straightforward alternative. Looking at
geoRglm suggests there may be solutions here, but it seems like it might
be overkill for what is, at first appearance at least, not such a
difficult problem. Maybe I'm just being statistically naive, but I think
I'm looking for a function somewhere between gls() and glmmPQL() and
would be grateful for any pointers.

Thanks very much,

Colin Beale

...
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Kronecker matrix product

2005-07-13 Thread Berwin A Turlach
> "BT" == Berwin A Turlach <[EMAIL PROTECTED]> writes:

>> tmp <- sapply(1:length(L), function(j, mat, list) 
kronecker(X[j,,drop=FALSE], L[[j]]), mat=X, list=L)

Uups, should proof read more carefully before hitting the send
button.  This should be, of course:

tmp <- sapply(1:length(L),
  function(j, mat, list) kronecker(mat[j,,drop=FALSE], list[[j]]),
  mat=X, list=L)
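
The same computation can also be spelled with a plain lapply over the row
index (just a sketch, using Robin's X and L from the original post):

blocks <- lapply(seq(along = L),
                 function(j) kronecker(X[j, , drop = FALSE], L[[j]]))
out <- do.call("rbind", blocks)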

Cheers,

Berwin

== Full address 
Berwin A Turlach  Tel.: +61 (8) 6488 3338 (secr)   
School of Mathematics and Statistics+61 (8) 6488 3383 (self)  
The University of Western Australia   FAX : +61 (8) 6488 1028
35 Stirling Highway   
Crawley WA 6009e-mail: [EMAIL PROTECTED]
Australiahttp://www.maths.uwa.edu.au/~berwin

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] GARCH model using fSeries

2005-07-13 Thread Viljoen L <[EMAIL PROTECTED]>
I am trying to fit a GARCH model in fSeries, but up to now without
success. I downloaded the OxConsole software together with the
[EMAIL PROTECTED] 4.0 package and saved the oxl.exe and GarchOxModelling.ox files
in C:\\Ox\\bin\\oxl.exe and C:\\Ox\\lib\\GarchOxModelling.ox, as required.

 

My call in R is the following, with mintel being the time-series data:

garchOxFit(formula.mean = ~arma(0,0), formula.var = ~garch(1,1),
           series = mintel, cond.dist = c("gaussian"), include.mean = TRUE)

 

I receive the following error message and need help please.

Error in cat(list(...), file, sep, fill, labels, append) : 

argument 1 not yet handled by cat

 

Thanks

Helena

 

 


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Name for factor's levels with contr.sum

2005-07-13 Thread Ghislain Vieilledent
Good morning,

I used in R contr.sum for the contrast in a lme model:

> options(contrasts=c("contr.sum","contr.poly"))
> Septo5.lme<-lme(Septo~Variete+DateSemi,Data4.Iso,random=~1|LieuDit)
> intervals(Septo5.lme)$fixed
lower est. upper
(Intercept) 17.0644033 23.106110 29.147816
Variete1 9.5819873 17.335324 25.088661
Variete2 -3.3794907 6.816101 17.011692
Variete3 -0.5636915 8.452890 17.469472
Variete4 -22.8923812 -10.914912 1.062558
Variete5 -10.7152821 -1.865884 6.983515
Variete6 0.2743390 9.492175 18.710012
Variete7 -23.7943250 -15.070737 -6.347148
Variete8 -21.7310554 -12.380475 -3.029895
Variete9 -27.9782575 -17.480555 -6.982852
DateSemi1 -5.7903419 -1.547875 2.694592
DateSemi2 3.6571596 8.428417 13.199675
attr(,"label")
[1] "Fixed effects:"

How is it possible to obtain output with the names of my factor's levels, as 
with contr.treatment?

Thanks for your help.

-- 
Ghislain Vieilledent
30, rue Bernard Ortet 31 500 TOULOUSE
06 24 62 65 07

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Please help me.....

2005-07-13 Thread Uwe Ligges
Fernando Espíndola wrote:

> Hi user R,
> 
> I am trying to calculate the spectrum of two time series. But when I plot 
> a single series, the x-axis labels are in the range 0.1 to 0.6 (frequency), 
> whereas when I calculate the spectrum with the ts.union function, the x-axis 
> labels are in the range 1 to 6. I do not understand why the labels change, 
> or what the relationship is. Can somebody help me with this analysis?


Not so for me.
As the posting guide asks you to do: can you provide a reproducible 
example, please?

Uwe Ligges


> Thank for all
> 
> fdo
> 
> Fernando Espindola R.
> Division Investigacion Pesquera
> Instituto de Fomento Pesquero
> Blanco 839
> Valparaiso - CHILE
> fono: 32 - 322442
> [EMAIL PROTECTED]
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] delete a row from a matrix

2005-07-13 Thread ecoinfo
Hi,
May I ask a related question: how do I insert a row/column into a matrix?
 Thanks
Xiaohua

 On 13 Jul 2005 11:38:30 +0200, Peter Dalgaard <[EMAIL PROTECTED]> 
wrote: 
> 
> Navarre Sabine <[EMAIL PROTECTED]> writes:
> 
> > Hi,
> > I would like to know if it's possible to delete a rox from a matrix?
> >
> > > fig
> > [,1] [,2] [,3] [,4]
> > [1,] 0 1 0.0 0.2
> > [2,] 0 1 0.2 0.8
> > [3,] 0 1 0.8 1.0
> > [4,] 0 1 NA NA
> > [5,] 0 1 NA NA
> >
> > I would like to delete the 2 rows with NA!
> 
> fig <- fig[-c(4,5),]
> 
> or, more generally
> 
> fig <- fig[complete.cases(fig),]
> 
> or, even more generally
> 
> fig <- fig[!apply(is.na(fig), 1, any),]
> 
> --
> O__  Peter Dalgaard Øster Farimagsgade 5, Entr.B
> c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
> (*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
> ~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 



--

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] How to use the function "plot" as Matlab

2005-07-13 Thread Ted Harding
On 13-Jul-05 Prof Brian Ripley wrote:
> For most purposes it is easiest to use matplot() to plot superimposed 
> plots like this.  E.g.
> 
> x <- 0.1*(0:20)
> matplot(x, cbind(sin(x), cos(x)), "pl", pch=1)

This, and Robin's suggestion, are good practical solutions, especially
when only a few graphs (2 or 3 or ...) are involved. However, their
underlying principle is to accumulate auxiliary variables encapsulating
the graphs which will eventually be plotted.

However, once in a while I like to make a really messy graph of
superimposed sample paths of a simulated stochastic process, perhaps
with several dozen replications and many points (even 5000) along
each sample path. An example where this has a real practical point
is diffusion from the chimney stack of, say, an incinerator. The
resulting plot can give a good picture of the "average plume",
allowing the viewer to form an impression of the variation in
concentration along and on the fringes of the plume.

This is definitely a case where "dynamic rescaling" could save
hassle! Brian Ripley's suggestion involves first building a
matrix whose columns are the replications and rows the time-points,
and Robin Hankin's could be easily adapted to do the same,
though I think would involve a loop over columns and some very
long vectors.
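
For concreteness, a sketch of that matrix-of-paths approach with made-up
random walks (all names and numbers here are invented):

  set.seed(42)
  paths <- apply(matrix(rnorm(5000 * 50), 5000, 50), 2, cumsum)  # one column per replication
  matplot(paths, type = "l", lty = 1, col = "grey")              # all 50 sample paths at once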

How much easier it would be with dynamic scaling!

Best wishes,
Ted.

> On Wed, 13 Jul 2005, Robin Hankin wrote:
> 
>> Hi
>>
>> Ted makes a good point... matlab can dynamically rescale a plot in
>> response
>> to plot(...,add=TRUE) statements.
>>
>> For some reason which I do not understand, the rescaling issue is
>> only a problem
>> for me when working in "matlab mode".  It's not an issue when working
>> in "R mode"
>>
>> Ted pointed out that the following does not behave as intended:
>>
>>
>>>   x = 0.1*(0:20);
>>>   plot(x,sin(x))
>>>   lines(x,1.5*cos(x))
>>
>>
>> and presented an alternative method in which ylim was set by hand.  I
>> would suggest:
>>
>> x <- 0.1*(0:20)
>> y1 <- sin(x)
>> y2 <- 1.5*cos(x)
>>
>> plot(c(x,x),c(y1,y2),type="n")
>> lines(x,y1)
>> lines(x,y2)
>>
>> because this way, the axes are set by the plot() statement, but
>> nothing is plotted.
>>
>>
>> best wishes
>>
>> rksh
>>
>>
>>
>>
>>
>> On 13 Jul 2005, at 09:12, (Ted Harding) wrote:

>>>
>>> Although this is an over-worked query -- for which an answer, given
>>> that t="l" has been specified, is to use
>>>
>>>   plot(a,t="l",col="blue",ylim=c(0,10))
>>>   lines(b,t="l",col="red")
>>>
>>> there is a more interesting issue associated with it (given that
>>> Klebyn has come to it from a Matlab perspective).
>>>
>>> It's a long time since I used real Matlab, but I'll illustrate
>>> with octave which, in this respect, should be identical to Matlab.
>>>
>>> Octave:
>>>
>>> octave:1> x = 0.1*(0:20);
>>> octave:2> plot(x,sin(x))
>>>
>>> produces a graph of sin(x) with the y-axis scaled from 0 to 1.0
>>> Next:
>>>
>>> octave:3> hold on
>>> octave:4> plot(x,1.5*cos(x))
>>>
>>> superimposes a graph of 1.5*cos(x) with the y-axis automatically
>>> re-scaled from -1 to 1.5.
>>>
>>> This would not have happened in R with
>>>
>>>   x = 0.1*(0:20);
>>>   plot(x,sin(x))
>>>   lines(x,1.5*cos(x))
>>>
>>> where the 0 to 1.0 scaling of the first plot would be kept for
>>> the second, in which therefore part of the additional graph of
>>> 1.5*cos(x) would be "outside the box".
>>>
>>> No doubt like many others, I've been caught on the wrong foot
>>> by this more than a few times. The solution, of course (as
>>> illustrated in the reply to Klebyn above) is to anticipate
>>> what scaling you will need for all the graphs you intend to
>>> put on the same plot, and set up the scalings at the time
>>> of the first one using the options "xlim" and "ylim", e.g.:
>>>
>>>   x = 0.1*(0:20);
>>>   plot(x,sin(x),ylim=c(-1,1.5))
>>>   lines(x,1.5*cos(x))
>>>
>>> This is not always feasible, and indeed should not be expected
>>> to be feasible since part of the reason for using software
>>> like R in the first place is to compute what you do not know!
>>>
>>> Indeed, R will not allow you to use "xlim" or "ylim" once the
>>> first plot has been drawn.
>>>
>>> So in such cases I end up making a note (either on paper or,
>>> when I do really serious planning, in auxiliary variables)
>>> of the min's and max's for each graph, and then re-run the
>>> plotting commands with appropriate "xlim" and "ylim" scaling
>>> set up in the first plot so as to include all the subsequent
>>> graphs in entirety. (Even this strategy can be defeated if
>>> the succesive graphs represent simulations of long-tailed
>>> distributions. Unless of course I'm sufficiently alert to
>>> set the RNG seed first as well ... )
>>>
>>> I'm not sufficiently acquainted with the internals of "plot"
>>> and friends to anticipate the answer to this question; but,
>>> anyway, the question is:
>>>
>>>   Is it feasible to include, as a parameter to "plot", "lines"
>>>   and "points",
>>>
>>> rescale=FALSE
>>>
>>>   wher

[R] fitting Weibull distribution on observed percentiles

2005-07-13 Thread Claude Messiaen - Urc Necker
Hi , R Users

I'm trying to fit a Weibull distribution to observed percentiles using nls, but 
it doesn't work. Here is the code I use: is there something wrong?

# p corresponds to percentiles
# and q to the observed values
# the data are from the distribution of live births in France in 1998

ined1998 <- data.frame(p = c(0.01638, 0.49629, 0.99284),
                       q = c(18, 27, 41))

a0 <- 3
b0 <- ined1998$q[2]/gamma(1+1/a0)

dist_w1998 <- nls(ined1998$q ~ qweibull(ined1998$p, a0, b0),
                  start = c(a0, b0), data = ined1998)
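
As an aside, a hedged sketch of how such a call is usually set up: nls()
wants *named* starting values, and the formula can then refer to the columns
of `data` directly (whether it converges still depends on the data and the
starting values; the names shape/scale/fit below are invented):

  ined1998 <- data.frame(p = c(0.01638, 0.49629, 0.99284), q = c(18, 27, 41))
  fit <- nls(q ~ qweibull(p, shape, scale), data = ined1998,
             start = list(shape = 3, scale = 27/gamma(1 + 1/3)))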

I look forward to your reply.


C.M
URC NECKER 




[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: 答复: [R] fail in adding library in new version.

2005-07-13 Thread Duncan Murdoch
Ivy_Li wrote:
> Dear all,
> I really appreciate your help. I think I have a little advancement. ^_^
> Now I use the package.skeleton() function to create a template. I type:
>   f <- function(x,y) x+y
>   g <- function(x,y) x-y
>   d <- data.frame(a=1, b=2)
>   e <- rnorm(1000)
>   package.skeleton(list=c("f","g","d","e"), name="example")
> 
> in R. I know it will create a folder named "example" in the path of 
> "\R\rw2011\" I opened this folder, its format is similar as other library. 
> Then I modify it "DESCRIPTION" file:
>   Package: example
>   Version: 1.0-1
>   Date: 2005-07-09
>   Title: My first function
>   Author: Ivy <[EMAIL PROTECTED]>
>   Maintainer: Ivy <[EMAIL PROTECTED]>
>   Description: simple sum and subtract
>   License: GPL version 2 or later
>   Depends: R (>= 1.9), stats, graphics, utils
> 
> I don't know whether I should also modify the "README" file.
> When I enter the DOS environment, at the D:\> prompt, I type the 
> following:
>   cd Program Files\R\rw2011\
>   bin\R CMD install /example
> 
> Well, there appeared error:
>   -- Making package example 
> adding build stamp to DESCRIPTION
> installing R files
> installing data files
> installing man source files
> installing indices
> not zipping data
> installing help
> >>> Building/Updating help pages for package 'example'
>     Formats: text html latex example chm
>   d    text    html    latex   example    chm
>   e    text    html    latex   example    chm
>   f    text    html    latex   example    chm
>        missing link(s):  ~~fun~~
>   g    text    html    latex   example    chm
>        missing link(s):  ~~fun~~
>   hhc: not found
>   cp: cannot stat `D:/PROGRA~1/R/rw2011/example/chm/example.chm': No such 
> file or
>   directory
>   make[1]: *** [chm-example] Error 1
>   make: *** [pkg-example] Error 2
>   *** Installation of example failed ***
>   
>   Removing 'D:/PROGRA~1/R/rw2011/library/example'
> 
> That's it. I have to consult every R expert. Please help to solve this issue. 
> Thank you very much!

See the appendix "The Windows Toolset" in the R Installation and 
Administration manual.  You need to install those tools.

If you've done that, but decided not to use the Help Compiler (hhc), 
then you need to modify the MkRules file in RHOME/src/gnuwin32 to tell 
it not to try to build that kind of help.

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?

2005-07-13 Thread Prof Brian Ripley
You seem to want to model spatially correlated Bernoulli variables.
That's a difficult task, especially as these are Bernoulli and not 
binomial (n > 1).  With a much fuller description of the problem we may be 
able to help, but I at least have no idea of the aims of the analysis.

glmmPQL is designed for independent observations conditional on the 
random effects.

On Wed, 13 Jul 2005, Beale, Colin wrote:

> Hi.
>
> I'm trying to perform what should be a reasonably basic analysis of some
> spatial presence/absence data but am somewhat overwhelmed by the options
> available and could do with a helpful pointer. My researches so far
> indicate that if my data were normal, I would simply use gls() (in nlme)
> and one of the various corSpatial functions (eg. corSpher() to be
> analagous to similar analysis in SAS) with form = ~ x+y (and a nugget if
> appropriate). However, my data are binomial, so I need a different
> approach. Using various packages I could define a mixed model (eg using
> glmmPQL() in MASS) with similar correlation structure, but I seem to
> need to define a random effect to use glmmPQL(), and I don't have any.
> Could this requirement be switched off and still use the mixed model
> approach? Alternatively, it may be possible to define the variance
> appropriately in gls and use logits directly, but I'm not quite sure how
> and suspect there's a more straight-forward alternative. Looking at
> geoRglm suggests there may be solutions here, but it seems like it might
> be overkill for what is, at first appearance at least, not such a
> difficult problem. Maybe I'm just being statistically naive, but I think
> I'm looking for a function somewhere between gls() and glmmPQL() and
> would be grateful for any pointers.
>
> Thanks very much,
>
> Colin Beale
>
> ...
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] adding a factor column based on levels of another factor

2005-07-13 Thread Karen Kotschy
Hello

Thanks for the replies. Merge was what I needed! But Christoph, I will
keep your email. What you described is something else I have been
wondering how to do in R...

Thanks again
Karen

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] delete a row from a matrix

2005-07-13 Thread Adaikalavan Ramasamy
If you want to append to the first/last column or row, then use cbind or
rbind. It is a little tricky if you want to insert a row in the middle
somewhere. See insertRow in micEcon package.
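
A bare-bones way to do it by hand, for reference (m, v and i are hypothetical
names: insert the row vector v after row i of matrix m, with 1 <= i <= nrow(m)):

m2 <- rbind(m[1:i, , drop = FALSE], v, m[-(1:i), , drop = FALSE])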

Regards, Adai


On Wed, 2005-07-13 at 12:08 +0200, ecoinfo wrote:
> Hi,
> May I ask a related question, how to insert a row/column to a matrix?
>  Thanks
> Xiaohua
> 
>  On 13 Jul 2005 11:38:30 +0200, Peter Dalgaard <[EMAIL PROTECTED]> 
> wrote: 
> > 
> > Navarre Sabine <[EMAIL PROTECTED]> writes:
> > 
> > > Hi,
> > > I would like to know if it's possible to delete a rox from a matrix?
> > >
> > > > fig
> > > [,1] [,2] [,3] [,4]
> > > [1,] 0 1 0.0 0.2
> > > [2,] 0 1 0.2 0.8
> > > [3,] 0 1 0.8 1.0
> > > [4,] 0 1 NA NA
> > > [5,] 0 1 NA NA
> > >
> > > I would like to delete the 2 rows with NA!
> > 
> > fig <- fig[-c(4,5),]
> > 
> > or, more generally
> > 
> > fig <- fig[complete.cases(fig),]
> > 
> > or, even more generally
> > 
> > fig <- fig[!apply(is.na(fig), 1, any),]
> > 
> > --
> > O__  Peter Dalgaard Øster Farimagsgade 5, Entr.B
> > c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
> > (*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
> > ~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
> > 
> 
> 
> 
> --
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use the function "plot" as Matlab

2005-07-13 Thread Robin Hankin

On 13 Jul 2005, at 11:01, (Ted Harding) wrote:

> On 13-Jul-05 Prof Brian Ripley wrote:
>
>> For most purposes it is easiest to use matplot() to plot superimposed
>> plots like this.  E.g.
>>
>> x <- 0.1*(0:20)
>> matplot(x, cbind(sin(x), cos(x)), "pl", pch=1)
>>
>
> This, and Robin's suggestion, are good practical solutions especially
> when only a few graphs (2 or 3 or ... ) are involved. However, their
> undelying principle is to accumulate auxiliary variables encapsulating
> the graphs which will eventually be plotted.


Ted makes a good point here.  I would find this quite useful for EDA
(exploratory data analysis) work, where one often needs to add new lines
to a plot, one at a time, in an ad hoc manner, just to "see what happens".

Would adding such functionality (perhaps via a new Boolean argument to
plot(), "rescaling", defaulting to FALSE, that enabled dynamic rescaling
when plot(..., add=TRUE) is executed) require quite a lot of low-level work?

best wishes

Robin



--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Memory question

2005-07-13 Thread Kenneth Roy Cabrera Torres
Hi R users and developers:

I want to know how I can save memory in R, for example by:
  - saving a matrix to disk,
  - using the matrix again (changing its values),
  - saving the matrix to disk again, in a different file.

The idea is that I have a process that generates several
matrices, but if I keep them all in memory it will overflow.

How can I save them in different files, so that I use the same
amount of memory for each processed matrix?
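
A minimal sketch of that pattern (the file names and sizes are made up):

  for (i in 1:10) {
      m <- matrix(rnorm(1000 * 1000), 1000)                    # generate (or update) the matrix
      save(m, file = paste("matrix_", i, ".RData", sep = ""))  # one file per matrix
      rm(m)                                                    # keep only one copy in memory
  }
  # load("matrix_3.RData") later restores that particular matrix as `m`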

Thank you for your help.

-- 
Kenneth Roy Cabrera Torres
Universidad Nacional de Colombia
Sede Medellin
Tel 430 9351
Cel 315 504 9339

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?

2005-07-13 Thread Beale, Colin
My data are indeed Bernoulli and not binomial, as I indicated. The
dataset consists of points (grid refs) that are either locations of
events (animals) or random points (with no animal present). For each
point I have a suite of environmental covariates describing the habitat
at this point. I was anticipating some sort of function that could run:

function(present ~ env1 + env2 + env3 + x + y, correlation =
corSpher(form=~x+y), family = binomial)

where env1 to env3 are the habitat covariates and x & y the grid refs. If
my data were normal, I understand I would use gls() with exactly this,
but drop the family argument. As my data are Bernoulli this is
clearly not possible, but I was hoping the analysis might be analogous.
The eventual aim is first to understand which environmental covariates
are important in determining presence, and then to use habitat maps to
identify the areas expected to be most important.
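
One workaround that is sometimes used (offered only as a hedged sketch; dat
and dummy are invented names) is to give glmmPQL() a constant grouping
factor, so that the spatial correlation structure can still be specified:

library(MASS)                                # glmmPQL
library(nlme)                                # corExp
dat$dummy <- factor(1)                       # a single-level "group"
fit <- glmmPQL(present ~ env1 + env2 + env3 + x + y, random = ~ 1 | dummy,
               correlation = corExp(form = ~ x + y), family = binomial,
               data = dat)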

Colin

-Original Message-
From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] 
Sent: 13 July 2005 11:30
To: Beale, Colin
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?

You seem to want to model spatially correlated bernoulli variables.
That's a difficult task, especially as these are bernoulli and not
binomial(n>1).  With a much fuller description of the problem we may be
able to help, but I at least have no idea of the aims of the analysis.

glmmPQL is designed for independent observations conditional on the
random effects.

On Wed, 13 Jul 2005, Beale, Colin wrote:

> Hi.
>
> I'm trying to perform what should be a reasonably basic analysis of 
> some spatial presence/absence data but am somewhat overwhelmed by the 
> options available and could do with a helpful pointer. My researches 
> so far indicate that if my data were normal, I would simply use gls() 
> (in nlme) and one of the various corSpatial functions (eg. corSpher() 
> to be analagous to similar analysis in SAS) with form = ~ x+y (and a 
> nugget if appropriate). However, my data are binomial, so I need a 
> different approach. Using various packages I could define a mixed 
> model (eg using
> glmmPQL() in MASS) with similar correlation structure, but I seem to 
> need to define a random effect to use glmmPQL(), and I don't have any.
> Could this requirement be switched off and still use the mixed model 
> approach? Alternatively, it may be possible to define the variance 
> appropriately in gls and use logits directly, but I'm not quite sure 
> how and suspect there's a more straight-forward alternative. Looking 
> at geoRglm suggests there may be solutions here, but it seems like it 
> might be overkill for what is, at first appearance at least, not such 
> a difficult problem. Maybe I'm just being statistically naive, but I 
> think I'm looking for a function somewhere between gls() and glmmPQL()

> and would be grateful for any pointers.
>
> Thanks very much,
>
> Colin Beale
>

...

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use the function "plot" as Matlab

2005-07-13 Thread Peter Dalgaard
(Ted Harding) <[EMAIL PROTECTED]> writes:

> 
> This is definitely a case where "dynamic rescaling" could save
> hassle! Brian Ripley's suggestion involves first building a
> matrix whose columns are the replications and rows the time-points,
> and Robin Hankin's could be easily adapted to do the same,
> though I think would involve a loop over columns and some very
> long vectors.
> 
> How much easier it would be with dynamic scaling!

Cue grid graphics... (and Paul's new book)

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] finalize objects

2005-07-13 Thread mkondrin
Hello!
How do I set a function for a whole R class that is executed when an object 
is no longer referenced from R and garbage collection takes place? I need 
the function to apply to the whole class (call it "someRClass"), something 
like someRClass.on.finalize <- function(...){...}. I have found the 
reg.finalizer function in the manuals, but it applies to a class 
instance, which is not what I want.
What is the solution?
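
One possible pattern, offered as a sketch rather than a definitive answer:
register the finalizer inside the class's generator function, so that every
instance of "someRClass" picks it up automatically (the generator name is
invented here):

new.someRClass <- function(...) {
    self <- new.env()                 # instance data kept in an environment
    class(self) <- "someRClass"
    reg.finalizer(self, function(e) cat("someRClass instance finalized\n"))
    self
}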

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Misbehaviour of DSE

2005-07-13 Thread Ajay Narottam Shah
On Mon, Jul 11, 2005 at 08:27:40AM -0700, Rob J Goedman wrote:
> Ajay,
> 
> After installing both setRNG (2004.4-1, source or binary) and dse  
> (2005.6-1, source only), it works fine.

Thanks! :-) Now dse1 works, but I get:

> library(dse2)
Warning message:
replacing previous import: acf in: namespaceImportFrom(self, asNamespace(ns)) 

Should I worry?

-- 
Ajay Shah   Consultant
[EMAIL PROTECTED]  Department of Economic Affairs
http://www.mayin.org/ajayshah   Ministry of Finance, New Delhi

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] testing for significance in random-effect factors using lmer

2005-07-13 Thread Douglas Bates
On 7/12/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi, I would like to know whether it is possible to obtain a value of
> significance for random effects when aplying the lme or related
> functions. The default output in R is just a variance and standard
> deviation measurement.
> 
> I feel it would be possible to obtain the significance of these random
> effects by comparing models with and without these effects. However,
> I'm not used to perform this in R and I would thank any easy guide or
> example.

It is possible to do a likelihood ratio test on two fitted lmer models
with different specifications of the random effects.  The p-value for
such a test is calculated using the chi-squared distribution from the
asymptotic theory which does not apply in most such comparisons
because the parameter for the null hypothesis is on the boundary of
the parameter region.  The p-value shown will be conservative (that
is, it is an upper bound on the true p-value).

For example

> library(mlmRev)
Loading required package: lme4
Loading required package: Matrix
Loading required package: lattice
> options(show.signif.stars = FALSE)
> (fm1 <- lmer(normexam ~ standLRT + sex + type + (1|school), Exam))
Linear mixed-effects model fit by REML
Formula: normexam ~ standLRT + sex + type + (1 | school) 
   Data: Exam 
      AIC      BIC    logLik MLdeviance REMLdeviance
 9357.384 9395.237 -4672.692   9325.485     9345.384
Random effects:
 Groups   Name        Variance Std.Dev.
 school   (Intercept) 0.084367 0.29046
 Residual             0.562529 0.75002
# of obs: 4059, groups: school, 65

Fixed effects:
   Estimate  Std. Error   DF t value  Pr(>|t|)
(Intercept) -1.7233e-03  5.4982e-02 4055 -0.0313   0.97500
standLRT 5.5983e-01  1.2448e-02 4055 44.9725 < 2.2e-16
sexM-1.6596e-01  3.2812e-02 4055 -5.0579 4.426e-07
typeSngl 1.6546e-01  7.7428e-02 4055  2.1369   0.03266
> (fm2 <- lmer(normexam ~ standLRT + sex + type + (standLRT|school), Exam))
Linear mixed-effects model fit by REML
Formula: normexam ~ standLRT + sex + type + (standLRT | school) 
   Data: Exam 
      AIC      BIC    logLik MLdeviance REMLdeviance
 9316.573 9367.043 -4650.287    9281.17     9300.573
Random effects:
 Groups   Name        Variance Std.Dev. Corr
 school   (Intercept) 0.082477 0.28719
          standLRT    0.015081 0.12280  0.579
 Residual             0.550289 0.74181
# of obs: 4059, groups: school, 65

Fixed effects:
             Estimate Std. Error   DF t value  Pr(>|t|)
(Intercept) -0.020727   0.052548 4055 -0.3944   0.69327
standLRT     0.554101   0.020117 4055 27.5433 < 2.2e-16
sexM        -0.167971   0.032281 4055 -5.2034 2.054e-07
typeSngl     0.176390   0.069587 4055  2.5348   0.01129
> anova(fm2, fm1)
Data: Exam
Models:
fm1: normexam ~ standLRT + sex + type + (1 | school)
fm2: normexam ~ standLRT + sex + type + (standLRT | school)
    Df    AIC    BIC  logLik  Chisq Chi Df Pr(>Chisq)
fm1  6 9357.4 9395.2 -4672.7
fm2  8 9316.6 9367.0 -4650.3 44.811      2  1.859e-10

At present the anova method for lmer objects does not allow comparison
with models that have no fixed effects.  Writing that code is on my
ToDo list but not currently at the top.  It is possible to use anova
to compare models fit by lme with models fit by lm (with the same
caveat about the calculated p-value being conservative).

An interesting alternative approach is to use Metropolis-Hastings
sampling for a MCMC chain based on the fitted model and create HPD
intervals from such a sample.  I have a prototype function to do this
for generalized linear mixed models in versions 0.97-3 and later of
the Matrix  package (currently hidden in the namespace and not
documented but the interested user can look at Matrix:::glmmMCMC).  It
happens that I developed the generalized linear version of this before
developing a version for linear mixed models but the lmm version will
be forthcoming.


> 
> Thanks.
> --
> 
> Eduardo Moisés García Roger
> 
> Institut Cavanilles de Biodiversitat i Biologia
> Evolutiva - ICBIBE.
> Tel. +34963543664
> Fax  +34963543670
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problems with MNP

2005-07-13 Thread Joan Serra
Hi all,

Does anybody have a hint on what may be going wrong in this R code? I 
mimic the sample code from the MNP developers but I seem unable to get the 
choice specific variables right.

Thanks,
Joan Serra

> rm(list=ls())
> library(foreign)
> small<-read.spss("small.sav")
Warning message:
small.sav: Unrecognized record type 7, subtype 13 encountered in system
file.
> library(MNP)
MNP: R Package for Fitting the Multinomial Probit Models
Version: 1.3-1
URL: http://www.princeton.edu/~kimai/research/MNP.html
>
>  res1 <- mnp(PREVOTE3 ~ 1, choiceX = list(1=UCLC, 2=UDLC, 3=UPLC),
Error: syntax error
>  cXnames = "ut", data = small, n.draws = 500, burnin =
100,
Error: syntax error
>  verbose = TRUE)
Error: syntax error
>
> # another try giving arbitrary names to the values of the dependent
variable
>
>  res1 <- mnp(PREVOTE3 ~ 1, choiceX = list(Clinton=UCLC, Dole=UDLC,
Perot=UPLC),
+  cXnames = "ut", data = small, n.draws = 500, burnin =
100,
+  verbose = TRUE)

The base category is `1'.

The total number of alternatives is 3.

Error in xmatrix.mnp(formula, data = eval.parent(data), choiceX =
call$choiceX,  :
 Error: Invalid input for `choiceX.'
  Some variables do not exist.
>
> # another try using a string type dependent variable
>
>  res1 <- mnp(PRVOTE3 ~ 1, choiceX = list(Clinton=UCLC, Dole=UDLC,
Perot=UPLC),
+  cXnames = "ut", data = small, n.draws = 500, burnin =
100,
+  verbose = TRUE)
Error in model.frame(formula, rownames, variables, varnames, extras,
extranames,  :
 invalid variable type
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Is there a working XML parser for the windows R Version 2.0.1

2005-07-13 Thread Soren Wilkening
Dear all

the regular XML package does not work correctly with the R 2.0.1 windows 
version.
Can anybody indicate a suitable alternative ?
I need to dynamically read, parse and process a HTML table in R that is 
available at a certain url.

Regards
Soren Wilkening

-- 
CENSIX Consulting
[EMAIL PROTECTED]

http://www.censix.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Is there a working XML parser for the windows R Version 2.0.1

2005-07-13 Thread Gabor Grothendieck
On 7/13/05, Soren Wilkening <[EMAIL PROTECTED]> wrote:
> Dear all
> 
> the regular XML package does not work correctly with the R 2.0.1 windows
> version.
> Can anybody indicate a suitable alternative ?
> I need to dynamically read, parse and process a HTML table in R that is
> available at a certain url.

Is this a one-time transfer or does the information change and you have
to do it completely automatically on a repeated basis?

In the first case, select the table, copy it to the clipboard, paste it into 
Excel and then transfer it to R from there.

In the second case you could use RDCOMClient or rcom packages to 
get it via Internet Explorer -- although that would be more involved and, 
in particular, requires that you learn the IE COM interface.  There may 
or may not be some discussion in the rcom list archives:
   http://mailman.csd.univie.ac.at/pipermail/rcom-l/
on this approach.
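
For the first case, one simple way to do the Excel-to-R step on Windows is
via the clipboard (a sketch; copy the range in Excel first):

tab <- read.delim("clipboard", header = TRUE)  # reads the copied Excel range
str(tab)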

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Is there a working XML parser for the windows R Version 2 .0.1

2005-07-13 Thread Tuszynski, Jaroslaw W.
I do not know whether the current XML package is supposed to work with "windows R Version
2.0.1"; however, the current version of XML works fine with the current version of R
(2.1.1). Also, the version of XML that was available when R 2.0.1 was current
worked just fine as well. So the answer might be to update your R version.

My System is:
- R version: R 2.1.1
- Operating System: Win XP
- Compiler: mingw32-gcc-3.4.2

Jarek
\===

 Jarek Tuszynski, PhD.   o / \ 
 Science Applications International Corporation  <\__,|  
 (703) 676-4192   ">   \
 [EMAIL PROTECTED] `\



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Soren Wilkening
Sent: Wednesday, July 13, 2005 9:16 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Is there a working XML parser for the windows R Version 2.0.1

Dear all

the regular XML package does not work correctly with the R 2.0.1 windows
version.
Can anybody indicate a suitable alternative ?
I need to dynamically read, parse and process a HTML table in R that is
available at a certain url.

Regards
Soren Wilkening

--
CENSIX Consulting
[EMAIL PROTECTED]

http://www.censix.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] unexpected par('pin') behaviour

2005-07-13 Thread joerg van den hoff
hi everybody,

I noticed the following: in one of my scripts 'layout' is used to 
generate a (approx. square) grid of variable dimensions (depending on 
no. of input files). if the no. of subplots (grid cells) becomes 
moderately large  (say > 9) I use a construct like

   ###layout grid computation and set up occurs here###
...
   opar <- par(no.readonly = T);
   on.exit(par(opar))
   par(mar=c(4.1, 4.1, 1.1, .1))
   ###plotting occurs here
...

to reduce the figure margins to achieve a more compact display. apart 
from 'mar' no other par() setting is modified.

this works fine until the total number of subplots becomes too large 
("large" depending on the current size of the X11() graphics device 
window, e.g. 7 x 6 subplots for the default size fo x11()).

I then get the error message (only _after_ all plots are correctly 
displayed, i.e. obviously during execution of the above on.exit() call)

Error in par(opar) :
invalid value specified for graphics parameter "pin"


and par("pin") yields:

[1]  0.34864 -0.21419


which indeed is invalid (negative 2nd component).

I'm aware of this note from ?par:

The effect of restoring all the (settable) graphics parameters as
  in the examples is hard to predict if the device has been resized.
  Several of them are attempting to set the same things in different
  ways, and those last in the alphabet will win.  In particular, the
  settings of 'mai', 'mar', 'pin', 'plt' and 'pty' interact, as do
  the outer margin settings, the figure layout and figure region
  size.


but my problem occurs without any resizing of the x11() window prior to 
resetting par to par(opar).

any ideas, what is going on?

platform powerpc-apple-darwin7.9.0
arch powerpc
os   darwin7.9.0
system   powerpc, darwin7.9.0
status   Patched
major2
minor1.0
year 2005
month05
day  12
language R

regards,

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Rd.sty, Sweave, tex4ht

2005-07-13 Thread Paulo Justiniano Ribeiro Jr
Hi
I'm using Sweave with tex4ht to generate xhtml documents.
However there seems to be a problem with \Link defined in
Rd.sty, since this is also defined in tex4ht.sty

My workaround was to replace the lines with
\newcommand{\Link} by \providecommand{\Link}
in Rd.sty

I'm therefore wondering whether this is the best solution, and if so
whether this could be changed in the original Rd.sty shipped with R.
A possible inconvenience is that \providecommand seems to be specific to
LaTeX and apparently does not work with TeX

Thanks
P.J.

Paulo Justiniano Ribeiro Jr
LEG (Laboratório de Estatística e Geoinformação)
Departamento de Estatística
Universidade Federal do Paraná
Caixa Postal 19.081
CEP 81.531-990
Curitiba, PR  -  Brasil
Tel: (+55) 41 3361 3573
Fax: (+55) 41 3361 3141
e-mail: [EMAIL PROTECTED]
http://www.est.ufpr.br/~paulojus

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] texture in barplots?

2005-07-13 Thread Adrian Dusa

Dear R list,

For some reason I am unable to access either search.r-project.org or 
http://finzi.psych.upenn.edu/, so I cannot search the archives for a possible 
answer (I Googled for this but didn't find anything).

Is it possible to draw barplots using a texture instead of colors, for a black 
and white printer?

TIA,
Adrian

-- 
Adrian Dusa
Arhiva Romana de Date Sociale
Bd. Schitu Magureanu nr.1
Tel./Fax: +40 21 3126618 \
  +40 21 3120210 / int.101


-- 
This message was scanned for spam and viruses by BitDefender.
For more information please visit http://linux.bitdefender.com/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Random fractal generator in R ?

2005-07-13 Thread Heinz Schild
Does one of the packages of R include functions to generate random  
fractals as for instance outlined in http://classes.yale.edu/fractals ?
Heinz Schild

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] texture in barplots?

2005-07-13 Thread Knut Krueger


Adrian Dusa schrieb:

>Is it possible to draw barplots using a texture instead of colors, for a black 
>and white printer?
>
>  
>
  barplot(height,.,density=c(4,6,8,10)  ...)

for each bar one number - this example is for a barplot with 4 bars.
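
A minimal runnable version of that suggestion, with made-up heights:

height <- c(3, 5, 2, 7)
barplot(height, density = c(4, 6, 8, 10),
        names.arg = c("A", "B", "C", "D"))  # hatched fill prints well in B/W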

with regards
Knut Krueger
http://www.biostatistic.de

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] any reference to get started clustering

2005-07-13 Thread Baoqiang Cao
Dear All,

  I have just started using the long-awaited R; my focus will be
doing clustering on microarray data. I just wonder, can anyone
point me to references that would help conquer the steep learning curve?
Thanks!

Best regards,
 Baoqiang Cao

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] texture in barplots?

2005-07-13 Thread Adrian Dusa
On Wednesday 13 July 2005 17:36, Knut Krueger wrote:
> Adrian Dusa schrieb:
> >Is it possible to draw barplots using a texture instead of colors, for a
> > black and white printer?
>
>   barplot(height,.,density=c(4,6,8,10)  ...)
>
> for each bar one number - this example is for a barplot with 4 bars.
>
> with regards
> Knut Krueger
> http://www.biostatistic.de

Thank you, I read about density but they only seem to draw diagonal lines 
(differing in the number of lines per inch).
I am looking for different *types* of texture (i.e. maybe I could reverse the 
shading lines, or cross-lines or something like that).

All the best,
Adrian

-- 
Adrian Dusa
Arhiva Romana de Date Sociale
Bd. Schitu Magureanu nr.1
Tel./Fax: +40 21 3126618 \
  +40 21 3120210 / int.101


-- 
This message was scanned for spam and viruses by BitDefender.
For more information please visit http://linux.bitdefender.com/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?

2005-07-13 Thread Ruben Roa
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Beale, Colin
> Sent: 13 July 2005 10:15
> To: Prof Brian Ripley
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?
> 
> 
> My data are indeed bernoulli and not binomial, as I indicated. The
> dataset consists of points (grid refs) that are either locations of
> events (animals) or random points (with no animal present). For each
> point I have a suite of environmental covariates describing 
> the habitat at this point. I was anticipating some sort of function that 
> could run:
> 
> function(present ~ env1 + env2 + env3 + x + y, correlation =
> corSpher(form=~x+y), family = binomial)
> 
> where env1 to env3 are the habitat covariates, x & y the grid refs. If
> my data were normal, I undertand I would use gls() with exactly this,
> but drop the family requirement. As my data are bernoulli this is
> clearly not possible, but I was hoping the analysis may be analagous?
> The eventual aim is to firstly understand which environmental 
> covariates are important in determining presence and then to use habitat maps 
> to
> identify the areas expected to be most important.

This could be done with geoRglm. I did something similar last week, but without
covariates, only the spatial coordinates (i.e. my spatial process had 
expectation 
equal to a constant). If you are willing to sacrifice some spatial resolution 
you 
can create cells in your spatial data (say 100 m x 100 m) and in each cell 
count 
the number of successes in observing your spatial process and the number of 
trials. 
This will be a binomial problem and it seems to me to be the spatial equivalent 
of 
logistic regression where the predictor continuous variable is structured in 
bins 
and then events are counted in those bins. You can move to the R-sig-geo list
if you have questions about geoRglm
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
Btw, this can also be done in SAS using the glimmix macro.
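
A minimal sketch of the binning step described above (object and column
names are made up, not taken from the original posting):

## fake point data: coordinates in metres plus a 0/1 presence indicator
dat <- data.frame(x = runif(200, 0, 1000), y = runif(200, 0, 1000),
                  present = rbinom(200, 1, 0.3))
cellsize <- 100                                 # 100 m x 100 m cells
cell <- paste(floor(dat$x / cellsize), floor(dat$y / cellsize))
trials    <- tapply(dat$present, cell, length)  # points per cell
successes <- tapply(dat$present, cell, sum)     # presences per cell
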
Ruben

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] any reference to get started clustering

2005-07-13 Thread Adaikalavan Ramasamy
Welcome to R. The learning curve is well worth the benefits.

If you are used to Eisen clustering and other fancy software for
clustering, then you might be a little disappointed with R's ability to
cluster thousands of genes. But then again, clustering is an
exploratory tool and I see no reason why it should be the only analysis
that some papers on microarrays seem to focus on. My own bias aside,
there are functions called heatmap and hclust that might be useful.

You might want to check out the documentation and workshop sections of
BioConductor (http://www.bioconductor.org/), which has R packages
designed for the analysis of genomic data.

But you should definitely try to read the Introduction to R first
http://cran.r-project.org/doc/manuals/R-intro.html and other documents.

Regards, Adai



On Wed, 2005-07-13 at 10:49 -0400, Baoqiang Cao wrote:
> Dear All,
> 
>   Just start to use the long expected R, my focus will be
> doing clustering on microarray data, just wonder, anyone can
> show me any references to conquer the steep learning curve?
> Thanks!
> 
> Best regards,
>  Baoqiang Cao
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R and stats courses

2005-07-13 Thread Highland Statistics Ltd.


Apologies for cross-posting

-- Final call. There are still 3 places available on each course  --

We would like to announce three statistics courses in and around Aberdeen, UK
Course 1: Regression, GLM, GAM, mixed modelling and tree models
Course 2: Multivariate analysis and multivariate time series analysis
Course 3: An introduction to R



Various modules are part of a EU MSc and UK MSc.

Course 1:
When: Monday 25 July until Friday 29 July 2005
Where: Scottish Agricultural College (SAC), Aberdeen, UK.
Course: "Analysing biological and environmental data using univariate methods".


Course 2:
When: Monday 1 August until Friday 5 August 2005
Where: Newburgh.
Course: "Analysing biological and environmental data using multivariate 
analysis and multivariate time series analysis


Course 3:
"An introduction to R".
When: 29-31 August 2005 (Monday-Wednesday).
Location: The Ythan hotel in Newburgh.
Host: Organised by Highland Statistics Ltd.


Information and registration:  www.brodgar.com/statscourse.htm

Kind regards,

Alain Zuur




Dr. Alain F. Zuur
Highland Statistics Ltd.
6 Laverock road
UK - AB41 6FN Newburgh

Tel: 0044 1358 788177
Email: [EMAIL PROTECTED]

Our statistics courses:
1. "Analysing biological and environmental data using univariate and 
multivariate methods".
2. "Analysing biological and environmental data using univariate methods"
3. "Analysing biological and environmental data using multivariate analysis 
and multivariate time series analysis"
4. "An introduction to R"

Brodgar: Software for univariate and multivariate analysis and multivariate 
time series analysis
Brodgar complies with R GNU GPL license

Statistical consultancy, courses, data analysis and software

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] texture in barplots?

2005-07-13 Thread Knut Krueger


Knut Krueger schrieb:

>Adrian Dusa schrieb:
>
>  
>
>>Is it possible to draw barplots using a texture instead of colors, for a 
>>black 
>>and white printer?
>>
>> 
>>
>>
>>
>  barplot(height,.,density=c(4,6,8,10)  ...)
>
>for each bar one number - this example is for a barplot with 4 bars.
>
>  
>
forgot something
you could also set the angle and the color of the lines
 
barplot(height, ., col=c("blue","blue","blue","green"), density=c(4,6,8,10), angle=c(15,30,60,90), ...)

with regards
Knut Krueger
http://www.biostatistic.de


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Name for factor's levels with contr.sum

2005-07-13 Thread Christoph Buser
Dear Ghislain

I do not know a general elegant solution, but for some
applications the following example may be helpful:

## Artificial data for demonstration: group is fixed, species is random 
dat <- data.frame(group = c(rep("A",20),rep("B",17),rep("C",24)),
  species = c(rep("sp1", 4), rep("sp2",5),   rep("sp3",5),
rep("sp4",6),  rep("sp5",2),  rep("sp6",5),  rep("sp7",3),
rep("sp8",3), rep("sp9",4), rep("sp10",6),  rep("sp11",6),
rep("sp12",6), rep("sp13",6)),
  area = rnorm(61))

## You can attach a contrast at your fixed factor of interest "group"
## Create the contrast you like to test (in our case contr.sum for 3
## levels)
mat <- contr.sum(3)
## You can add the names you want to see in the output
## Be careful that you give the correct names to the corresponding
## columns. Otherwise there is a big danger of misinterpretation.
colnames(mat) <- c(": A against rest", ": B against rest")
## Attach the contrast to your factor "group"
dat[,"group"] <- C(dat[,"group"],mat)
## Now calculate the lme
library(nlme)
reg.lme <- lme(area ~ group, data = dat, random = ~ 1|species)
summary(reg.lme)

Maybe someone has a better idea how to do it generally.

Hope this helps

Christoph Buser

--
Christoph Buser <[EMAIL PROTECTED]>
Seminar fuer Statistik, LEO C13
ETH (Federal Inst. Technology)  8092 Zurich  SWITZERLAND
phone: x-41-44-632-4673 fax: 632-1228
http://stat.ethz.ch/~buser/
--


Ghislain Vieilledent writes:
 > Good morning,
 > 
 > I used in R contr.sum for the contrast in a lme model:
 > 
 > > options(contrasts=c("contr.sum","contr.poly"))
 > > Septo5.lme<-lme(Septo~Variete+DateSemi,Data4.Iso,random=~1|LieuDit)
 > > intervals(Septo5.lme)$fixed
 > lower est. upper
 > (Intercept) 17.0644033 23.106110 29.147816
 > Variete1 9.5819873 17.335324 25.088661
 > Variete2 -3.3794907 6.816101 17.011692
 > Variete3 -0.5636915 8.452890 17.469472
 > Variete4 -22.8923812 -10.914912 1.062558
 > Variete5 -10.7152821 -1.865884 6.983515
 > Variete6 0.2743390 9.492175 18.710012
 > Variete7 -23.7943250 -15.070737 -6.347148
 > Variete8 -21.7310554 -12.380475 -3.029895
 > Variete9 -27.9782575 -17.480555 -6.982852
 > DateSemi1 -5.7903419 -1.547875 2.694592
 > DateSemi2 3.6571596 8.428417 13.199675
 > attr(,"label")
 > [1] "Fixed effects:"
 > 
 > How is it possible to obtain a return with the name of my factor's levels as 
 > with contr.treatment ?
 > 
 > Thanks for you help.
 > 
 > -- 
 > Ghislain Vieilledent
 > 30, rue Bernard Ortet 31 500 TOULOUSE
 > 06 24 62 65 07
 > 
 >  [[alternative HTML version deleted]]
 > 
 > __
 > R-help@stat.math.ethz.ch mailing list
 > https://stat.ethz.ch/mailman/listinfo/r-help
 > PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] High resolution plots

2005-07-13 Thread Luis Tercero
Dear R-help community,

would any of you have a (preferably simple) example of a 
presentation-quality .png plot, i.e. one that looks like the .eps plots 
generated by R?  I am working with R 2.0.1 in WindowsXP and am having 
similar problems as Knut Krueger in printing high-quality plots.  I have 
looked at the help file and examples therein as well as others I have 
been able to find online but to no avail.  After many many tries I have 
to concede I cannot figure it out.

I would be very grateful for your help.

Regards,

Luis

-- 

Luis Tercero, M.Sc.

Engler-Bunte-Institut der Universität Karlsruhe (TH)
Bereich Wasserchemie

Engler-Bunte-Ring 1
D-76131 Karlsruhe

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] write.foreign, SPSS on Mac OS X

2005-07-13 Thread EJ Nikelski
Hello,

 Thanks for your help Brian. You are correct in assuming that I am 
trying to use write.foreign to export a data frame for use in SPSS, 
using the usual format:

 >write.foreign(df, dataFile, codeFile, package="SPSS")

Your suggestion that the unprintable characters represent UTF-8 encoded 
Unicode left and right double quotes also appears correct. Now, although 
the suggested work-around may well help, the foreign package does seem 
to be creating a corrupted file. That is, an entirely 8-bit ASCII file 
containing embedded UTF-8 double quotes is not valid by any standard -- 
and is thus unreadable by any editor on any platform. Perhaps I should 
look into filing a bug report on this to the foreign package maintainer.

Thanks,

Jim


Prof Brian Ripley wrote:
> On Tue, 12 Jul 2005, EJ Nikelski wrote:
> 
>> I have jut installed the foreign package (v 0.8-8) on my OS X
>> machine, and have a bit of a problem writing out a data frame in SPSS
>> format. Specifically, the code file (the .sps format file) seems to
>> write 3 unprintable hex values instead of double quotes. For example, in
>> the following output ...
>>
>> VALUE LABELS
>> /
>> immDel
>> 1 ###1###
>>  2 ###2###
>>  3 ###3###
>>
>>  ... emacs tells me that the left-sided ### are the hex codes E2 80 9C,
>> on the right we have E2 80 9D. I am supposing that I should be seeing
>> double-quotes here? Interestingly, the data file, which also contains a
>> quoted field, writes out the quotes without any problem. Does anyone
>> have any ideas?
> 
> 
> An idea. Those are left and right double quotes in UTF-8 and since MacOS X
> is usually in a UTF-8 locale they should be printable.  However, I 
> suspect that SPSS is expecting ASCII double quotation marks.
> 
> You haven't told us what you did, but I guess you used
> write.foreign(package="SPSS").  That calls writeForeignSPSS which 
> contains calls to dQuote(), and the latter are wrong if ASCII quotation 
> marks are needed.
> 
> A quick workaround is to use a non-UTF-8 locale: how you do that on ypur 
> OS depends on how you run R so please ask advice on the R-sig-mac list.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Where's iris?

2005-07-13 Thread Ruben Roa
Hi:
Where is the iris data set actually
located in the R 2.1.0 folder (under W XP)?
Is it a text file or it is a binary file?
Ruben

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Where's iris?

2005-07-13 Thread Berton Gunter
help.search("iris") tells you.

You should always try R's built-in help resources **before** posting. 

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 
"The business of the statistician is to catalyze the scientific learning
process."  - George E. P. Box
 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Ruben Roa
> Sent: Wednesday, July 13, 2005 6:55 AM
> To: R-help@stat.math.ethz.ch
> Subject: [R] Where's iris?
> 
> Hi:
> Where is the iris data set actually
> located in the R 2.1.0 folder (under W XP)?
> Is it a text file or it is a binary file?
> Ruben
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Where's iris?

2005-07-13 Thread Uwe Ligges
Ruben Roa wrote:

> Hi:
> Where is the iris data set actually
> located in the R 2.1.0 folder (under W XP)?
> Is it a text file or it is a binary file?

It is a special binary file in package datasets in the binary 
distribution. Just dump() or write.table() on the data to get a text 
representation of the data.
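
For example (file name and separator are arbitrary):

data(iris)
write.table(iris, file = "iris.txt", sep = "\t", row.names = FALSE)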

Uwe Ligges





> Ruben
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] exact values for p-values

2005-07-13 Thread Spencer Graves
  so my chi-square approximation was not very good:

 > pchisq(39540, 1, lower.tail=FALSE, log.p=TRUE)
[1] -19775.52
 > (pchisq(39540, 1, lower.tail=FALSE, log.p=TRUE)
+  /log(10))
[1] -8588.398

  ... roughly 1e-8588.  With a few hours with Abramowitz and Stegun, I 
suspect I could do better.
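
The same log-scale trick can be applied to pf() directly; if it stays
accurate this far into the tail it should land near the FMLIB figure of
6.31e-2886 quoted below, i.e. about -2885.2 on the log10 scale:

pf(39540, 1, 7025, lower.tail = FALSE, log.p = TRUE) / log(10)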

  spencer graves

David Duffy wrote:

>>This is obtained from F =39540 with df1 = 1, df2 = 7025.
>>Suppose am interested in exact value such as
>>
> 
> 
> If it were really necessary, you would have to move to multiple
> precision.  The gmp R package doesn't seem to yet cover this, but FMLIB
> (TOMS814, DM Smith) is a multiple precision f90 library that does
> include the incomplete beta -- it allows one to say for F(1,7025)=39540,
> P=6.31E-2886 (evaluated using 200 sign. digit arithmetic).  Results from
> R's pf() agree quite closely with the FMLIB results for less extreme values
> eg
> 
>>print(pf(1500,1,7025,lower=FALSE), digits=20)
> 
>  [1] 1.3702710894887480597e-297
> 
> cf   1.37027108948832580215549799419452388134616261215463681945E-297
> 
> 
> | David Duffy (MBBS PhD) ,-_|\
> | email: [EMAIL PROTECTED]  ph: INT+61+7+3362-0217 fax: -0101  / *
> | Epidemiology Unit, Queensland Institute of Medical Research   \_,-._/
> | 300 Herston Rd, Brisbane, Queensland 4029, Australia  GPG 4D0B994A v
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com 
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] maps drawing

2005-07-13 Thread m p
Hello,
is there a package in R that would allow map drawing:
coastlines, country/state boundaries, maybe
topography,
rivers etc?
Thanks for any guidance,
Mark

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] maps drawing

2005-07-13 Thread Uwe Ligges
m p wrote:

> Hello,
> is there a package in R that would allow map drawing:
> coastlines, country/state boundaries, maybe
> topography,
> rivers etc?

What about package maps?
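
A minimal sketch with that package (assuming it has been installed from CRAN):

library(maps)
map("world")   # world coastlines and country boundaries
map("state")   # US state boundaries, on a new plot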

Moreover, what about reading the posting guide and trying to search 
yourself at first. I think it is almost impossible not to find "maps" on 
CRAN in your case.

Uwe Ligges



> Thanks for any guidance,
> Mark
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Memory question

2005-07-13 Thread Huntsinger, Reid
One way I do this is to use Luke Tierney's "active bindings". I make an
active binding of a name to a function which either loads or saves the
object. Then the name behaves like the R object it's replacing. This works
nicely as long as I don't need lots of random accesses to the matrix.

I'd be happy to send the functions I use to do this.
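
In the meantime, here is a minimal sketch of the idea using base R's
makeActiveBinding (these are not Reid's functions; the file handling via
saveRDS/readRDS is my own choice):

mfile <- tempfile()
saveRDS(matrix(0, 2, 2), mfile)           # initial matrix stored on disk
makeActiveBinding("m", function(value) {
    if (missing(value)) readRDS(mfile)    # read access: load from disk
    else saveRDS(value, mfile)            # assignment: write to disk
}, environment())
m[1, 1]                                   # transparently loads the matrix
m <- diag(3)                              # transparently saves a new one

Every read goes back to disk, which is why this only pays off when random
access to the matrix is rare.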

Reid Huntsinger

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Kenneth Roy Cabrera
Torres
Sent: Wednesday, July 13, 2005 7:14 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Memory question
Importance: High


Hi R users and developers:

I want to know how can I save memory in R
for example:
  - saving on disk a matrix.
  - using again the matrix (changing their values)
  - saving again the matrix on disk in a different file.

The idea is that I have a process that generate several
matrices, but if I keep them all in memory it will overflow.

How can I save them in different files, so I use the same
amount of memory for each processed matrix?

Thank you for your help.

-- 
Kenneth Roy Cabrera Torres
Universidad Nacional de Colombia
Sede Medellin
Tel 430 9351
Cel 315 504 9339

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] maps drawing

2005-07-13 Thread Francisco J. Zagmutt
Try RSiteSearch("map") or  help.search("map")

Cheers

Francisco

>From: m p <[EMAIL PROTECTED]>
>To: r-help@stat.math.ethz.ch
>Subject: [R] maps drawing
>Date: Wed, 13 Jul 2005 09:15:33 -0700 (PDT)
>
>Hello,
>is there a package in R that would allow map drawing:
>coastlines, country/state boundaries, maybe
>topography,
>rivers etc?
>Thanks for any guidance,
>Mark
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! 
>http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Fieller's Conf Limits and EC50's

2005-07-13 Thread Stephen B. Cox
Folks

I have modified an existing function to calculate 'ec/ld/lc' 50 values 
and their associated Fieller's confidence limits.  It is based on 
EC50.calc (written by John Bailer) - but also borrows from the dose.p 
(MASS) function.  My goal was to make the original EC50.calc function 
flexible with respect to 1) probability at which to calculate the 
expected dose, and 2) the link function.  I would appreciate comments 
about the validity of doing so!  In particular - I want to make sure 
that the confidence limit calculations are still valid when changing the 
link function.

ec.calc<-function(obj,conf.level=.95,p=.5) {

 # calculates confidence interval based upon Fieller's thm.
 # modified version of EC50.calc found in P&B Fig 7.22
 # now allows other link functions, using the calculations
 # found in dose.p (MASS)
 # SBC 19 May 05

call <- match.call()

 coef = coef(obj)
 vcov = summary.glm(obj)$cov.unscaled
 b0<-coef[1]
 b1<-coef[2]
 var.b0<-vcov[1,1]
 var.b1<-vcov[2,2]
 cov.b0.b1<-vcov[1,2]
 alpha<-1-conf.level
 zalpha.2 <- -qnorm(alpha/2)
 gamma <- zalpha.2^2 * var.b1 / (b1^2)
 eta = family(obj)$linkfun(p)  #based on calcs in V&R's dose.p

 EC50 <- (eta-b0)/b1

 const1 <- (gamma/(1-gamma))*(EC50 + cov.b0.b1/var.b1)

 const2a <- var.b0 + 2*cov.b0.b1*EC50 + var.b1*EC50^2 -
gamma*(var.b0 - cov.b0.b1^2/var.b1)

 const2 <- zalpha.2/( (1-gamma)*abs(b1) )*sqrt(const2a)

 LCL <- EC50 + const1 - const2
 UCL <- EC50 + const1 + const2

 conf.pts <- c(LCL,EC50,UCL)
 names(conf.pts) <- c("Lower","EC50","Upper")

 return(list(conf.pts = conf.pts, conf.level = conf.level, call = call))
 }
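
A hypothetical usage sketch (simulated dose-response data; the EC50 comes
back on the log(dose) scale because that is the covariate in the fit):

set.seed(1)
dose <- rep(c(0.5, 1, 2, 4, 8), each = 20)
resp <- rbinom(length(dose), 1, plogis(-2 + 1.5 * log(dose)))
fit  <- glm(resp ~ log(dose), family = binomial)
ec.calc(fit, conf.level = 0.95, p = 0.5)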


Thanks

Stephen Cox

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Please help me.....

2005-07-13 Thread Francisco J. Zagmutt

Dear Fernando

Please read the posting guide. If you want to get an answer to your question 
you need to be specific about your analysis, and provide examples of the 
data structure and code that you tried and didn't work.



Francisco

--
Español

Estimado Fernando

Por favor lee la guía de publicación de preguntas en el foro.  Si quieres 
recibir una respuesta tienes que especificar los análisis que utilizaste y 
dar ejemplos con la estructura de tus datos y el código que no funcionó.


Francisco


From: Fernando Espíndola <[EMAIL PROTECTED]>
To: 
Subject: [R] Please help me.
Date: Tue, 12 Jul 2005 18:42:14 -0400

Hi R users,

I am trying to calculate the spectrum of two time series. When I plot a 
single series, the x-axis labels are in the range 0.1 to 0.6 (frequency), 
but when I calculate the spectrum with the ts.union function, the x-axis 
labels are in the range 1 to 6. I do not understand why the labels change, 
and I do not know what the relationship is. Can somebody help me with this 
analysis?


Thanks to all,

fdo

Fernando Espindola R.
Division Investigacion Pesquera
Instituto de Fomento Pesquero
Blanco 839
Valparaiso - CHILE
fono: 32 - 322442
[EMAIL PROTECTED]


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] maps drawing

2005-07-13 Thread Spencer Graves
  help.search("asdf") only works if you have "asdf" in something that 
is installed.  RSiteSearch("asdf"), on the other hand, works for 
anything in the R archives.  This does NOT include, however, the 
contents of R News, which you can search via http://www.r-project.org/ 
-> Newsletter -> [Table of Contents (all issues)].

  spencer graves

Francisco J. Zagmutt wrote:

> Try RSiteSearch("map") or  help.search("map")
> 
> Cheers
> 
> Francisco
> 
> 
>>From: m p <[EMAIL PROTECTED]>
>>To: r-help@stat.math.ethz.ch
>>Subject: [R] maps drawing
>>Date: Wed, 13 Jul 2005 09:15:33 -0700 (PDT)
>>
>>Hello,
>>is there a package in R that would allow map drawing:
>>coastlines, country/state boundaries, maybe
>>topography,
>>rivers etc?
>>Thanks for any guidance,
>>Mark
>>
>>__
>>R-help@stat.math.ethz.ch mailing list
>>https://stat.ethz.ch/mailman/listinfo/r-help
>>PLEASE do read the posting guide! 
>>http://www.R-project.org/posting-guide.html
> 
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com 
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] write.foreign, SPSS on Mac OS X

2005-07-13 Thread Thomas Lumley
On Wed, 13 Jul 2005, EJ Nikelski wrote:
>
> Your suggestion that the unprintable characters represent UTF-8 encoded
> Unicode left and right double quotes also appears correct. Now, although
> the suggested work-around may well help, the foreign package does seem
> to be creating a corrupted file. That is, an entirely 8-bit ASCII file
> containing embedded UTF-8 double quotes is not valid by any standard --
> and is thus unreadable by any editor on any platform. Perhaps I should
> look into filing a bug report on this to the foreign package maintainer.
>

It is a bug and has been fixed, but this isn't the reason. It's a bug 
because the format is wrong for SPSS.  The file is perfectly valid UTF-8: 
all the characters other than the double quotes are 7-bit ASCII and so 
have the same representation in UTF-8.

-thomas

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] write.foreign, SPSS on Mac OS X

2005-07-13 Thread Prof Brian Ripley
On Wed, 13 Jul 2005, EJ Nikelski wrote:

> Hello,
>
> Thanks for your help Brian. You are correct in assuming that I am
> trying to use write.foreign to export a data frame for use in SPSS,
> using the usual format:
>
> >write.foreign(df, dataFile, codeFile, package="SPSS")
>
> Your suggestion that the unprintable characters represent UTF-8 encoded
> Unicode left and right double quotes also appears correct. Now, although
> the suggested work-around may well help, the foreign package does seem
> to be creating a corrupted file. That is, an entirely 8-bit ASCII file
> containing embedded UTF-8 double quotes is not valid by any standard --
> and is thus unreadable by any editor on any platform. Perhaps I should

Not true: any editor in a UTF-8 locale should be able to read a valid 
UTF-8 file, and there seems to be a problem with your OS.  No one said 
this had to be an ASCII file, and it will not be if the labels are not 
ASCII.

BTW, `8-bit ASCII' are mutually exclusive terms in file encodings.

> look into filing a bug report on this to the foreign package maintainer.

Which is R-core, and we are already working on a fix.

> Thanks,
>
> Jim
>
>
> Prof Brian Ripley wrote:
>> On Tue, 12 Jul 2005, EJ Nikelski wrote:
>>
>>> I have jut installed the foreign package (v 0.8-8) on my OS X
>>> machine, and have a bit of a problem writing out a data frame in SPSS
>>> format. Specifically, the code file (the .sps format file) seems to
>>> write 3 unprintable hex values instead of double quotes. For example, in
>>> the following output ...
>>>
>>> VALUE LABELS
>>> /
>>> immDel
>>> 1 ###1###
>>>  2 ###2###
>>>  3 ###3###
>>>
>>>  ... emacs tells me that the left-sided ### are the hex codes E2 80 9C,
>>> on the right we have E2 80 9D. I am supposing that I should be seeing
>>> double-quotes here? Interestingly, the data file, which also contains a
>>> quoted field, writes out the quotes without any problem. Does anyone
>>> have any ideas?
>>
>>
>> An idea. Those are left and right double quotes in UTF-8 and since MacOS X
>> is usually in a UTF-8 locale they should be printable.  However, I
>> suspect that SPSS is expecting ASCII double quotation marks.
>>
>> You haven't told us what you did, but I guess you used
>> write.foreign(package="SPSS").  That calls writeForeignSPSS which
>> contains calls to dQuote(), and the latter are wrong if ASCII quotation
>> marks are needed.
>>
>> A quick workaround is to use a non-UTF-8 locale: how you do that on ypur
>> OS depends on how you run R so please ask advice on the R-sig-mac list.
>>
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use the function "plot" as Matlab?

2005-07-13 Thread Greg Snow
>>> Ted Harding <[EMAIL PROTECTED]> 07/13/05 02:12AM >>>

[snip]

>>  I'm not sufficiently acquainted with the internals of "plot"
>>  and friends to anticipate the answer to this question; but,
>>  anyway, the question is:
>>  
>>Is it feasible to include, as a parameter to "plot", "lines"
>>and "points",
>>  
>>  rescale=FALSE
>>  
>>where this default value would maintain the existing behaviour
>>of these functions, while setting
>>  
>>  rescale=TRUE
>>  
>>would allow each succeeding plot, adding graphs using "points"
>>or "lines", to be rescaled (as in Matlab/Octave) so as to
>>include the entirety of each successive graph?

I tried editing the range in the result from "recordPlot" and it
crashed
R on my system, so it probably is not trivial to rescale an existing
plot
on the standard devices.  Part of the issue is what information is
saved
when the plot is made and what is recomputed each time.  Apparently
octave/matlab and R do this quite differently.

Others have suggested using matplot; see the sketch below. You can also
manually save all the relevant information yourself to redo the plots.
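
A minimal matplot sketch, reusing the sin/cos example from this thread (both
curves share one scale because they are drawn in a single call):

x <- seq(0, 2, 0.1)
matplot(x, cbind(sin(x), 1.5 * cos(x)), type = "l", lty = 1, col = 1:2,
        ylab = "y")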

The other option is to use a different graphics device that supports
rescaling.  One option is "rgl" using the rgl package and the 
rgl.lines function (it will auto rescale, but seems overkill for this
case).

Another option is to go a similar route to octave and to have gnuplot
do the actual plotting (and keep the info to rescale when needed).

Below are some functions I wrote for passing the data to gnuplot
(I am working on windows and downloaded the win32 version of 
gnuplot from http://www.gnuplot.info).  Some editing may be 
neccessary for these to work on other systems.  If there is interest
I may debug and expand these functions and include them in a 
package.

To do the original example with these functions:

gp.open()
x <- seq(0,2,0.1)
gp.plot(x,sin(x), type='l')
gp.plot(x,1.5*cos(x), type='l', add=T)
gp.send('set yrange [-1.6:1.6]') # force my own range
gp.send()
gp.send('set yrange [*:*]') # return to autoscaling
gp.send()
gp.close()


The function definitions are:

gp.open <- function(where='c:/progra~1/GnuPlot/bin/pgnuplot.exe'){
.gp <<- pipe(where,'w')
.gp.tempfiles <<- character(0)
invisible(.gp)
}


gp.close <- function(pipe=.gp){
cat("quit\n",file=pipe)
close(pipe)
if(exists('.gp.tempfiles')){
unlink(.gp.tempfiles)
rm(.gp.tempfiles,pos=1)
}
rm(.gp,pos=1)
invisible()
}

gp.send <- function(cmd='replot',pipe=.gp){
cat(cmd, file=pipe)
cat("\n",file=pipe)
invisible()
}

gp.plot <- function(x, y, type='p', add=F, title=deparse(substitute(y)),
                    pipe=.gp){
        tmp <- tempfile()
        .gp.tempfiles <<- c(.gp.tempfiles, tmp)

        write.table(cbind(x,y), tmp, row.names=FALSE, col.names=FALSE)
        w <- ifelse(type=='p', 'points', 'lines')
        r <- ifelse(add, 'replot', 'plot')

        cat(paste(r, " '", tmp, "' with ", w, " title '", title, "'\n", sep=''),
            file=pipe)
        invisible()
}

Hope this helps,

Greg Snow, Ph.D.
Statistical Data Center, LDS Hospital
Intermountain Health Care
[EMAIL PROTECTED]
(801) 408-8111

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Help with Mahalanobis

2005-07-13 Thread Thomas Petzoldt
Hello,

a proposed solution by Bill Venables is archived on the S-News mailing
list:

http://www.biostat.wustl.edu/archives/html/s-news/2001-07/msg00035.html

and if I remember it correctly (and if the variance matrix is estimated
from the data), another similar way is simply to use the Euclidean
distance of rescaled scores from a principal component analysis, e.g.:

data(iris)

dat <- iris[1:4] # without the species names

z <- svd(scale(dat, scale=FALSE))$u
cl <- hclust(dist(z), method="ward")
plot(cl, labels=iris$Species)

 or alternatively: 

pc <- princomp(dat, cor=FALSE)

pcdata <- as.data.frame(scale(pc$scores))
cl <- hclust(dist(pcdata), method="ward")
plot(cl, labels=iris$Species)


Hope it helps!

Thomas P.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problems with MNP

2005-07-13 Thread Kosuke Imai
Hi Joan,
  You need to do:

  
 res1 <- mnp(PREVOTE3 ~ 1, choiceX = list("1"=UCLC, "2"=UDLC, "3"=UPLC),
 cXnames = "ut", data = small, verbose = TRUE)

  
The quotation marks around "1" etc. are necessary because those are names of the
list elements. Also, you should check the convergence of the Markov chain,
as 500 draws is typically not enough. In our Journal of Statistical Software
paper (available at http://www.princeton.edu/~kimai/research/MNP.html), we
show how to do this through some examples.

Best,
Kosuke

  
-
Kosuke Imai   Office: Corwin Hall 041
Assistant Professor   Phone: 609-258-6601
Department of PoliticseFax:  973-556-1929
Princeton University  Email: [EMAIL PROTECTED]
Princeton, NJ 08544-1012  http://www.princeton.edu/~kimai
-

> > From: Joan Serra <[EMAIL PROTECTED]>
> > Date: July 13, 2005 6:14:31 AM PDT
> > To: r-help@stat.math.ethz.ch
> > Subject: [R] problems with MNP
> >
> >
> > Hi all,
> >
> > Does anybody have a hint on what may be going wrong in this R code? I
> > mimic the sample code from the MNP developers but I seem unable to  
> > get the
> > choice specific variables right.
> >
> > Thanks,
> > Joan Serra
> >
> >
> >> rm(list=ls())
> >> library(foreign)
> >> small<-read.spss("small.sav")
> >>
> > Warning message:
> > small.sav: Unrecognized record type 7, subtype 13 encountered in  
> > system
> > file.
> >
> >> library(MNP)
> >>
> > MNP: R Package for Fitting the Multinomial Probit Models
> > Version: 1.3-1
> > URL: http://www.princeton.edu/~kimai/research/MNP.html
> >
> >>
> >>  res1 <- mnp(PREVOTE3 ~ 1, choiceX = list(1=UCLC, 2=UDLC,  
> >> 3=UPLC),
> >>
> > Error: syntax error
> >
> >>  cXnames = "ut", data = small, n.draws = 500,  
> >> burnin =
> >>
> > 100,
> > Error: syntax error
> >
> >>  verbose = TRUE)
> >>
> > Error: syntax error
> >
> >>
> >> # another try giving arbitrary names to the values of the dependent
> >>
> > variable
> >
> >>
> >>  res1 <- mnp(PREVOTE3 ~ 1, choiceX = list(Clinton=UCLC,  
> >> Dole=UDLC,
> >>
> > Perot=UPLC),
> > +  cXnames = "ut", data = small, n.draws = 500,  
> > burnin =
> > 100,
> > +  verbose = TRUE)
> >
> > The base category is `1'.
> >
> > The total number of alternatives is 3.
> >
> > Error in xmatrix.mnp(formula, data = eval.parent(data), choiceX =
> > call$choiceX,  :
> >  Error: Invalid input for `choiceX.'
> >   Some variables do not exist.
> >
> >>
> >> # another try using a string type dependent variable
> >>
> >>  res1 <- mnp(PRVOTE3 ~ 1, choiceX = list(Clinton=UCLC, Dole=UDLC,
> >>
> > Perot=UPLC),
> > +  cXnames = "ut", data = small, n.draws = 500,  
> > burnin =
> > 100,
> > +  verbose = TRUE)
> > Error in model.frame(formula, rownames, variables, varnames, extras,
> > extranames,  :
> >  invalid variable type
> >
> >>
> >>
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! http://www.R-project.org/posting- 
> > guide.html
> >
> 
> 
> --
> Andrew D. Martin, Ph.D.
> Associate Professor of Political Science
> Director, Program in Applied Statistics and Computation
> Professor of Law (by courtesy)
> Washington University in St. Louis
> 
> (314) 935-5863 (Office)
> (314) 753-8377 (Cell)
> (314) 935-5856 (Fax)
> 
> Office: Eliot Hall 326
> Email: [EMAIL PROTECTED]
> WWW:   http://adm.wustl.edu
> 
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to increase memory for R on Soliars 10 with 16GB and 64bit R

2005-07-13 Thread Dongseok Choi
Thank you very much for your help!!
Now, it runs without any problem.

Is it going to be fixed in the next release?

Thanks again,
Dongseok




Dongseok Choi, Ph.D.
Assistant Professor
Division of Biostatistics
Department of Public Health & Preventive Medicine
Oregon Health & Science University
3181 SW Sam Jackson Park Road, CB-669
Portland, OR 97239-3098
TEL) 503-494-5336
FAX) 503-494-4981
[EMAIL PROTECTED]

>>> "Prof Brian Ripley" <[EMAIL PROTECTED]> 07/13/05 12:03 AM >>>
On Tue, 12 Jul 2005, Dongseok Choi wrote:

>  My machine is SUN Java Workstation 2100 with 2 AMD Opteron CPUs and 16GB RAM.
>  R is compiled as 64bit by using SUN compilers.
>  I trying to fit quantile smoothing on my data and I got an message as below.
>
>> fit1<-rqss(z1~qss(cbind(x,y),lambda=la1),tau=t1)
> Error in as.matrix.csr(diag(n)) :
cannot allocate memory block of size 2496135168
>
>  The lengths of vector x and y are both 17664.
>  I tried and found that the same command ran with x[1:16008] and y[1:16008].
>  So, it looks to me a memory related problem, but I'm not sure how I can 
> allocate memory block.
>   I read the command line option but not sure what do to with it.
>   Could you help me on this?

It is trying to allocate a single memory block of size over 2^31-1 bytes. 
R internally uses ints for sizes of vectors and that is a limit (see 
help("Memory-limits") ).  However, it is intended that on 64-bit systems 
that there is a limit here of 8*(2^31-1) but there was a typo.  Please 
change line 1534 of src/main/memory.c to

#if SIZEOF_LONG > 4

and re-compile.

-- 
Brian D. Ripley,  [EMAIL PROTECTED] 
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/ 
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] plot the number of replicates at the same point

2005-07-13 Thread Kerry Bush
Dear R-helper,
  I want to plot the following-like data:

x y
1 1
1 1
1 2
1 3
1 3
1 4
..

In the plot that produced, I don't want to show the
usual circles or points. Instead, I want to show the
number of replicates at that point. e.g. at the
position of (1,1), there are 2 obsevations, so a
number '2' will be displayed in the plot.
Is my narrative clear? Is there a way to make the plot
in R?

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] convert to chron objects

2005-07-13 Thread Young Cho
Hi,

I have a column of a dataframe which has time stamps
like:

> eh$t[1]
[1] 06/05/2005 01:15:25

and was wondering how to convert it to chron variable.
Thanks a lot.

Young.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] plot the number of replicates at the same point

2005-07-13 Thread Jean Eid
You can do the following (don't know if this is the most efficient way but
it works)

temp <- read.table("your file to read the data", header=TRUE)
temp1 <- table(temp)                   # counts for each (x, y) combination
plot(temp$x, temp$y, type="n")         # set up the axes without plotting points
xy <- expand.grid(x=as.numeric(rownames(temp1)), y=as.numeric(colnames(temp1)))
keep <- as.vector(temp1) > 0           # label only combinations that occur
text(xy$x[keep], xy$y[keep], as.vector(temp1)[keep])

HTH


On Wed, 13 Jul 2005, Kerry Bush wrote:

> Dear R-helper,
>   I want to plot the following-like data:
>
> x y
> 1 1
> 1 1
> 1 2
> 1 3
> 1 3
> 1 4
> ..
>
> In the plot that produced, I don't want to show the
> usual circles or points. Instead, I want to show the
> number of replicates at that point. e.g. at the
> position of (1,1), there are 2 obsevations, so a
> number '2' will be displayed in the plot.
> Is my narrative clear? Is there a way to make the plot
> in R?
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Efficient testing for +ve definiteness

2005-07-13 Thread Makram Talih
Dear R-users,

Is there a preferred method for testing whether a real symmetric matrix is
positive definite? [modulo machine rounding errors.]

The obvious way of computing eigenvalues via "E <- eigen(A, symmetric=T,
only.values=T)$values" and returning the result of "!any(E <= 0)" seems
less efficient than going through the LU decomposition invoked in
"determinant.matrix(A)" and checking the sign and (log) modulus of the
determinant.

I suppose this has to do with the underlying C routines. Any thoughts or
anecdotes?
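
For concreteness, the eigenvalue version wrapped up (the relative tolerance
is an arbitrary choice on my part):

is.posdef <- function(A, tol = 1e-8) {
    ev <- eigen(A, symmetric = TRUE, only.values = TRUE)$values
    all(ev > tol * max(abs(ev)))           # strictly positive up to rounding
}
A <- crossprod(matrix(rnorm(20), 5, 4))    # a random 4 x 4 Gram matrix
is.posdef(A)                               # TRUE (with probability one)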

Many Thanks,

Makram Talih

--
Makram Talih, Ph.D.
Assistant Professor
Department of Mathematics and Statistics
Hunter College of the City University of New York
695 Park Avenue, Room 905 HE
New York, NY 10021

Website: http://stat.hunter.cuny.edu/talih
E-mail: [EMAIL PROTECTED]
Tel: 212-772-5308
Fax: 212-772-4858

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] read.table

2005-07-13 Thread Weiwei Shi
Hi,
I have a question on read.table.

I have a dataset with 273,000 lines and 195 columns. I used the
read.table to load the data into R:
trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
I found it takes forever.

Then I ran 1/10 of the data (as a test) using read.table again, and this
time it finished quickly. So there might be something wrong in my
data format causing that problem.

Then, my question is: is there a way in R to track at which line
something wrong occurs?

Thanks,

Weiwei


-- 
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Weiwei Shi
To add: I used
trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529, ncol=195)

and it is done,
so it seems that I just did not have the patience to wait for half an hour :)

But I still have that question:
is there a way to track the process if it takes too long? Could we
stop in the middle to see at which line it "hesitates" to move on?

regards,

weiwei


On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> Hi,
> I have a question on read.table.
> 
> I have a dataset with 273,000 lines and 195 columns. I used the
> read.table to load the data into R:
> trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> I found it takes forever.
> 
> then I run 1/10 of the data (test) using read.table again. And this
> time it finished quickly. So, there might be something wrong in my
> data format causing that problem.
> 
> then, my question is, is there a way in R to track at which line,
> something wrong occurs?
> 
> Thanks,
> 
> Weiwei
> 
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
> 


-- 
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] convert to chron objects

2005-07-13 Thread Gabor Grothendieck
On 7/13/05, Young Cho <[EMAIL PROTECTED]> wrote: 
> 
> Hi,
> 
> I have a column of a dataframe which has time stamps
> like:
> 
> > eh$t[1]
> [1] 06/05/2005 01:15:25
> 
> and was wondering how to convert it to chron variable.
> Thanks a lot.

   Try this:

# test data frame eh containing a factor variable t
eh <- data.frame(t = c("06/05/2005 01:15:25", "06/07/2005 01:15:25"))

# substring converts factor to character and extracts substring
chron(dates = substring(eh$t, 1, 10), times = substring(eh$t, 12))

See ?chron for more info. There is an article on dates in
R News 4/1 and although it does not specifically answer this
question it may be useful with chron and also provides a 
reference to more chron info elsewhere.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Gabor Grothendieck
You could use the nlines= argument to scan to read in a 
portion at a time.
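
For example, a rough sketch of chunked reading with a running line count
(the chunk size is arbitrary and the 195-fields-per-line figure comes from
your description; untested here):

con <- file("train1.dat", open = "r")
nlines.read <- 0
repeat {
    chunk <- scan(con, what = "", sep = "|", na.strings = ".",
                  nlines = 10000, quiet = TRUE)
    if (length(chunk) == 0) break          # nothing left to read
    nlines.read <- nlines.read + length(chunk) / 195
    cat("read about", nlines.read, "lines so far\n")
}
close(con)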


On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote: 
> 
> add:
> I used
> trn<-matrix(scan('train1.dat', sep='|', na.string='.'), nrow=273529, 
> ncol=195)
> 
> it is done.
> so it seems that I just have no patience to wait for half an hour :)
> 
> but i still have that question:
> is there a way to track the process if it takes too long. Could we
> stop in the middle to see at which line it "hesitates" to move on?
> 
> regards,
> 
> weiwei
> 
> 
> On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > Hi,
> > I have a question on read.table.
> >
> > I have a dataset with 273,000 lines and 195 columns. I used the
> > read.table to load the data into R:
> > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > I found it takes forever.
> >
> > then I run 1/10 of the data (test) using read.table again. And this
> > time it finished quickly. So, there might be something wrong in my
> > data format causing that problem.
> >
> > then, my question is, is there a way in R to track at which line,
> > something wrong occurs?
> >
> > Thanks,
> >
> > Weiwei
> >
> >
> > --
> > Weiwei Shi, Ph.D
> >
> > "Did you always know?"
> > "No, I did not. But I believed..."
> > ---Matrix III
> >
> 
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] High resolution plots

2005-07-13 Thread Gabor Grothendieck
On 7/13/05, Luis Tercero <[EMAIL PROTECTED]> wrote: 
> 
> Dear R-help community,
> 
> would any of you have a (preferably simple) example of a
> presentation-quality .png plot, i.e. one that looks like the .eps plots
> generated by R? I am working with R 2.0.1 in WindowsXP and am having
> similar problems as Knut Krueger in printing high-quality plots. I have
> looked at the help file and examples therein as well as others I have
> been able to find online but to no avail. After many many tries I have
> to concede I cannot figure it out.
> 
> I would be very grateful for your help.

   If you want the highest resolution use a vector format,
not a bitmapped format such as png. See:

http://maths.newcastle.edu.au/~rking/R/help/04/02/1168.html

for some background.
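
For instance, a minimal sketch (file names arbitrary):

# vector output: scales to any size without loss
postscript("fig.eps", width = 6, height = 4, horizontal = FALSE,
           onefile = FALSE, paper = "special")
plot(rnorm(100), main = "vector output")
dev.off()

# bitmap output: a fixed pixel grid, which degrades when rescaled
png("fig.png", width = 480, height = 320)
plot(rnorm(100), main = "bitmap output")
dev.off()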

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Gabor Grothendieck
[I had some email problems and am sending this again.  Sorry
if you get it twice.]

You could use the nlines= argument to scan to read in a 
portion at a time. 
 
 
> 
> 
> On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote: 
> > add:
> > I used
> > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529, 
> > ncol=195)
> > 
> > it is done.
> > so it seems that I just have no patience to wait for half an hour :)
> > 
> > but i still have that question:
> > is there a way to track the process if it takes too long. Could we
> > stop in the middle to see at which line it "hesitates" to move on? 
> > 
> > regards,
> > 
> > weiwei
> > 
> > 
> > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > > I have a question on read.table.
> > >
> > > I have a dataset with 273,000 lines and 195 columns. I used the 
> > > read.table to load the data into R:
> > > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > > I found it takes forever.
> > >
> > > then I run 1/10 of the data (test) using read.table again. And this
> > > time it finished quickly. So, there might be something wrong in my
> > > data format causing that problem.
> > >
> > > then, my question is, is there a way in R to track at which line,
> > > something wrong occurs? 
> > >
> > > Thanks,
> > >
> > > Weiwei
> > >
> > >
> > > --
> > > Weiwei Shi, Ph.D
> > >
> > > "Did you always know?"
> > > "No, I did not. But I believed..."
> > > ---Matrix III 
> > >
> > 
> > 
> > --
> > Weiwei Shi, Ph.D
> > 
> > "Did you always know?"
> > "No, I did not. But I believed..."
> > ---Matrix III
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
> > 
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] convert to chron objects

2005-07-13 Thread Gabor Grothendieck
[I had some email problems so I am sending this again.  Sorry
if you get it twice.]

On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> 
> 
> On 7/13/05, Young Cho <[EMAIL PROTECTED]> wrote: 
> > Hi,
> > 
> > I have a column of a dataframe which has time stamps
> > like:
> > 
> > > eh$t[1]
> > [1] 06/05/2005 01:15:25 
> > 
> > and was wondering how to convert it to chron variable.
> > Thanks a lot.
>  
> 
> 
>  
> 
Try this:
 
# test data frame eh containing a factor variable t
eh <- data.frame(t = c("06/05/2005 01:15:25", "06/07/2005 01:15:25"))
 
# substring converts factor to character and extracts substring
chron(dates = substring(eh$t, 1, 10), times = substring(eh$t, 12))
 
See ?chron for more info.  There is an article on dates in
R News 4/1 and although it does not specifically answer this
question it may be useful with chron and also provides a 
reference to more chron info elsewhere.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] High resolution plots

2005-07-13 Thread Gabor Grothendieck
[I had some email problems so I am sending this again.
Sorry if you get this twice.]

On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> 
> 
> On 7/13/05, Luis Tercero
> <[EMAIL PROTECTED]> wrote: 
> > Dear R-help community,
> > 
> > would any of you have a (preferably simple) example of a
> > presentation-quality .png plot, i.e. one that looks like the .eps plots
> > generated by R?  I am working with R 2.0.1 in WindowsXP and am having
> > similar problems as Knut Krueger in printing high-quality plots.  I have
> > looked at the help file and examples therein as well as others I have 
> > been able to find online but to no avail.  After many many tries I have
> > to concede I cannot figure it out.
> > 
> > I would be very grateful for your help.
>  
>  
> 
> 

If you want the highest resolution use a vector format,
not a bitmapped format such as png.   See:

http://maths.newcastle.edu.au/~rking/R/help/04/02/1168.html
 
for some background.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Weiwei Shi
that sort of works for my purpose.

btw, is there a better way to get a data.frame without going through
matrix()? Since I could not find data.frame() with nrow or ncol
arguments, I have to use matrix() first and then as.data.frame() to
convert it.

is there any other (better) way?

weiwei

On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> 
> You could use the nlines= argument to scan to read in a 
> portion at a time.
> 
> 
>  
> On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote: 
> > 
> > add:
> > I used
> > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529,
> ncol=195)
> > 
> > it is done.
> > so it seems that I just have no patience to wait for half an hour :)
> > 
> > but i still have that question:
> > is there a way to track the process if it takes too long. Could we
> > stop in the middle to see at which line it "hesitates" to move on? 
> > 
> > regards,
> > 
> > weiwei
> > 
> > 
> > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > > I have a question on read.table.
> > >
> > > I have a dataset with 273,000 lines and 195 columns. I used the 
> > > read.table to load the data into R:
> > > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > > I found it takes forever.
> > >
> > > then I run 1/10 of the data (test) using read.table again. And this
> > > time it finished quickly. So, there might be something wrong in my
> > > data format causing that problem.
> > >
> > > then, my question is, is there a way in R to track at which line,
> > > something wrong occurs? 
> > >
> > > Thanks,
> > >
> > > Weiwei
> > >
> > >
> > > --
> > > Weiwei Shi, Ph.D
> > >
> > > "Did you always know?"
> > > "No, I did not. But I believed..."
> > > ---Matrix III 
> > >
> > 
> > 
> > --
> > Weiwei Shi, Ph.D
> > 
> > "Did you always know?"
> > "No, I did not. But I believed..."
> > ---Matrix III
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
> > 
> 
>  


-- 
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Gabor Grothendieck
Maybe you don't really need a data frame in the first place?
You were concerned with speed, and matrices tend to
have higher performance than data frames.
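
A quick, machine-dependent way to get a feel for the difference (just a
rough sketch):

m <- matrix(rnorm(1e6), ncol = 100)
d <- as.data.frame(m)
system.time(for (i in 1:200) mean(m[, 50]))   # matrix column extraction
system.time(for (i in 1:200) mean(d[, 50]))   # data frame: typically noticeably slower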

On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> that sort of works for my purpose.
> 
> btw, is there a bettter way to get data.frame by passing around
> matrix(). Since I could not find data.frame() with nrow or ncol
> arguments. so i have to use matrix first and then as.data.frame to
> convert it.
> 
> is there any other (better) way?
> 
> weiwei
> 
> On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> >
> > You could use the nlines= argument to scan to read in a
> > portion at a time.
> >
> >
> >
> > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > >
> > > add:
> > > I used
> > > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529,
> > ncol=195)
> > >
> > > it is done.
> > > so it seems that I just have no patience to wait for half an hour :)
> > >
> > > but i still have that question:
> > > is there a way to track the process if it takes too long. Could we
> > > stop in the middle to see at which line it "hesitates" to move on?
> > >
> > > regards,
> > >
> > > weiwei
> > >
> > >
> > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > Hi,
> > > > I have a question on read.table.
> > > >
> > > > I have a dataset with 273,000 lines and 195 columns. I used the
> > > > read.table to load the data into R:
> > > > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > > > I found it takes forever.
> > > >
> > > > then I run 1/10 of the data (test) using read.table again. And this
> > > > time it finished quickly. So, there might be something wrong in my
> > > > data format causing that problem.
> > > >
> > > > then, my question is, is there a way in R to track at which line,
> > > > something wrong occurs?
> > > >
> > > > Thanks,
> > > >
> > > > Weiwei
> > > >
> > > >
> > > > --
> > > > Weiwei Shi, Ph.D
> > > >
> > > > "Did you always know?"
> > > > "No, I did not. But I believed..."
> > > > ---Matrix III
> > > >
> > >
> > >
> > > --
> > > Weiwei Shi, Ph.D
> > >
> > > "Did you always know?"
> > > "No, I did not. But I believed..."
> > > ---Matrix III
> > >
> > > __
> > > R-help@stat.math.ethz.ch mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html
> > >
> >
> >
> 
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Weiwei Shi
there is another problem since last time i forgot "byrow" :(
> trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529, 
> ncol=195, byrow=T)
Read 53338155 items
Error: cannot allocate vector of size 416704 Kb

please help with this 'simple' reading task.

weiwei

On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> that sort of works for my purpose.
> 
> btw, is there a bettter way to get data.frame by passing around
> matrix(). Since I could not find data.frame() with nrow or ncol
> arguments. so i have to use matrix first and then as.data.frame to
> convert it.
> 
> is there any other (better) way?
> 
> weiwei
> 
> On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> >
> > You could use the nlines= argument to scan to read in a
> > portion at a time.
> >
> >
> >
> > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > >
> > > add:
> > > I used
> > > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529,
> > ncol=195)
> > >
> > > it is done.
> > > so it seems that I just have no patience to wait for half an hour :)
> > >
> > > but i still have that question:
> > > is there a way to track the process if it takes too long. Could we
> > > stop in the middle to see at which line it "hesitates" to move on?
> > >
> > > regards,
> > >
> > > weiwei
> > >
> > >
> > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > Hi,
> > > > I have a question on read.table.
> > > >
> > > > I have a dataset with 273,000 lines and 195 columns. I used the
> > > > read.table to load the data into R:
> > > > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > > > I found it takes forever.
> > > >
> > > > then I run 1/10 of the data (test) using read.table again. And this
> > > > time it finished quickly. So, there might be something wrong in my
> > > > data format causing that problem.
> > > >
> > > > then, my question is, is there a way in R to track at which line,
> > > > something wrong occurs?
> > > >
> > > > Thanks,
> > > >
> > > > Weiwei
> > > >
> > > >
> > > > --
> > > > Weiwei Shi, Ph.D
> > > >
> > > > "Did you always know?"
> > > > "No, I did not. But I believed..."
> > > > ---Matrix III
> > > >
> > >
> > >
> > > --
> > > Weiwei Shi, Ph.D
> > >
> > > "Did you always know?"
> > > "No, I did not. But I believed..."
> > > ---Matrix III
> > >
> > > __
> > > R-help@stat.math.ethz.ch mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html
> > >
> >
> >
> 
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
> 


-- 
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Gabor Grothendieck
Try reading it in and transposing the matrix afterwards.  Don't know if
that would work, but it's worth a try.  Actually, if you are having problems,
read it into a vector, check that it's of the required
size, just in case, and then turn it into a matrix and transpose it.


On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> there is another problem since last time i forgot "byrow" :(
> > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529, 
> > ncol=195, byrow=T)
> Read 53338155 items
> Error: cannot allocate vector of size 416704 Kb
> 
> please help with this 'simple' reading task.
> 
> weiwei
> 
> On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > that sort of works for my purpose.
> >
> > btw, is there a bettter way to get data.frame by passing around
> > matrix(). Since I could not find data.frame() with nrow or ncol
> > arguments. so i have to use matrix first and then as.data.frame to
> > convert it.
> >
> > is there any other (better) way?
> >
> > weiwei
> >
> > On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> > >
> > > You could use the nlines= argument to scan to read in a
> > > portion at a time.
> > >
> > >
> > >
> > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > >
> > > > add:
> > > > I used
> > > > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529,
> > > ncol=195)
> > > >
> > > > it is done.
> > > > so it seems that I just have no patience to wait for half an hour :)
> > > >
> > > > but i still have that question:
> > > > is there a way to track the process if it takes too long. Could we
> > > > stop in the middle to see at which line it "hesitates" to move on?
> > > >
> > > > regards,
> > > >
> > > > weiwei
> > > >
> > > >
> > > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > > Hi,
> > > > > I have a question on read.table.
> > > > >
> > > > > I have a dataset with 273,000 lines and 195 columns. I used the
> > > > > read.table to load the data into R:
> > > > > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > > > > I found it takes forever.
> > > > >
> > > > > then I run 1/10 of the data (test) using read.table again. And this
> > > > > time it finished quickly. So, there might be something wrong in my
> > > > > data format causing that problem.
> > > > >
> > > > > then, my question is, is there a way in R to track at which line,
> > > > > something wrong occurs?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Weiwei
> > > > >
> > > > >
> > > > > --
> > > > > Weiwei Shi, Ph.D
> > > > >
> > > > > "Did you always know?"
> > > > > "No, I did not. But I believed..."
> > > > > ---Matrix III
> > > > >
> > > >
> > > >
> > > > --
> > > > Weiwei Shi, Ph.D
> > > >
> > > > "Did you always know?"
> > > > "No, I did not. But I believed..."
> > > > ---Matrix III
> > > >
> > > > __
> > > > R-help@stat.math.ethz.ch mailing list
> > > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > > PLEASE do read the posting guide!
> > > http://www.R-project.org/posting-guide.html
> > > >
> > >
> > >
> >
> >
> > --
> > Weiwei Shi, Ph.D
> >
> > "Did you always know?"
> > "No, I did not. But I believed..."
> > ---Matrix III
> >
> 
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Where's iris?

2005-07-13 Thread Gabor Grothendieck
On 7/13/05, Ruben Roa <[EMAIL PROTECTED]> wrote:
> Hi:
> Where is the iris data set actually
> located in the R 2.1.0 folder (under W XP)?
> Is it a text file or it is a binary file?
> Ruben

Uwe has already explained how to get it in text
form; however, if you are curious about its original
format in R, then it's actually stored in iris.R as 
R source code, which you can view at:

  https://svn.r-project.org/R/trunk/src/library/datasets/data/iris.R

(or download the entire R source and get it from there).

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Weiwei Shi
I think what you meant is
> trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=195, 
> ncol=273529)
and then transpose it. However:
Error: cannot allocate vector of size 512000 Kb

the answer is no :(

I think I am going to write my own function to split the result from
scan, but I am not sure whether it can be made into a matrix even if I
succeed.


On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> Try reading it into and transposing the matrix afterwards.  Don't know if
> that would work but its worth a try.  Actually if you
> are having problems read it into a vector, check that its of the required
> size, just in case, and then turn it into a matrix and transpose it.
> 
> 
> On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > there is another problem since last time i forgot "byrow" :(
> > > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529, 
> > > ncol=195, byrow=T)
> > Read 53338155 items
> > Error: cannot allocate vector of size 416704 Kb
> >
> > please help with this 'simple' reading task.
> >
> > weiwei
> >
> > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > that sort of works for my purpose.
> > >
> > > btw, is there a bettter way to get data.frame by passing around
> > > matrix(). Since I could not find data.frame() with nrow or ncol
> > > arguments. so i have to use matrix first and then as.data.frame to
> > > convert it.
> > >
> > > is there any other (better) way?
> > >
> > > weiwei
> > >
> > > On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> > > >
> > > > You could use the nlines= argument to scan to read in a
> > > > portion at a time.
> > > >
> > > >
> > > >
> > > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > add:
> > > > > I used
> > > > > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529,
> > > > ncol=195)
> > > > >
> > > > > it is done.
> > > > > so it seems that I just have no patience to wait for half an hour :)
> > > > >
> > > > > but i still have that question:
> > > > > is there a way to track the process if it takes too long. Could we
> > > > > stop in the middle to see at which line it "hesitates" to move on?
> > > > >
> > > > > regards,
> > > > >
> > > > > weiwei
> > > > >
> > > > >
> > > > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > > > Hi,
> > > > > > I have a question on read.table.
> > > > > >
> > > > > > I have a dataset with 273,000 lines and 195 columns. I used the
> > > > > > read.table to load the data into R:
> > > > > > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > > > > > I found it takes forever.
> > > > > >
> > > > > > then I run 1/10 of the data (test) using read.table again. And this
> > > > > > time it finished quickly. So, there might be something wrong in my
> > > > > > data format causing that problem.
> > > > > >
> > > > > > then, my question is, is there a way in R to track at which line,
> > > > > > something wrong occurs?
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Weiwei
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Weiwei Shi, Ph.D
> > > > > >
> > > > > > "Did you always know?"
> > > > > > "No, I did not. But I believed..."
> > > > > > ---Matrix III
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Weiwei Shi, Ph.D
> > > > >
> > > > > "Did you always know?"
> > > > > "No, I did not. But I believed..."
> > > > > ---Matrix III
> > > > >
> > > > > __
> > > > > R-help@stat.math.ethz.ch mailing list
> > > > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > > > PLEASE do read the posting guide!
> > > > http://www.R-project.org/posting-guide.html
> > > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > Weiwei Shi, Ph.D
> > >
> > > "Did you always know?"
> > > "No, I did not. But I believed..."
> > > ---Matrix III
> > >
> >
> >
> > --
> > Weiwei Shi, Ph.D
> >
> > "Did you always know?"
> > "No, I did not. But I believed..."
> > ---Matrix III
> >
> 


-- 
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] texture in barplots?

2005-07-13 Thread Peter Dalgaard
Adrian Dusa <[EMAIL PROTECTED]> writes:

> On Wednesday 13 July 2005 17:36, Knut Krueger wrote:
> > Adrian Dusa schrieb:
> > >Is it possible to draw barplots using a texture instead of colors, for a
> > > black and white printer?
> >
> >   barplot(height,.,density=c(4,6,8,10)  ...)
> >
> > for each bar one number - this example is for a barplot with 4 bars.
> >
> > with regards
> > Knut Krueger
> > http://www.biostatistic.de
> 
> Thank you, I read about density but they only seem to draw diagonal lines 
> (differing in the number of lines per inch).
> I am looking for different *types* of texture (i.e. maybe I could reverse the 
> shading lines, or cross-lines or something like that).


This comes up every now and then, and while it seems that everyone
thinks fill patterns would be nice to have, I suspect that every
attempt to actually implement them has been killed in infancy. 

The thing that is tricky to get right is the cross-device behaviour.
Only some devices support this at all, and when they do, the patterns
tend to be device-dependent too. Probably not impossible -- there are
other bits of the device drivers that deal with missing capabilities,
like string rotation and clipping -- just, well, tricky.
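
That said, the density= shading does accept an angle= argument, and bars
can be overlaid with add=TRUE, so a few visually distinct hatch styles can
be faked with what is already there; a rough sketch:

h <- c(3, 5, 2, 4)
# first pass: 45-degree hatching, with a different density per bar
barplot(h, density = c(6, 12, 20, 12), angle = 45, names.arg = letters[1:4])
# second pass at 135 degrees (density = 0 leaves a bar untouched),
# turning bars 2 and 4 into cross-hatch
barplot(h, density = c(0, 12, 0, 12), angle = 135, add = TRUE, axes = FALSE)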

 
> All the best,
> Adrian
> 
> -- 
> Adrian Dusa
> Arhiva Romana de Date Sociale
> Bd. Schitu Magureanu nr.1
> Tel./Fax: +40 21 3126618 \
>   +40 21 3120210 / int.101

Um... Romania, I suppose? What city?

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] read.table

2005-07-13 Thread Weiwei Shi
Sorry for the last post. 
I don't know why I got the error message last time,
but when I did it the following way:
t<-scan('train1.dat',  sep='|', na.string='.')
t2<-matrix(t, nrow=195, ncol=273529)
t3<-t(t2)
t4<-as.data.frame(t3)

now I got what I needed.

Thanks a lot for Gabor's prompt help.

weiwei

On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> i think what you meant is
> > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=195, 
> > ncol=273529)
> and then transpose it. However:
> Error: cannot allocate vector of size 512000 Kb
> 
> the answer is no :(
> 
> I think i am going to write my own function to split the result from
> scan but not sure if it can be made into matrix or not even if I
> succeed.
> 
> 
> On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> > Try reading it into and transposing the matrix afterwards.  Don't know if
> > that would work but its worth a try.  Actually if you
> > are having problems read it into a vector, check that its of the required
> > size, just in case, and then turn it into a matrix and transpose it.
> >
> >
> > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > there is another problem since last time i forgot "byrow" :(
> > > > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), nrow=273529, 
> > > > ncol=195, byrow=T)
> > > Read 53338155 items
> > > Error: cannot allocate vector of size 416704 Kb
> > >
> > > please help with this 'simple' reading task.
> > >
> > > weiwei
> > >
> > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > that sort of works for my purpose.
> > > >
> > > > btw, is there a bettter way to get data.frame by passing around
> > > > matrix(). Since I could not find data.frame() with nrow or ncol
> > > > arguments. so i have to use matrix first and then as.data.frame to
> > > > convert it.
> > > >
> > > > is there any other (better) way?
> > > >
> > > > weiwei
> > > >
> > > > On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > You could use the nlines= argument to scan to read in a
> > > > > portion at a time.
> > > > >
> > > > >
> > > > >
> > > > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > add:
> > > > > > I used
> > > > > > trn<-matrix(scan('train1.dat',  sep='|', na.string='.'), 
> > > > > > nrow=273529,
> > > > > ncol=195)
> > > > > >
> > > > > > it is done.
> > > > > > so it seems that I just have no patience to wait for half an hour :)
> > > > > >
> > > > > > but i still have that question:
> > > > > > is there a way to track the process if it takes too long. Could we
> > > > > > stop in the middle to see at which line it "hesitates" to move on?
> > > > > >
> > > > > > regards,
> > > > > >
> > > > > > weiwei
> > > > > >
> > > > > >
> > > > > > On 7/13/05, Weiwei Shi <[EMAIL PROTECTED]> wrote:
> > > > > > > Hi,
> > > > > > > I have a question on read.table.
> > > > > > >
> > > > > > > I have a dataset with 273,000 lines and 195 columns. I used the
> > > > > > > read.table to load the data into R:
> > > > > > > trn<-read.table('train1.dat', header=F, sep='|', na.strings='.')
> > > > > > > I found it takes forever.
> > > > > > >
> > > > > > > then I run 1/10 of the data (test) using read.table again. And 
> > > > > > > this
> > > > > > > time it finished quickly. So, there might be something wrong in my
> > > > > > > data format causing that problem.
> > > > > > >
> > > > > > > then, my question is, is there a way in R to track at which line,
> > > > > > > something wrong occurs?
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Weiwei
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Weiwei Shi, Ph.D
> > > > > > >
> > > > > > > "Did you always know?"
> > > > > > > "No, I did not. But I believed..."
> > > > > > > ---Matrix III
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Weiwei Shi, Ph.D
> > > > > >
> > > > > > "Did you always know?"
> > > > > > "No, I did not. But I believed..."
> > > > > > ---Matrix III
> > > > > >
> > > > > > __
> > > > > > R-help@stat.math.ethz.ch mailing list
> > > > > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > > > > PLEASE do read the posting guide!
> > > > > http://www.R-project.org/posting-guide.html
> > > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Weiwei Shi, Ph.D
> > > >
> > > > "Did you always know?"
> > > > "No, I did not. But I believed..."
> > > > ---Matrix III
> > > >
> > >
> > >
> > > --
> > > Weiwei Shi, Ph.D
> > >
> > > "Did you always know?"
> > > "No, I did not. But I believed..."
> > > ---Matrix III
> > >
> >
> 
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
> 


-- 
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] High resolution plots

2005-07-13 Thread Peter Dalgaard
Luis Tercero <[EMAIL PROTECTED]> writes:

> Dear R-help community,
> 
> would any of you have a (preferably simple) example of a 
> presentation-quality .png plot, i.e. one that looks like the .eps plots 
> generated by R?  I am working with R 2.0.1 in WindowsXP and am having 
> similar problems as Knut Krueger in printing high-quality plots.  I have 
> looked at the help file and examples therein as well as others I have 
> been able to find online but to no avail.  After many many tries I have 
> to concede I cannot figure it out.
> 
> I would be very grateful for your help.

What is the real issue here? Import trouble? If you're importing to
Word/PowerPoint, why not use the Windows metafile? Perhaps they are
too ugly compared to EPS for your taste?

Bitmapped formats are a pain to deal with in general. In principle,
you could just crank up the resolution to (say) 600 dpi, but if your
software rescales the image even slightly, things look horrible. 
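
For reference, that route looks roughly like this (the exact effect of
res= on text sizing depends a bit on the R version and device):

png("fig600.png", width = 6 * 600, height = 4 * 600, res = 600)  # 6 x 4 inches at 600 dpi
plot(rnorm(100), main = "high-resolution bitmap")
dev.off()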

For web graphics, I found a reasonably working solution by plotting at
a higher resolution than you need, then smoothing the image slightly
and finally rescaling it. This method is archived at
http://tolstoy.newcastle.edu.au/R/help/04/06/1094.html, but it does
need some unixy tools like the pnm toolchain. I wouldn't know if there
are Windows versions of those.



-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Proportion test in three-chices experiment

2005-07-13 Thread Rafael Laboissiere
Hi,

I wish to analyze with R the results of a perception experiment in which
subjects had to recognize each stimulus among three choices (this was a
forced-choice design).  The experiment runs under two different
conditions and the data is like the following:

   N1 : count of trials in condition 1
   p11, p12, p13: proportions of choices 1, 2, and 3 in condition 1
   
   N2 : count of trials in condition 2
   p21, p22, p23: proportions of choices 1, 2, and 3 in condition 2
   
How can I test whether the triple (p11,p12,p13) is different from the
triple (p21,p22,p23)?  Clearly, prop.test does not help me here, because
it relates to two-choice tests.

I apologize if the answer is trivial, but I am relatively new to R and
could not find any pointers in the FAQ or in the mailing list archives.

Thanks in advance for any help,

-- 
Rafael Laboissiere

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] help: how to get the position of a value in a matrix

2005-07-13 Thread wu sz
Hello,

I have a data set matrix of 1200 * 15. How can I get the position of a
specific value in the matrix?

I use "seq(along = x)[x > value]" to look for the position of the
value in the matrix, but "seq" can just find the sequence position row
by row in the matrix, not a real position (like "rowNumber,
colNumber"). Is any function for that?

Thank you,
Shengzhe

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] help: how to get the position of a value in a matrix

2005-07-13 Thread Liaw, Andy
Use which(..., arr.ind=TRUE); e.g.,

> m <- matrix(runif(12), 3, 4)
> which(m > .8, arr.ind=TRUE)
     row col
[1,]   1   3
[2,]   2   3
[3,]   3   3
[4,]   3   4
> m
          [,1]       [,2]      [,3]      [,4]
[1,] 0.2148183 0.08251853 0.9444718 0.4487148
[2,] 0.5386863 0.49673282 0.8054240 0.5101593
[3,] 0.6252847 0.70974516 0.8858951 0.8590655

Andy

> From: wu sz
> 
> Hello,
> 
> I have a data set matrix of 1200 * 15. How can I get the position of a
> specific value in the matrix?
> 
> I use "seq(along = x)[x > value]" to look for the position of the
> value in the matrix, but "seq" can just find the sequence position row
> by row in the matrix, not a real position (like "rowNumber,
> colNumber"). Is any function for that?
> 
> Thank you,
> Shengzhe
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] crossed random fx nlme lme4

2005-07-13 Thread Emilio A. Laca
I need to specify a model similar to this

lme.formula(fixed = sqrt(lbPerAc) ~ y + season + y:season, data = cy,
 random = ~y | observer/set, correlation = corARMA(q = 6))

except that observer and set are actually crossed instead of nested.

observer and set are factors
y and lbPerAc are numeric

If you know how to do it or have suggestions for reading I will be  
grateful.


eal

PS: I have already read Pinheiro & Bates, the Jan 05 newsletter, and
several postings.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] help: how to get the position of a value in a matrix

2005-07-13 Thread Simon Blomberg
See ?which Hint: arr.ind=TRUE

Simon.

At 09:28 AM 14/07/2005, wu sz wrote:
>Hello,
>
>I have a data set matrix of 1200 * 15. How can I get the position of a
>specific value in the matrix?
>
>I use "seq(along = x)[x > value]" to look for the position of the
>value in the matrix, but "seq" can just find the sequence position row
>by row in the matrix, not a real position (like "rowNumber,
>colNumber"). Is any function for that?
>
>Thank you,
>Shengzhe
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] stripchart usage and alternatives

2005-07-13 Thread Mike R
r=1:10
u=c("a","a","a","b","b","b","c","d","e","e")
uf = factor(u)
rm = tapply(r, uf, mean)

stripchart(r~u,vertical=TRUE,pch=21)
stripchart(rm~levels(uf),vertical=TRUE,pch=3,add=TRUE)

--
the above code creates a scatter plot of numeric data against a nominal (factor) axis

are there alternatives that generate the same or a similar
"kind" of figure? 


TIA,
Mike

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] crossed random fx nlme lme4

2005-07-13 Thread Simon Blomberg
At 09:35 AM 14/07/2005, Emilio A. Laca wrote:
>I need to specify a model similar to this
>
>lme.formula(fixed = sqrt(lbPerAc) ~ y + season + y:season, data = cy,
>  random = ~y | observer/set, correlation = corARMA(q = 6))
>
>except that observer and set are actually crossed instead of nested.

Does this work for you? (following P&B pp 162-3 and an R-help archive 
search on "crossed random effects")...

fit <- lme(sqrt(lbPerAc) ~ y * season,
     random = list(pdBlocked(list(pdIdent(~ y), pdIdent(~ observer - 1), pdIdent(~ set - 1)))),
     correlation = corARMA(q = 6), data = cy)

lme isn't very well set up for crossed random effects. It's easier in lmer. 
I don't think lmer can handle alternative correlation structures yet, 
though. (Prof. Bates?)

HTH,

Simon.


>observer and set are factors
>y and lbPerAc are numeric
>
>If you know how to do it or have suggestions for reading I will be
>grateful.
>
>
>eal
>
>ps I have already read Pinheiro & Bates, the jan 05 newsletter, and
>several postings.
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Simon Blomberg, B.Sc.(Hons.), Ph.D, M.App.Stat.
Centre for Resource and Environmental Studies
The Australian National University
Canberra ACT 0200
Australia
T: +61 2 6125 7800 email: Simon.Blomberg_at_anu.edu.au
F: +61 2 6125 0757
CRICOS Provider # 00120C

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Efficient testing for +ve definiteness

2005-07-13 Thread Spencer Graves
  My preference is to test whether the smallest eigenvalue is less than 
something like sqrt(.Machine$double.eps) times the largest.  This may be 
too conservative, but if the ratio of the smallest to the largest is 
less than some small number like that, the inverse of such a real 
symmetric matrix will have very large eigenvalue(s) in potentially 
unstable directions.  R may have other functions besides eigen that will 
explicitly consider the symmetry of a matrix, but I'm not familiar with 
them.
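
  A minimal sketch of that check, using a made-up test matrix:

A <- crossprod(matrix(rnorm(100), 10, 10))        # symmetric and (almost surely) positive definite
ev <- eigen(A, symmetric = TRUE, only.values = TRUE)$values
tol <- sqrt(.Machine$double.eps)
ev[length(ev)] > tol * ev[1]                      # eigen() returns values in decreasing order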

  spencer graves

Makram Talih wrote:

> Dear R-users,
> 
> Is there a preferred method for testing whether a real symmetric matrix is
> positive definite? [modulo machine rounding errors.]
> 
> The obvious way of computing eigenvalues via "E <- eigen(A, symmetric=T,
> only.values=T)$values" and returning the result of "!any(E <= 0)" seems
> less efficient than going through the LU decomposition invoked in
> "determinant.matrix(A)" and checking the sign and (log) modulus of the
> determinant.
> 
> I suppose this has to do with the underlying C routines. Any thoughts or
> anecdotes?
> 
> Many Thanks,
> 
> Makram Talih
> 
> --
> Makram Talih, Ph.D.
> Assistant Professor
> Department of Mathematics and Statistics
> Hunter College of the City University of New York
> 695 Park Avenue, Room 905 HE
> New York, NY 10021
> 
> Website: http://stat.hunter.cuny.edu/talih
> E-mail: [EMAIL PROTECTED]
> Tel: 212-772-5308
> Fax: 212-772-4858
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com 
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Memory question

2005-07-13 Thread Spencer Graves
  What kinds of matrices?  There are facilities in the Matrix and 
SparseM packages that might help for sparse matrices.  If they are N x k 
where N is large and k is not, can you compute something like the QR 
decomposition and get away with keeping only the R part for most of your 
matrices?

  One could potentially define a class of matrices that are kept 
in memory only when needed;  I think S-Plus may do that.  It would take 
a lot of work to make that work generally, but you might be able to 
accomplish what you need with a much smaller effort.
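
  For the simpler bookkeeping part of the question -- writing each matrix
to its own file and releasing the memory -- a minimal sketch (file names
made up) would be:

for (i in 1:10) {
    m <- matrix(rnorm(1000 * 1000), 1000, 1000)    # generate or update the matrix
    save(m, file = paste("matrix_", i, ".RData", sep = ""))
    rm(m)                                          # drop it from the workspace
    gc()                                           # let R reclaim the memory
}
# later, load("matrix_3.RData") restores the object 'm' from that file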

  spencer graves

Kenneth Roy Cabrera Torres wrote:

> Hi R users and developers:
> 
> I want to know how can I save memory in R
> for example:
>   - saving on disk a matrix.
>   - using again the matrix (changing their values)
>   - saving again the matrix on disk in a different file.
> 
> The idea is that I have a process that generate several
> matrices, but if I keep them all in memory it will overflow.
> 
> How can I save them in different files, so I use the same
> amount of memory for each processed matrix?
> 
> Thank you for your help.
> 

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com 
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] anova.lmlist output change

2005-07-13 Thread Ross Darnell
R-colleagues

I have adapted the anova.lmlist function to use the model object name as 
the first column in the output instead of the string "Model n".

If there is general agreement, can the change be implemented in the 
stats package?

Regards

Ross Darnell
-- 
University of Queensland, Brisbane QLD 4072 AUSTRALIA
Email: <[EMAIL PROTECTED]>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] convert to chron objects

2005-07-13 Thread Sean O'Riordain
are those dates in m/d/y or d/m/y?
See ?chron and watch out for 
format = c(dates = "d/m/y", times = "h:m:s")


On 13/07/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> [I had some emails problems so I am sending this again.  Sorry
> if you get it twice.]
> 
> On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> >
> >
> > On 7/13/05, Young Cho <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > >
> > > I have a column of a dataframe which has time stamps
> > > like:
> > >
> > > > eh$t[1]
> > > [1] 06/05/2005 01:15:25
> > >
> > > and was wondering how to convert it to chron variable.
> > > Thanks a lot.
> >
> >
> >
> >
> >
> Try this:
> 
> # test data frame eh containing a factor variable t
> eh <- data.frame(t = c("06/05/2005 01:15:25", "06/07/2005 01:15:25"))
> 
> # substring converts factor to character and extracts substring
> chron(dates = substring(eh$t, 1, 10), times = substring(eh$t, 12))
> 
> See ?chron for more info.  There is an article on dates in
> R News 4/1 and although it does not specifically answer this
> question it may be useful with chron and also provides a
> reference to more chron info elsewhere.
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] convert to chron objects

2005-07-13 Thread Gabor Grothendieck
An easy way to check what you have is to use month.day.year:

> eh <- data.frame(t = c("06/05/2005 01:15:25", "06/07/2005 01:15:25"))
> 
> # substring converts factor to character and extracts substring
> chron(dates = substring(eh$t, 1, 10), times = substring(eh$t, 12))
[1] (06/05/05 01:15:25) (06/07/05 01:15:25)
> month.day.year(.Last.value)
$month
[1] 6 6

$day
[1] 5 7

$year
[1] 2005 2005


On 7/14/05, Sean O'Riordain <[EMAIL PROTECTED]> wrote:
> are those dates in m/d/y or d/m/y ?
> ?chron and watch out for
> format = c(dates = "d/m/y", times = "h:m:s")
> 
> 
> On 13/07/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> > [I had some emails problems so I am sending this again.  Sorry
> > if you get it twice.]
> >
> > On 7/13/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > > On 7/13/05, Young Cho <[EMAIL PROTECTED]> wrote:
> > > > Hi,
> > > >
> > > > I have a column of a dataframe which has time stamps
> > > > like:
> > > >
> > > > > eh$t[1]
> > > > [1] 06/05/2005 01:15:25
> > > >
> > > > and was wondering how to convert it to chron variable.
> > > > Thanks a lot.
> > >
> > >
> > >
> > >
> > >
> > Try this:
> >
> > # test data frame eh containing a factor variable t
> > eh <- data.frame(t = c("06/05/2005 01:15:25", "06/07/2005 01:15:25"))
> >
> > # substring converts factor to character and extracts substring
> > chron(dates = substring(eh$t, 1, 10), times = substring(eh$t, 12))
> >
> > See ?chron for more info.  There is an article on dates in
> > R News 4/1 and although it does not specifically answer this
> > question it may be useful with chron and also provides a
> > reference to more chron info elsewhere.
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
> >
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

