RE: [R] strptime Usage

2003-11-25 Thread Gabor Grothendieck


strptime takes a character input and produces a POSIXlt output so
the format you specify to strptime is the format of the input, 
not the output:

   format( strptime("10/22/1986", "%m/%d/%Y"), "%Y-%m" )
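
A minimal runnable sketch of that distinction (the sample dates are taken from the question below):

```r
# strptime()'s format describes the INPUT string; format() controls the OUTPUT.
dates  <- c("1/6/1986", "1/17/1986")       # character input in %m/%d/%Y form
parsed <- strptime(dates, "%m/%d/%Y")      # parse: format matches the input
out    <- format(parsed, "%Y-%m")          # re-format the POSIXlt as year-month
out                                        # "1986-01" "1986-01"
```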

---
Date: Wed, 26 Nov 2003 13:23:45 +1300 (NZDT) 
From: Ko-Kang Kevin Wang <[EMAIL PROTECTED]>
To: R Help <[EMAIL PROTECTED]> 
Subject: [R] strptime Usage 

 
 
Hi,

I have a column in a dataframe in the form of:
> as.vector(SLDATX[1:20])
[1] "1/6/1986" "1/17/1986" "2/2/1986" "2/4/1986" "2/4/1986"
[6] "2/21/1986" "3/6/1986" "3/25/1986" "4/6/1986" "4/10/1986"
[11] "4/23/1986" "4/30/1986" "5/8/1986" "5/29/1986" "6/15/1986"
[16] "6/18/1986" "6/23/1986" "6/29/1986" "7/16/1986" "7/25/1986"

I'd like to convert it into either yyyy-mm or yyyy/mm form, e.g. 1986-06 
or 1986/06, and I've been suggested to use the strptime() function. 

However when I look at the documentation of it and tried something like:
> strptime(as.vector(SLDATX)[1:20], "%y/%m")
[1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA

I got a bunch of NA's. I also tried:
> strptime(as.vector(SLDATX)[1:20], "%y/%m/%d")
[1] "2001-06-19" NA "2002-02-19" "2002-04-19" "2002-04-19"
[6] NA "2003-06-19" NA "2004-06-19" "2004-10-19"
[11] NA NA "2005-08-19" NA NA
[16] NA NA NA NA NA

It is totally messed up.

I'd really appreciate it if anyone could point out where I went wrong *_*!

Many thanks in advance.


-- 
Cheers,

Kevin

---
"Try not. Do, do! Or do not. There is no try"
Jedi Master Yoda


Ko-Kang Kevin Wang
Master of Science (MSc) Student
SLC Tutor and Lab Demonstrator
Department of Statistics
University of Auckland
New Zealand
Homepage: http://www.stat.auckland.ac.nz/~kwan022
Ph: 373-7599
x88475 (City)
x88480 (Tamaki)

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help



RE: [R] Calculating great circle distances

2003-11-25 Thread Duncan Mackay
Have you seen "Online calculations & Downloadable spreadsheets to perform
Geodetic Calculations."
at http://www.ga.gov.au/nmd/geodesy/datums/calcs.jsp ?
Duncan

*
Dr. Duncan Mackay
School of Biological Sciences
Flinders University
GPO Box 2100
Adelaide
S.A.5001
AUSTRALIA

Ph (08) 8201 2627FAX (08) 8201 3015

http://www.scieng.flinders.edu.au/biology/people/mackay_d/index.html


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of
[EMAIL PROTECTED]
Sent: Wednesday, 26 November 2003 2:55 PM
To: [EMAIL PROTECTED]
Subject: [R] Calculating great circle distances


Hi,
Has anyone got any R code (or are there any packages) that calculates
the great circle distance between two geographical (lat, lon) positions?


Cheers

Toby Patterson
Pelagic Ecosystems Research Group
CSIRO Marine Research
Email: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help



[R] Calculating great circle distances

2003-11-25 Thread Toby.Patterson
Hi, 
Has anyone got any R code (or are there any packages) that calculates
the great circle distance between two geographical (lat, lon) positions?
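
Not from the thread, but a base-R haversine sketch covers the common case (function name and Earth radius of ~6371 km are assumptions; coordinates in decimal degrees):

```r
# Great-circle distance via the haversine formula, in km.
gcd <- function(lat1, lon1, lat2, lon2, R = 6371) {
  rad  <- pi / 180
  dlat <- (lat2 - lat1) * rad
  dlon <- (lon2 - lon1) * rad
  a <- sin(dlat / 2)^2 + cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2)^2
  2 * R * asin(sqrt(pmin(1, a)))           # pmin guards against rounding > 1
}
gcd(0, 0, 0, 90)   # a quarter of the equator, roughly 10008 km
```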


Cheers 

Toby Patterson 
Pelagic Ecosystems Research Group
CSIRO Marine Research 
Email: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Syntax error from the following command running on Win XP: Rcmd BATCH a:test.r

2003-11-25 Thread john byrne




What is the correct syntax for running a batch program (test.r) from the a:
drive? I have tried no quotes, and single and double quotes around a:test.r, to
no avail.

Thanks in anticipation.

John Byrne.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Visual Studio 6, GetRNGstate() causes crash

2003-11-25 Thread Amir Soofi
I'm using Visual Studio 6.

I set my project to get headers from C:\Program Files\R\rw1081\src\include and 
libraries from C:\Program Files\R\rw1081\src\gnuwin32 (though I'm not running 
gnuwin32 at all, just Visual Studio 6)

And I added the rdll.lib from C:\Program Files\R\rw1081\src\gnuwin32\rdll.lib

I tried both the one already in there and the one I made from scratch with lib and 
r.exp, as demonstrated in readme.packages.

My example program compiles and runs, unless I add GetRNGstate() and PutRNGstate() as 
prescribed by the manual.  Then it causes a fatal crash after running, where Microsoft 
asks me to send an error report to them.

Any suggestions?

-Amir

#include "stdafx.h"
#include <stdio.h>   /* the original header names were eaten by the archive; stdio.h for printf */
#include <stdlib.h>

double exp_rand();

int main(int argc, char* argv[])
{
 //GetRNGstate();
 double mydouble;
 printf("Hello World!\n");
 mydouble = exp_rand();
 printf("value: %f\n", mydouble);
 //PutRNGstate();
 return 0;
}
[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Update to DataLoad on VSN website

2003-11-25 Thread Baird, David
I would like to announce that my DataLoad utility which can be found at:
http://www.vsn-intl.com/genstat/downloads/datald.htm 
has been updated to support R data.frames. DataLoad reads a
large variety of data formats (listed on the web page) and
can convert these to ASCII or XDR data.frames. Only the Windows
and Linux versions are up to date due to the fact that I no longer
has access to a Unix system (the Sparc/Motif version included
in the zip file does not have all the latest features). I'm
currently working with Stefano Iacus on developing a Mac OSX
version. This is not open source software, due to the fact that 
its development has been paid for by the GenStat user community, 
and VSN, the distributor of GenStat, would like to maintain its 
rights to the source. In the spirit of co-operation VSN have 
allowed me to distribute the compiled executable, and those R users
who are not against free gifts are welcome to download it.
I hope this is of use to the R community.

I would like to thank Gabor Grothendieck for many helpful suggestions
that are likely to make the utility more flexible and useful.

I am open to anyone who has troubles or bugs in converting particular
files in one of the supported formats emailing me to work out a
solution. 

Best wishes,
David
_
Dr David Baird, Biometrician  EMail:  [EMAIL PROTECTED]
Mail: AgResearch, PO Box 60, Gerald St, Lincoln, NEW ZEALAND
Phone: +64 3 983 3975   Fax: +64 3 983 3946
===
Attention: The information contained in this message and/or ...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] nlm

2003-11-25 Thread Amir Soofi
Can I use nlm through the R API from C?

It's an internal, so I believe it's not supported yet.

If not, are there any suggestions for a workaround?

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] strptime Usage

2003-11-25 Thread Jason Turner
Ko-Kang Kevin Wang wrote:

Hi,

I have a column in a dataframe in the form of:

as.vector(SLDATX[1:20])
 [1] "1/6/1986"  "1/17/1986" "2/2/1986"  "2/4/1986"  "2/4/1986"
 [6] "2/21/1986" "3/6/1986"  "3/25/1986" "4/6/1986"  "4/10/1986"
[11] "4/23/1986" "4/30/1986" "5/8/1986"  "5/29/1986" "6/15/1986"
[16] "6/18/1986" "6/23/1986" "6/29/1986" "7/16/1986" "7/25/1986"
...

First, you have to make this character vector into a time object.
You want something like:
times <- strptime(as.vector(SLDATX[1:20]), "%m/%d/%Y")

so R knows what format you're using for dates.

From there, format(times,"%Y/%m") will work.

Subtle trap - strptime produces a list of 9 vectors; "times" will always 
have length 9.  If you want to include this into a data frame, you'll 
need to convert to a  POSIX time type:

as.POSIXct(times)

to get the right length.
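
A short sketch of that conversion step (variable names assumed from the thread):

```r
# strptime() returns a POSIXlt, a list of date-time components; convert it
# to POSIXct before storing it in a data frame so the lengths line up.
times <- strptime(c("1/6/1986", "1/17/1986"), "%m/%d/%Y")
df    <- data.frame(when = as.POSIXct(times))
nrow(df)          # 2, one row per date
```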

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] strptime Usage

2003-11-25 Thread Ko-Kang Kevin Wang
Hi,

I have a column in a dataframe in the form of:
> as.vector(SLDATX[1:20])
 [1] "1/6/1986"  "1/17/1986" "2/2/1986"  "2/4/1986"  "2/4/1986"
 [6] "2/21/1986" "3/6/1986"  "3/25/1986" "4/6/1986"  "4/10/1986"
[11] "4/23/1986" "4/30/1986" "5/8/1986"  "5/29/1986" "6/15/1986"
[16] "6/18/1986" "6/23/1986" "6/29/1986" "7/16/1986" "7/25/1986"

I'd like to convert it into either yyyy-mm or yyyy/mm form, e.g. 1986-06 
or 1986/06, and I've been suggested to use the strptime() function.  

However when I look at the documentation of it and tried something like:
> strptime(as.vector(SLDATX)[1:20], "%y/%m")
 [1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA

I got a bunch of NA's.  I also tried:
> strptime(as.vector(SLDATX)[1:20], "%y/%m/%d")
 [1] "2001-06-19" NA   "2002-02-19" "2002-04-19" "2002-04-19"
 [6] NA   "2003-06-19" NA   "2004-06-19" "2004-10-19"
[11] NA   NA   "2005-08-19" NA   NA
[16] NA   NA   NA   NA   NA

It is totally messed up.

I'd really appreciate it if anyone could point out where I went wrong *_*!

Many thanks in advance.


-- 
Cheers,

Kevin

---
"Try not.  Do, do!  Or do not.  There is no try"
   Jedi Master Yoda


Ko-Kang Kevin Wang
Master of Science (MSc) Student
SLC Tutor and Lab Demonstrator
Department of Statistics
University of Auckland
New Zealand
Homepage: http://www.stat.auckland.ac.nz/~kwan022
Ph: 373-7599
x88475 (City)
x88480 (Tamaki)

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] weighted mean

2003-11-25 Thread Andrew C. Ward
Dear Marc,

For the weighted mean, one possible solution is as follows
and will hopefully give you the general idea:

tmp <- data.frame(x=sample(1:5, 100, replace=TRUE), 
  y=sample(1:100, 100, replace=TRUE),
  w=runif(100))
lapply(split(tmp[, 2:3], tmp[, "x"]),
   function(x) { weighted.mean(x=x$y, w=x$w)})


Regards,

Andrew C. Ward

CAPE Centre
Department of Chemical Engineering
The University of Queensland
Brisbane Qld 4072 Australia


Quoting [EMAIL PROTECTED]:

> How do I go about generating a WEIGHTED mean (and
> standard error) of a
> variable (e.g., expenditures) for each level of a
> categorical variable
> (e.g., geographic region)?  I'm looking for something
> comparable to PROC
> MEANS in SAS with both a class and weight statement.
> 
>  
> 
> Thanks.
> 
>  
> 
> Marc
> 
>  
> 
>  
> 
>  
> 
>  
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] weighted mean

2003-11-25 Thread Jason Turner
[EMAIL PROTECTED] wrote:

How do I go about generating a WEIGHTED mean (and standard error) of a
variable (e.g., expenditures) for each level of a categorical variable
(e.g., geographic region)?  I'm looking for something comparable to PROC
MEANS in SAS with both a class and weight statement.
That's two questions.
1) to apply a weighted mean to a vector, see ?weighted.mean
2) to apply a function to data grouped by categorical variable, you 
probably need "by" or "tapply".  See the help pages and examples for both.
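
Putting those two steps together, a sketch with made-up data (column names are invented):

```r
# weighted.mean() applied per level of a categorical variable via split().
set.seed(1)
dat <- data.frame(region = rep(c("A", "B"), each = 5),
                  spend  = rnorm(10, mean = 100, sd = 10),
                  w      = runif(10))
wm <- sapply(split(dat, dat$region),
             function(d) weighted.mean(d$spend, d$w))
wm                 # one weighted mean per region
```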

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] problem plotting curve through data

2003-11-25 Thread Douglas Bates
Christian Reilly <[EMAIL PROTECTED]> writes:

> I'm having trouble plotting a curve (basically a dose-response
> function) through a set of data.
> 
> I have created a dataframe (df) of Stimulus Intensities (xstim) and
> Normalized Responses (yresp), and I've used nls() to calculate a nonlinear
> regression, like so:
> 
> 
> 
> > f <- yresp ~ xstim^n / (xstim^n + B^n)
> > starts <- list(n=.6,  B=11)
> > myfit <- nls(formula=f, data=df, start = starts)
> >myfit
> 
> Nonlinear regression model
>   model:  yresp ~ xstim^n/(xstim^n + B^n)
>data:  df
> n B
> 0.8476233 5.7943791
>  residual sum-of-squares:  0.03913122
> >
> ---
> 
> Which seems great, but I'd like to be able to plot this curve through my
> data to see how the fit looks.
> 
> when I try:
> 
> 
> > plot(f,data=df)
> Error in terms.formula(formula, data = data) :
>   invalid power in formula
> --
> 
> or
> 
> --
> > n <- 0.87
> > B <- 5.7
> > plot(f,data=df)
> Error in terms.formula(formula, data = data) :
>   invalid power in formula
> 

Use predict.nls to get the fitted response curve.  See
?predict.nls
and especially the example, which you can run with
example(predict.nls)
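
A self-contained sketch of that approach, with simulated data standing in for df (the "true" parameters used to simulate are invented):

```r
# Fit the dose-response model, then overlay predict() output on the data.
set.seed(42)
xstim <- seq(0.5, 40, length.out = 30)
yresp <- xstim^0.85 / (xstim^0.85 + 5.8^0.85) + rnorm(30, sd = 0.02)
df    <- data.frame(xstim, yresp)
myfit <- nls(yresp ~ xstim^n / (xstim^n + B^n), data = df,
             start = list(n = 0.6, B = 11))
plot(yresp ~ xstim, data = df)
grid  <- data.frame(xstim = seq(0.5, 40, length.out = 200))
lines(grid$xstim, predict(myfit, newdata = grid))   # fitted curve through data
```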

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] plot mean + S.E. over time

2003-11-25 Thread Duncan Mackay
check out "plotCI" and "plotmeans" in the gregmisc library.
Duncan
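
For a base-R-only alternative (no gregmisc needed), one common pattern is tapply() plus arrows(); the data below are simulated:

```r
# Mean +/- standard error per timestep, drawn with error bars via arrows().
set.seed(7)
runs <- data.frame(t = rep(1:10, each = 20), y = rnorm(200))
m  <- tapply(runs$y, runs$t, mean)
se <- tapply(runs$y, runs$t, function(v) sd(v) / sqrt(length(v)))
plot(1:10, m, ylim = range(m - se, m + se),
     xlab = "time", ylab = "mean response")
arrows(1:10, m - se, 1:10, m + se, angle = 90, code = 3, length = 0.05)
```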

*
Dr. Duncan Mackay
School of Biological Sciences
Flinders University
GPO Box 2100
Adelaide
S.A.5001
AUSTRALIA

Ph (08) 8201 2627FAX (08) 8201 3015

http://www.scieng.flinders.edu.au/biology/people/mackay_d/index.html


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Jan Wantia
Sent: Wednesday, 26 November 2003 12:34 AM
To: [EMAIL PROTECTED]
Subject: [R] plot mean + S.E. over time


Hi, there!

I finally became a disciple of 'R', after having lost years of my life
handling data with a popular, rather wide-spread spreadsheet-software.

Now I want to plot the results of many runs of my simulation over time,
so that the means +/- Standard error are on the y-axis, and time on the
x-axis.

I have tried 'boxplot', with timesteps as the grouping variable, but did
not manage to replace the quartiles with the S.E.
Then, with 'plot' I do not know how to handle the data of 100 runs for a
given time to produce the mean and S.E.

Are there any suggestions? Any help would be appreciated!

Cheers, Jan

--

__

Jan Wantia
Dept. of Information Technology, University of Zürich
Andreasstr. 15
CH 8050 Zürich
Switzerland

Tel.: +41 (0) 1 635 4315
Fax: +41 (0) 1 635 45 07
email: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help



Re: [R] hist plot and custom "band" width

2003-11-25 Thread Jason Turner
Mathieu Drapeau wrote:

Hi,
I have some difficulty figuring out how to set a range for my histogram 
bands.
I have values in [0,5000000] and they appear once in my list. How 
can I do a histogram that includes all the values within a range of 
10000 together? [0,10000],[10001,20000],[20001,30000], ...
?hist
see the "breaks" argument.
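
A sketch with simulated values (the 0 to 5,000,000 range is an assumption about the original data):

```r
# Fixed-width bins: pass an explicit vector of break points to hist().
x <- runif(1000, min = 0, max = 5e6)
h <- hist(x, breaks = seq(0, 5e6, by = 10000), plot = FALSE)
length(h$counts)   # 500 bins, each 10000 wide
```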
Cheers

Jason

--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] weighted mean

2003-11-25 Thread MZodet
How do I go about generating a WEIGHTED mean (and standard error) of a
variable (e.g., expenditures) for each level of a categorical variable
(e.g., geographic region)?  I'm looking for something comparable to PROC
MEANS in SAS with both a class and weight statement.

 

Thanks.

 

Marc

 

 

 

 


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] hist plot and custom "band" width

2003-11-25 Thread Mathieu Drapeau
Hi,
I have some difficulty figuring out how to set a range for my histogram bands.
I have values in [0,5000000] and they appear once in my list. How 
can I do a histogram that includes all the values within a range of 
10000 together? [0,10000],[10001,20000],[20001,30000], ...

Thanks,
Mathieu
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] problem plotting curve through data

2003-11-25 Thread Christian Reilly

I'm having trouble plotting a curve (basically a dose-response
function) through a set of data.

I have created a dataframe (df) of Stimulus Intensities (xstim) and
Normalized Responses (yresp), and I've used nls() to calculate a nonlinear
regression, like so:



> f <- yresp ~ xstim^n / (xstim^n + B^n)
> starts <- list(n=.6,  B=11)
> myfit <- nls(formula=f, data=df, start = starts)
>myfit

Nonlinear regression model
  model:  yresp ~ xstim^n/(xstim^n + B^n)
   data:  df
n B
0.8476233 5.7943791
 residual sum-of-squares:  0.03913122
>
---

Which seems great, but I'd like to be able to plot this curve through my
data to see how the fit looks.

when I try:


> plot(f,data=df)
Error in terms.formula(formula, data = data) :
invalid power in formula
--

or

--
> n <- 0.87
> B <- 5.7
> plot(f,data=df)
Error in terms.formula(formula, data = data) :
invalid power in formula


If anyone can give me any tips on this "Error in terms.formula" message,
or on how functions are plotted over data, I'd be much obliged. I've only
been at R a short while, so no explanations or suggestions are too simplistic.
Even an example of a plot of a logistic growth function of the form

R = S^n / ( S^n + S50^n)

where R is response, S is stimulus intensity and S50 is the intensity that
produces a 50% response

would be tremendously helpful



Cheers,

and many thanks,

Christian

---
Christian Reilly        http://www.stanford.edu/~heron
Hopkins Marine Station
Pacific Grove, CA 93950
[EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Persistent state of R

2003-11-25 Thread Joe Conway
michael watson (IAH-C) wrote:
I am trying to make my cgi scripts quicker and it turns out that the
bottle-neck is the loading of the libraries into R - for example
loading up marrayPlots into R takes 10-20 seconds, which although not
long, is long enough for users to imagine it is not working and start
clicking reload
So I just wondered if anyone had a neat solution whereby I could
somehow have the required libraries permanently loaded into R -
perhaps I need a persistent R process with the libraries in memory
that I can pipe commands to?  Is this possible?
If you are processing data already stored in a database, you could use 
Postgres and PL/R. See:
  http://www.joeconway.com/

Use Postgres 7.4 and preload PL/R for the best performance -- i.e put 
the following line in $PGDATA/postgresql.conf
preload_libraries = '$libdir/plr:plr_init'

HTH,

Joe

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Re: Raqua.dmg on MacOS X Panther? - works fine now

2003-11-25 Thread cstrato
Dear MacR users

Sorry for the earlier mail, now everything works really great.
For some reason I could not start R immediately after installation,
I had to log out first and then login again, then R did start from
a really great R Console.
Best regards
Christian
cstrato wrote:

Dear MacR users

During the weekend I did a clean install of MacOSX 10.3.1, of Apples X11,
of Apples development tools, and of the basic Fink 0.6.2 package.
Now I have just downloaded Raqua.dmg from CRAN and installed the
RAqua, libreadline and tcltk packages.
Sadly, double-clicking on StartR has no effect, and starting R from the
command line does not find R, even if I start R from /usr/local/bin.

Does the Raqua binary work with Panther?
(Why do I need to install tcltk, when it comes installed with Panther?)
Thank you in advance
Best regards
Christian
_._._._._._._._._._._._._._._._
C.h.i.s.t.i.a.n S.t.r.a.t.o.w.a
V.i.e.n.n.a   A.u.s.t.r.i.a
_._._._._._._._._._._._._._._._

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: 64-bit R on Opteron [was Re: [R] Windows R 1.8.0 hangs when Mem Usage >1.8GB]

2003-11-25 Thread Liaw, Andy
> From: Douglas Bates [mailto:[EMAIL PROTECTED]
> 
> "Liaw, Andy" <[EMAIL PROTECTED]> writes:
> 
> > Sorry.  I need to retract my claim.  There seems to be a 3G 
> limit, even
> > though the OS could handle nearly 8G.  (I can have two 
> simultaneous R
> > processes each using near 3G.)
> > 
> > On another note, on our dual Opteron box R (compiled as 
> 64-bit) could easily
> > use nearly all the 16G in that box (that's one of the 
> reason for having that
> > box).
> 
> Does "could" mean you have verified that it did or is this a
> theoretical statement?  I.e., have you compiled and tested R on your
> dual Opteron?

Given my questionable memory of things, this question is very fair.
Here's the evidence:

> x <- matrix(0, 5e5, 5e5)
> x2 <- matrix(0, 5e5, 5e5)
> gc()
             used    (Mb) gc trigger    (Mb)
Ncells    4133792     2.1    7411083     9.6
Vcells 1783900405 13610.1 1784288128 13613.1

Best,
Andy

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Raqua.dmg on MacOS X Panther?

2003-11-25 Thread cstrato
Dear MacR users

During the weekend I did a clean install of MacOSX 10.3.1, of Apples X11,
of Apples development tools, and of the basic Fink 0.6.2 package.
Now I have just downloaded Raqua.dmg from CRAN and installed the
RAqua, libreadline and tcltk packages.
Sadly, double-clicking on StartR has no effect, and starting R from the
command line does not find R, even if I start R from /usr/local/bin.
Does the Raqua binary work with Panther?
(Why do I need to install tcltk, when it comes installed with Panther?)
Thank you in advance
Best regards
Christian
_._._._._._._._._._._._._._._._
C.h.i.s.t.i.a.n S.t.r.a.t.o.w.a
V.i.e.n.n.a   A.u.s.t.r.i.a
_._._._._._._._._._._._._._._._
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] plotting to postscript: how to control line width ?

2003-11-25 Thread ryszard . czerminski
When I use
plot(..., type = "line") then the lwd parameter makes a difference...

Because of the large number of points overlapping I simply had an impression
before that I was getting a thick line...

R





Ryszard Czerminski/PH/[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
11/25/2003 01:34 PM

 
To: [EMAIL PROTECTED]
cc: 
Subject:[R] plotting to postscript: how to control line width ?


How to control line width ?

if I do:

> postscript("IC50-density.eps", width = 4.0, height = 3.0, horizontal = 
FALSE, onefile = FALSE, paper = "special", title = "IC50 distribution")
> plot(d$x, d$y, xlab = "-log10(IC50)", ylab = "density")
> lines(d$x, d$y, lwd = 0.1)
> dev.off()

but whatever value I give for the lwd parameter (e.g. 0.1 or 10) I am getting 
the same line width?!

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help



RE: [R] Windows R 1.8.0 hangs when Mem Usage >1.8GB

2003-11-25 Thread Liaw, Andy
Sorry.  I need to retract my claim.  There seems to be a 3G limit, even
though the OS could handle nearly 8G.  (I can have two simultaneous R
processes each using near 3G.)

On another note, on our dual Opteron box R (compiled as 64-bit) could easily
use nearly all the 16G in that box (that's one of the reason for having that
box).

Cheers,
Andy

> From: Paul Gilbert [mailto:[EMAIL PROTECTED] 
> 
> Liaw, Andy wrote:
> > With a custom compiled kernel, I've run R processes that 
> used more than 5GB
> > of RAM on a Linux box with 8GB RAM and dual Xeons.  So it 
> seems to work on
> > 32-bit Linux with big memory kernel.
> > 
> > Andy
> 
> I'm curious about this. I believe the address space limit of a 32-bit 
> processor is 4G, and I thought Xeons were 32-bit processors. 
> How can a 
> single process exceed the address space?
> 
> Thanks,
> Paul Gilbert
> > 
> > 
> > 
> >>From: Duncan Murdoch [mailto:[EMAIL PROTECTED] 
> > 
> > [snip] 
> > 
> >>Normally the maximum memory allowed for any process in 
> >>Windows is 2 GB.  It's possible to raise that to 3 GB but R 
> >>1.8 doesn't know how, so that's an absolute upper limit.  
> >>Version 1.9 may be able to go up to 3 GB, but beyond that 
> >>you'll probably need a 64 bit processor:  as far as I know 
> >>all the 32 bit OS's limit each process to 2 or 3 GB, because 
> >>they reserve 1 or 2 GB for themselves.
> >>
> > 
> > [snip]
> >  
> > 
> >>Duncan Murdoch
> >>
> >>__
> >>[EMAIL PROTECTED] mailing list 
> >>https://www.stat.math.ethz.ch/mailman/listinfo> /r-help
> >>
> > 
> > 
> > __
> > [EMAIL PROTECTED] mailing list
> > https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> > 
> 
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] plotting to postscript: how to control line width ?

2003-11-25 Thread ryszard . czerminski
How to control line width ?

if I do:

> postscript("IC50-density.eps", width = 4.0, height = 3.0, horizontal = 
FALSE, onefile = FALSE, paper = "special", title = "IC50 distribution")
> plot(d$x, d$y, xlab = "-log10(IC50)", ylab = "density")
> lines(d$x, d$y, lwd = 0.1)
> dev.off()

but whatever value I give for the lwd parameter (e.g. 0.1 or 10) I am getting 
the same line width?!
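
For reference, a sketch that writes two visibly different widths to one EPS file (the file name via tempfile() is just for illustration):

```r
# lwd is honored per lines() call on the postscript device.
f <- tempfile(fileext = ".eps")
postscript(f, width = 4, height = 3, horizontal = FALSE,
           onefile = FALSE, paper = "special")
x <- seq(0, 10, by = 0.1)
plot(x, sin(x), type = "n")            # set up axes only
lines(x, sin(x), lwd = 0.5)            # thin line
lines(x, sin(x) + 0.5, lwd = 3)        # thick line
dev.off()
```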

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Questions on Random Forest

2003-11-25 Thread Liaw, Andy
It's not clear to me what you want to do, but if I understand your problem
somewhat, I don't see how randomForest would be relevant.

Sounds like you are doing the following:

o  Read in a 512x512 image with pixel intensities.
o  You somehow fit a 3-component normal mixture model to the intensity data,
and have labels for which component the pixels belong to.
o  You want to be able to "fit" (or "predict") other images to the
3-component mixture model you have; i.e., create the "label" data given an
image.

If that's about right, I don't see why you would need some learning
algorithm such as randomForest.  You should be able to compute the
likelihood that a pixel belong to each of the 3 components in the mixture
model, based on the fitted parameters of that model.  The simplest I can
think of, ignoring the mixing proportions, is to simply compute the absolute
Z scores of a pixel with respect to the three components: z1 =
abs((x-u1)/sigma1), z2 = abs((x-u2)/sigma2), z3 = abs((x-u3)/sigma3), and
assign the pixel to the component with the smallest absolute z-score.
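
A vectorized sketch of that nearest-component rule (the u and sigma values below are invented stand-ins for fitted mixture parameters):

```r
# Assign each pixel to the component whose mean is nearest in standardized
# units, i.e. the smallest |z|.
u     <- c(50, 120, 200)
sigma <- c(10, 15, 20)
classify <- function(x) {
  z <- abs(outer(x, u, "-")) / rep(sigma, each = length(x))
  max.col(-z)                  # column index of the smallest |z| per pixel
}
classify(c(45, 130, 210))      # 1 2 3
```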

HTH,
Andy

> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Fucang Jia
> 
> Hi, everyone,
> 
> I am a newbie on R. Now I want to do image pixel 
> classification by random 
> forest. But I has not a clear understanding on random forest. 
> Here is some 
> question:
> 
> As for an image, for example its size is 512x512 and has only 
> one variable 
> -- gray level. The histogram of the image looks like mixture 
> Gaussian Model, 
> say Gauss distribution (u1,sigma1), (u2,sigma2),(u3,sigma3). 
> And a image 
> classified by K-means or EM algorithm, so the class label 
> image is also 
> 512x512 and has 0, 1, 2 value.
> 
> I read the binary image data as follows:
> 
> datafile <- file("bone.img","rb")
> img <- readBin(datafile,size=2,what="integer",n=512*512,signed=FALSE)
> img <- as.matrix(img)
> close(datafile)
> 
> labelfile <- file(label.img","rb")
> label <- 
> readBin(labelfile,size=2,what="integer",n=512*512,signed=FALSE)
> label <- as.matrix(label)
> close(labelfile)
> 
> img_and_label <- c(img,label)  // binds the image data and class label
> img_and_label <- as.matrix(img_and_label)
> img_and_label <- array(img_and_label, dim=c(262144,2))
> 
> 
> Random Forest need a class label like "Species" in the  iris. 
> I do not know 
> how
> to set a class label like "Species" to the img.  So I run the 
> command as 
> follows:
> 
> set.seed(166)
> rf <- 
> randomForest(img_and_label[,2],data=image_and_label,importance=TRUE,
> proximity=TRUE)
> 
> which outputs:
> 
> Error in if (n == 0) stop("data (x) has 0 rows") :
> argument is of length zero
> 
> Could anyone tell what is wrong and how can do the RF?
> 
> Secondly, if there is an new image , say img3 (dimension is 
> 512x512,too), 
> how can I
> use the former result to classifify the new image?
> 
> Thirdly, whether or not random forest be used well if there 
> is only one 
> variable, say pixel
> gray level, or three variables, such as red, green, blue 
> color component to 
> an true color
> image?
> 
> Thank you very much!
> 
> Best,
> 
> Fucang
> 
> 
> Fucang Jia, Ph.D student
> Institute of Computing Technology, Chinese Academy of Sciences
> Post.Box 2704
> Beijing, 100080
> P.R.China
> E-mail:[EMAIL PROTECTED]
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Questions on Random Forest

2003-11-25 Thread Wiener, Matthew
It looks like image_and_label has only 2 columns, so when you take
img_and_label[,2] you have a vector left.  Even if that weren't the case,
you're going to need to pass in both the gray scale points and labels,
presumably in a data frame.  You've created a character matrix below, so
you're just passing in a character vector of labels.

You'll probably want something like 
rf <- randomForest(label~image,data=image_and_label,importance=TRUE,
proximity=TRUE),

assuming that image_and_label is a data frame with elements image and label.


For the second question, see the documentation for the predict method for
random forests; for the third, the answer is yes, random forests can be used
with multiple variables.

There is an introduction to the random forests package in volume 2, issue 3
of the R newsletter (available in the documentation section of cran).
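
A toy sketch of that workflow (assumes the randomForest package is installed; the two-class pixel intensities are simulated):

```r
# Formula interface on a data frame, then predict() on new pixel values.
library(randomForest)
set.seed(166)
pix <- data.frame(gray  = c(rnorm(100, 50, 5), rnorm(100, 150, 5)),
                  label = factor(rep(c("dark", "bright"), each = 100)))
rf  <- randomForest(label ~ gray, data = pix)
predict(rf, newdata = data.frame(gray = c(48, 152)))   # classify new pixels
```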

Hope this helps,

Matt Wiener

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Fucang Jia
Sent: Monday, November 24, 2003 10:31 AM
To: [EMAIL PROTECTED]
Subject: [R] Questions on Random Forest


Hi, everyone,

I am a newbie on R. Now I want to do image pixel classification by random 
forest, but I do not have a clear understanding of random forests. Here are 
some questions:

As for an image, for example its size is 512x512 and has only one variable 
-- gray level. The histogram of the image looks like mixture Gaussian Model,

say Gauss distribution (u1,sigma1), (u2,sigma2),(u3,sigma3). And a image 
classified by K-means or EM algorithm, so the class label image is also 
512x512 and has 0, 1, 2 value.

I read the binary image data as follows:

datafile <- file("bone.img","rb")
img <- readBin(datafile,size=2,what="integer",n=512*512,signed=FALSE)
img <- as.matrix(img)
close(datafile)

labelfile <- file(label.img","rb")
label <- readBin(labelfile,size=2,what="integer",n=512*512,signed=FALSE)
label <- as.matrix(label)
close(labelfile)

img_and_label <- c(img,label)  // binds the image data and class label
img_and_label <- as.matrix(img_and_label)
img_and_label <- array(img_and_label, dim=c(262144,2))


Random Forest needs a class label like "Species" in the iris data. I do not know 
how
to set a class label like "Species" for the img.  So I run the command as 
follows:

set.seed(166)
rf <- randomForest(img_and_label[,2],data=image_and_label,importance=TRUE,
proximity=TRUE)

which outputs:

Error in if (n == 0) stop("data (x) has 0 rows") :
argument is of length zero

Could anyone tell what is wrong and how can do the RF?

Secondly, if there is a new image, say img3 (dimension 512x512, too), 
how can I
use the former result to classify the new image?

Thirdly, can random forests be used well if there is only one 
variable, say pixel
gray level, or three variables, such as the red, green, and blue 
color components of
a true-color image?

Thank you very much!

Best,

Fucang


Fucang Jia, Ph.D student
Institute of Computing Technology, Chinese Academy of Sciences
Post.Box 2704
Beijing, 100080
P.R.China
E-mail:[EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help



Re: [R] using pdMAT in the lme function?

2003-11-25 Thread Douglas Bates
"Bill Shipley" <[EMAIL PROTECTED]> writes:

> Hello.  I want to specify a diagonal structure for the covariance matrix
> of random effects in the lme() function.
> 
> Here is the call before I specify a diagonal structure:
> > fit2<-lme(Ln.rgr~I(Ln.nar-log(0.0011)),data=meta.analysis,
> 
> + random=~1+I(Ln.nar-log(0.0011))|STUDY.CODE,na.action=na.omit)
> 
>  
> 
> and this works fine.  Now, I want to fix the covariance between the
> between-groups slopes and intercepts to zero.  I try to do this using
> the pdDiag command as follows, but it does not work:
> 
>  
> 
> > fit2<-lme(Ln.rgr~I(Ln.nar-log(0.0011)),data=meta.analysis,
> 
> + random=pdDiag(diag(2),~1+I(Ln.nar-log(0.0011))|STUDY.CODE),na.action=na.omit)

Try random=list(STUDY.CODE=pdDiag(~1+I(Ln.nar-log(0.0011))))
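
For readers unfamiliar with the list form of `random', here is a hedged,
self-contained sketch of the same idea on nlme's built-in Orthodont data
(Subject playing the role of STUDY.CODE):

```r
library(nlme)

## general positive-definite random effects: intercept and slope correlated
fit.full <- lme(distance ~ age, data = Orthodont,
                random = ~ 1 + age | Subject)

## pdDiag forces the covariance between intercept and slope to zero
fit.diag <- lme(distance ~ age, data = Orthodont,
                random = list(Subject = pdDiag(~ 1 + age)))

anova(fit.full, fit.diag)  # likelihood-ratio comparison of the two structures
```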

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Persistent state of R

2003-11-25 Thread Luke Tierney
On Tue, 25 Nov 2003, michael watson (IAH-C) wrote:

> Hi
> 
> I am using R as a back-end to some CGI scripts, written in Perl.  My platform is 
> Suse Linux 8.2, Apache 1.3.7.  So the CGI script takes some form parameters, opens a 
> pipe to an R process, loads up some Bioconductor libraries, executes some R commands 
> and takes the ouput and creates a web page.  It is all very neat and works well.
> 
> I am trying to make my cgi scripts quicker and it turns out that the bottle-neck is 
> the loading of the libraries into R - for example loading up marrayPlots into R 
> takes 10-20 seconds, which although not long, is long enough for users to imagine it 
> is not working and start clicking reload
> 
> So I just wondered if anyone had a neat solution whereby I could somehow have the 
> required libraries permanently loaded into R - perhaps I need a persistent R process 
> with the libraries in memory that I can pipe commands to?  Is this possible?
> 
> Thanks
> Mick

One option we have been experimenting with is to process the contents
of packages into a simple data base and then use "lazy loading".  This
means that loading a package will load only a small amount of
information, basically the names of the variables defined.  The actual
values are only loaded from the data base on demand.  An experimental
package that implements this is available at

http://www.stat.uiowa.edu/~luke/R/serialize/lazyload.tar.gz

I believe `make check-all' passes with all base and recommended
packages set up for lazy loading.  Lazy loading has not been tested
much with packages using S4 methods or with anything in Bioconductor
as far as I know.  So this may or may not do anything useful for you.

[WARNING: Since this messes with the installed packages in your R
system you should only experiment with it in an R installation you can
afford to mess up.]

The README file from the package is attached below.

Best,

luke

(README)---
This package provides tools to set up packages for lazy loading from a
data base.  If you want to try this out, here are the steps:

1) Install the current version of package lazyload from
http://www.stat.uiowa.edu/~luke/R/serialize/.

2) To make base use lazy loading, start R with something like

env R_DEFAULT_PACKAGES=NULL R

to make sure no packages are loaded.  Then do

source(file.path(.find.package("lazyload"),"makebasedb.R"))
library(lazyload)
makeLazyLoading("base")

Make sure to do the source first, then the library call.

3) To make package foo use lazy loading use makeLazyLoading("foo").
You can make all base packages use lazy loading with

for (p in rev(installed.packages(priority="base")[,"Package"])) {
cat(paste("converting", p, "..."))
makeLazyLoading(p)
cat("done\n")
}

The rev() is a quick and dirty way to get stepfun done before modreg,
since modreg imports from stepfun.


-- 
Luke Tierney
University of Iowa                    Phone: 319-335-3386
Department of Statistics and          Fax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall                    email: [EMAIL PROTECTED]
Iowa City, IA 52242                   WWW:   http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Parameter estimation in nls

2003-11-25 Thread apjaworski

Your starting values for the parameters are not even in the general
ballpark.  Here is what I got for the final fit:

Parameters:
    Estimate Std. Error t value Pr(>|t|)
a  3.806e+07  1.732e+06  21.971   <2e-16 ***
k -3.391e-02  6.903e-02  -0.491    0.628
b  9.000e-01  1.240e-02  72.612   <2e-16 ***

As you can see only b has the same order of magnitude as your starting b
and a is different by 7 orders!

In general, an nls procedure needs starting values that are "close".  The
same is true for any general nonlinear optimization algorithm.  Only in
special circumstances will a nonlinear algorithm converge from any starting
point - one I know of is convexity of the objective function.

I do not think there is any nls program that will find starting values
automatically for an arbitrary nonlinear model.  It is possible for
specific models, but is very much model dependent.  Your model, for
example, can be easily linearized by taking logs of both sides, i.e.

mm <- lm(log(y) ~ x + I(log(x)))

Then doing

aa <- exp(coef(mm)[1])
bb <- exp(coef(mm)[2])
kk <- coef(mm)[3]
mm1 <- nls(y ~ a * x^k * b^x, start=list(a=aa,k=kk,b=bb))

results in convergence with no problems in 7 iterations.
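
Putting the recipe together as one self-contained script (x is assumed to
be the ranks 1:26 matching the 26 frequencies from the original post):

```r
y <- c(37047647,27083970,23944887,22536157,20133224,
       20088720,18774883,18415648,17103717,13580739,12350767,
       8682289,7496355,7248810,7022120,6396495,6262477,6005496,
       5065887,4594147,2853307,2745322,454572,448397,275136,268771)
x <- seq_along(y)

## linearize: log(y) = log(a) + x*log(b) + k*log(x)
mm <- lm(log(y) ~ x + I(log(x)))
aa <- exp(coef(mm)[1])   # intercept       -> a
bb <- exp(coef(mm)[2])   # slope on x      -> b
kk <- coef(mm)[3]        # slope on log(x) -> k

## feed the linearized estimates to nls as starting values
nlsfit <- nls(y ~ a * x^k * b^x, start = list(a = aa, k = kk, b = bb))
summary(nlsfit)
```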

Hope this helps,

Andy

__
Andy Jaworski
518-1-01
Process Laboratory
3M Corporate Research Laboratory
-
E-mail: [EMAIL PROTECTED]
Tel:  (651) 733-6092
Fax:  (651) 736-3122


From:    Dr Andrew Wilson <[EMAIL PROTECTED].uk>
Sent by: [EMAIL PROTECTED]ath.ethz.ch
Date:    11/25/2003 03:09
To:      [EMAIL PROTECTED]
cc:
Subject: [R] Parameter estimation in nls




I am trying to fit a rank-frequency distribution with 3 unknowns (a, b
and k) to a set of data.

This is my data set:

y <- c(37047647,27083970,23944887,22536157,20133224,
20088720,18774883,18415648,17103717,13580739,12350767,
8682289,7496355,7248810,7022120,6396495,6262477,6005496,
5065887,4594147,2853307,2745322,454572,448397,275136,268771)

and this is the fit I'm trying to do:

nlsfit <- nls(y ~ a * x^k * b^x, start=list(a=5,k=1,b=3))

(It's a Yule distribution.)

However, I keep getting:

"Error in nls(y ~ a * x^k * b^x, start = list(a = 5, k = 1, b = 3)) :
singular gradient"

I guess this has something to do with the parameter start values.

I was wondering, is there a fully automated way of estimating parameters
which doesn't need start values close to the final estimates?  I know
other programs do it, so is it possible in R?

Thanks,
Andrew Wilson

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help



Re: [R] plot mean + S.E. over time

2003-11-25 Thread Frank E Harrell Jr
On 25 Nov 2003 15:03:49 +0100
"Jan Wantia" <[EMAIL PROTECTED]> wrote:

> Hi, there!
> 
> I finally became a disciple of 'R', after having lost years of my life 
> handling data with a popular, rather widespread spreadsheet software.
> 
> Now I want to plot the results of many runs of my simulation over time, 
> so that the means +/- Standard error are on the y-axis, and time on the 
> x-axis.
> 
> I have tried 'boxplot', with timesteps as the grouping variable, but did
> 
> not manage to replace quartiles by S.E.
> Then, with 'plot' I do not know how to handle the data of 100 runs for a
> 
> given time to produce the mean and S.E.
> 
> Are there any suggestions? Any help would be appreciated!
> 
> Cheers, Jan
> 
> -- 
> 
> __
> 
> Jan Wantia
> Dept. of Information Technology, University of Zürich
> Andreasstr. 15
> CH 8050 Zürich
> Switzerland
> 

You might look at the xYplot function in the Hmisc package.
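
If Hmisc is not available, a base-graphics sketch of the same plot
(simulated data standing in for the 100 runs; arrows() draws the +/- SE
whiskers):

```r
runs <- matrix(rnorm(100 * 20, mean = rep(1:20, each = 100)), nrow = 100)
m  <- colMeans(runs)                        # mean over the 100 runs per time step
se <- apply(runs, 2, sd) / sqrt(nrow(runs)) # standard error per time step
tt <- seq_len(ncol(runs))
plot(tt, m, type = "b", ylim = range(m - se, m + se),
     xlab = "time", ylab = "mean +/- SE")
arrows(tt, m - se, tt, m + se, angle = 90, code = 3, length = 0.03)
```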
---
Frank E Harrell Jr     Professor and Chair           School of Medicine
                       Department of Biostatistics   Vanderbilt University

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R comparison

2003-11-25 Thread Timur Elzhov
On Sun, Nov 23, 2003 at 11:38:05AM +, Florian Roedhammer wrote:

>I would need information concerning the features of R such as
>static/dynamic typing, method overloading, object orientation,
>
>My question is whether you know any site where such features are
>discussed, or if you could help me directly.
Have you had a look at http://cran.r-project.org/doc/manuals/R-lang.pdf ?

--
WBR,
Timur

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R recursion depth and stack size

2003-11-25 Thread Martin Maechler
> "Pascal" == Pascal A Niklaus <[EMAIL PROTECTED]>
> on Tue, 25 Nov 2003 16:10:56 +0100 writes:

Pascal> Hi all, I am playing around with latin squares, and
Pascal> wrote a recursive function that searches for valid
Pascal> combinations.  Apart from the fact that there are
Pascal> very many, I run into troubles beginning with size
Pascal> 10x10 because the recursion depth becomes too large
Pascal> (max of 10x9-1=89 in this case).

Pascal> Why is this a problem? Isn't there enough space
Pascal> allocated to the stack?  Can this be increased? The
Pascal> memory demand shouldn't be terrible, with only
Pascal> minimal local variables (only set and the function
Pascal> params r,c,t - s is local to a block called only
Pascal> once when a solution is found). Even if variables
Pascal> aren't stored efficiently, a recursion depth of 100
Pascal> shouldn't consume more than a couple of kilobytes.

Pascal> Is this a fundamental misunderstanding of the way R
Pascal> works?

a slight one, at least: The recursion depth is limited by
options(expressions = ...), i.e.  getOption("expressions")  which
is 500 by default.
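
For example:

```r
getOption("expressions")      # 500 by default
options(expressions = 5000)   # allow deeper recursion for this session
```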

We've had similar problem when drawing a somewhat large dendrogram
(of less than 1 end nodes still).

I think we should consider increasing the *default* maximal
recursion depth (from 500 to a few thousands) and
even think about increasing the maximally allowed value for
'expressions' (which is 10).

Martin

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] using pdMAT in the lme function?

2003-11-25 Thread Bill Shipley
Hello.  I want to specify a diagonal structure for the covariance matrix
of random effects in the lme() function.

Here is the call before I specify a diagonal structure:

 

> fit2<-lme(Ln.rgr~I(Ln.nar-log(0.0011)),data=meta.analysis,

+ random=~1+I(Ln.nar-log(0.0011))|STUDY.CODE,na.action=na.omit)

 

and this works fine.  Now, I want to fix the covariance between the
between-groups slopes and intercepts to zero.  I try to do this using
the pdDiag command as follows, but it does not work:

 

> fit2<-lme(Ln.rgr~I(Ln.nar-log(0.0011)),data=meta.analysis,

+ random=pdDiag(diag(2),~1+I(Ln.nar-log(0.0011))|STUDY.CODE),na.action=na.omit)

 

I get back an error saying that I have zero degrees of freedom.
Clearly, the syntax of the command is wrong but I can’t figure out why.
The data set (meta.analysis) is not defined as a groupedData object.

 

Any help is appreciated.

 

Bill Shipley

Associate Editor, Ecology

North American Editor, Annals of Botany

Département de biologie, Université de Sherbrooke,

Sherbrooke (Québec) J1K 2R1 CANADA

[EMAIL PROTECTED]

 
http://callisto.si.usherb.ca:8080/bshipley/

 


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R recursion depth and stack size

2003-11-25 Thread Prof Brian Ripley
Perhaps you could take the trouble to read the error message, which is

   Error in inherits(x, "factor") : evaluation is nested too deeply: 
   infinite recursion?

The evaluation depth is controlled by options(expressions=).  Increasing
it allows your code to run, albeit very slowly.

On Tue, 25 Nov 2003, Pascal A. Niklaus wrote:

> Hi all,
> 
> I am playing around with latin squares, and wrote a recursive function 
> that searches for valid combinations.
> Apart from the fact that there are very many, I run into troubles 
> beginning with size 10x10 because the recursion depth becomes too large 
> (max of 10x9-1=89 in this case).
> 
> Why is this a problem? Isn't there enough space allocated to the stack? 

It is space for expressions.  This is a safety check to avoid infinite
recursion overrunning the C-level stack and crashing R (and thereby losing
all the current work).  There is no portable way to check the C stack size
and usage that we know of.

> Can this be increased? The memory demand shouldn't be terrible, with 
> only minimal local variables (only set and the function params r,c,t - s 
> is local to a block called only once when a solution is found). Even if 
> variables aren't stored efficiently, a recursion depth of 100 shouldn't 
> consume more than a couple of kilobytes.

Why `shouldn't' it?  Have you any idea of the storage requirements of R 
objects?

> Is this a fundamental misunderstanding of the way R works?

It is a misreading of a simple message, for sure.

[...]

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Parameter estimation in nls

2003-11-25 Thread Spencer Graves
	  If the numbers are letter frequencies, I would suggest Poisson 
regression using "glm"; the default link is the logarithm, and that should 
work quite well for you.
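
A hedged sketch of that suggestion, reusing the rank frequencies from the
thread; with the default log link, glm() fits log(mu) = log(a) + k*log(x)
+ x*log(b), i.e. the same Yule form mu = a * x^k * b^x:

```r
y <- c(37047647,27083970,23944887,22536157,20133224,
       20088720,18774883,18415648,17103717,13580739,12350767,
       8682289,7496355,7248810,7022120,6396495,6262477,6005496,
       5065887,4594147,2853307,2745322,454572,448397,275136,268771)
x <- seq_along(y)
fit <- glm(y ~ x + log(x), family = poisson)  # log link is the default
exp(coef(fit)[1])      # estimate of a
coef(fit)["log(x)"]    # estimate of k
exp(coef(fit)["x"])    # estimate of b
```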

	  hope this helps.  spencer graves

###
Very many thanks for your help.
>> What do these numbers represent?

They are letter frequencies arranged in rank order.  (A very big sample
that I got off the web for testing, but my own data - rank frequencies of
various linguistic entities, including letter frequencies - are likely to
be similar.)
Basically, I am testing the goodness of fit of three or four equations:

- the one I posted (Yule's equation)
- Zipf's equation (y = a * b^x, if I remember rightly, but the paper's at
home, so I may be wrong...)
- a parameter-free equation
Regards,
Andrew Wilson

 Since x <- 1:26 and your y's are all positive, your model,
ignoring the error term, is mathematically isomorphic to the following:
x <- 1:26
(fit <- lm(y~x+log(x)))
Call:
lm(formula = y ~ x + log(x))
Coefficients:
(Intercept)x   log(x)
  35802074  -371008 -8222922
 With reasonable starting values, I would expect "a" to converge to
roughly exp(35802074), "k" to (-8222922), and "b" to exp(-371008).  With
values of these magnitudes for "a" and "b", the "nls" optimization is
highly ill conditioned.
 What do these numbers represent?  By using "nls" you are assuming
implicitly the following:
 y = a*x^k*b^x + e, where the e's are independent normal errors
with mean 0 and constant variance.
 Meanwhile, the linear model I fit above assumes a different noise
model:
 log(y) = log(a) + k*log(x) + x*log(b) + e, where these e's are
also independent normal, mean 0, constant variance.
 If you have no preference for one noise model over the other, I
suggest you use the linear model I estimated, declare victory and worry
about something else.  If you insist on estimating the multiplicative
model, you should start by dividing y by some number like 1e6 or 1e7 and
consider reparameterizing the problem if that is not adequate.  Have you
consulted a good book on nonlinear regression?  The two references cited
in "?nls" are both excellent:
  Bates, D.M. and Watts, D.G. (1988) _Nonlinear Regression Analysis
and Its Applications_, Wiley
Bates, D. M. and Chambers, J. M. (1992) _Nonlinear models._
Chapter 10 of _Statistical Models in S_ eds J. M. Chambers and T.
J. Hastie, Wadsworth & Brooks/Cole.
hope this helps.  spencer graves

Dr Andrew Wilson wrote:

I am trying to fit a rank-frequency distribution with 3 unknowns (a, b
and k) to a set of data.
This is my data set:

y <- c(37047647,27083970,23944887,22536157,20133224,
20088720,18774883,18415648,17103717,13580739,12350767,
8682289,7496355,7248810,7022120,6396495,6262477,6005496,
5065887,4594147,2853307,2745322,454572,448397,275136,268771)
and this is the fit I'm trying to do:

nlsfit <- nls(y ~ a * x^k * b^x, start=list(a=5,k=1,b=3))

(It's a Yule distribution.)

However, I keep getting:

"Error in nls(y ~ a * x^k * b^x, start = list(a = 5, k = 1, b = 3)) : 
singular gradient"

I guess this has something to do with the parameter start values.

I was wondering, is there a fully automated way of estimating parameters
which doesn't need start values close to the final estimates?  I know
other programs do it, so is it possible in R?
Thanks,
Andrew Wilson
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 



Re: [R] Parameter estimation in nls

2003-11-25 Thread Spencer Graves
 Since x <- 1:26 and your y's are all positive, your model, 
ignoring the error term, is mathematically isomorphic to the following: 

> x <- 1:26
> (fit <- lm(y~x+log(x)))
Call:
lm(formula = y ~ x + log(x))
Coefficients:
(Intercept)x   log(x) 
  35802074  -371008 -8222922 

 With reasonable starting values, I would expect "a" to converge to 
roughly exp(35802074), "k" to (-8222922), and "b" to exp(-371008).  With 
values of these magnitudes for "a" and "b", the "nls" optimization is 
highly ill conditioned. 

 What do these numbers represent?  By using "nls" you are assuming 
implicitly the following: 

 y = a*x^k*b^x + e, where the e's are independent normal errors 
with mean 0 and constant variance. 

 Meanwhile, the linear model I fit above assumes a different noise 
model: 

 log(y) = log(a) + k*log(x) + x*log(b) + e, where these e's are 
also independent normal, mean 0, constant variance. 

 If you have no preference for one noise model over the other, I 
suggest you use the linear model I estimated, declare victory and worry 
about something else.  If you insist on estimating the multiplicative 
model, you should start by dividing y by some number like 1e6 or 1e7 and 
consider reparameterizing the problem if that is not adequate.  Have you 
consulted a good book on nonlinear regression?  The two references cited 
in "?nls" are both excellent: 

  Bates, D.M. and Watts, D.G. (1988) _Nonlinear Regression Analysis
and Its Applications_, Wiley
Bates, D. M. and Chambers, J. M. (1992) _Nonlinear models._
Chapter 10 of _Statistical Models in S_ eds J. M. Chambers and T.
J. Hastie, Wadsworth & Brooks/Cole.
hope this helps.  spencer graves

Dr Andrew Wilson wrote:

I am trying to fit a rank-frequency distribution with 3 unknowns (a, b
and k) to a set of data.
This is my data set:

y <- c(37047647,27083970,23944887,22536157,20133224,
20088720,18774883,18415648,17103717,13580739,12350767,
8682289,7496355,7248810,7022120,6396495,6262477,6005496,
5065887,4594147,2853307,2745322,454572,448397,275136,268771)
and this is the fit I'm trying to do:

nlsfit <- nls(y ~ a * x^k * b^x, start=list(a=5,k=1,b=3))

(It's a Yule distribution.)

However, I keep getting:

"Error in nls(y ~ a * x^k * b^x, start = list(a = 5, k = 1, b = 3)) : 
singular gradient"

I guess this has something to do with the parameter start values.

I was wondering, is there a fully automated way of estimating parameters
which doesn't need start values close to the final estimates?  I know
other programs do it, so is it possible in R?
Thanks,
Andrew Wilson
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 



Re: [R] best editor for .R files

2003-11-25 Thread Arne Henningsen
Hi Angel and * !

I also use kate and in my opinion it is very convenient. I don't know which 
features you don't like, but if you mean some syntax highlighting "features", 
you might try my R syntax highlighting (XML) file for kate. It is based on 
Egon Willighagen's one and contains a few bug fixes and several extensions. 
You can download it from my homepage (http://www.uni-kiel.de/agrarpol/
ahenningsen/, the link is at the bottom of the page). 
This new file is NOT thoroughly tested so far. After testing it for some time, 
I will send it to CRAN and to kate.kde.org as an update of Egon's file, since 
Egon stopped maintaining it. 

Best wishes,
Arne


On Thursday 20 November 2003 22:27, Angel wrote:
> Which is the best editor for .R files?
>
> I currently use kate on my linux as it has R highlighting and allows me to
> split the window into two: in one I edit the .R file and in the other I
> have a shell so I run R and can easily  copy and paste the code. There are
> some features that I don't like and I am having a look on some
> alternatives. I've heard wonders of emacs with ess but I am a little bit
> frightened of the steep learning curve.
>
> What do the R experts use or would recommend using?
> Both linux and/or windows alternatives are welcomed.
> I guess it would much depend on the particular needs/preferences of each
> user but I would like to know which are the most commonly used editors.
> Thanks,
> Angel
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help

-- 
Arne Henningsen
Department of Agricultural Economics
University of Kiel
Olshausenstr. 40
D-24098 Kiel (Germany)
Tel: +49-431-880 4445
Fax: +49-431-880 1397
[EMAIL PROTECTED]
http://www.uni-kiel.de/agrarpol/ahenningsen/

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] plot mean + S.E. over time

2003-11-25 Thread Emmanuel Paradis
See the function plotCI() in package gregmisc on CRAN.
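
A hedged sketch of the call (plotCI() later moved to the gplots package;
argument names as in its help page, and simulated data stands in for the
real runs):

```r
library(gplots)   # successor of gregmisc; plotCI() lives here in later releases
runs <- matrix(rnorm(100 * 20, mean = rep(1:20, each = 100)), nrow = 100)
m  <- colMeans(runs)
se <- apply(runs, 2, sd) / sqrt(nrow(runs))
plotCI(x = seq_along(m), y = m, uiw = se,   # uiw: half-width of the error bar
       xlab = "time", ylab = "mean +/- SE")
```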

EP

At 15:03 25/11/2003 +0100, vous avez écrit:
Hi, there!

I finally became a disciple of 'R', after having lost years of my life 
handling data with a popular, rather widespread spreadsheet software.

Now I want to plot the results of many runs of my simulation over time, so 
that the means +/- Standard error are on the y-axis, and time on the x-axis.

I have tried 'boxplot', with timesteps as the grouping variable, but did 
not manage to replace quartiles by S.E.
Then, with 'plot' I do not know how to handle the data of 100 runs for a 
given time to produce the mean and S.E.

Are there any suggestions? Any help would be appreciated!

Cheers, Jan

--

__

Jan Wantia
Dept. of Information Technology, University of Zürich
Andreasstr. 15
CH 8050 Zürich
Switzerland
Tel.: +41 (0) 1 635 4315
Fax: +41 (0) 1 635 45 07
email: [EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Persistent state of R

2003-11-25 Thread Warnes, Gregory R

Starting up R and loading libraries can be very time consuming.  For my
RSOAP system (http://www.analytics.washington.edu/Zope/projects/RSOAP/)  I
took the step of pre-starting the R process, including the loading of some
libraries, and then handing work of to the pre-started process.  You should
be able to use RSOAP from perl, and it would be a simple change to have it
add the bioconductor packages to the pre-loaded set.

Alternatively, I suppose that one could force R to dump core and then start
it from the core image...

-G

-Original Message-
From: michael watson (IAH-C)
To: '[EMAIL PROTECTED]'
Sent: 11/25/03 8:54 AM
Subject: [R] Persistent state of R

Hi

I am using R as a back-end to some CGI scripts, written in Perl.  My
platform is Suse Linux 8.2, Apache 1.3.7.  So the CGI script takes some
form parameters, opens a pipe to an R process, loads up some
Bioconductor libraries, executes some R commands and takes the ouput and
creates a web page.  It is all very neat and works well.

I am trying to make my cgi scripts quicker and it turns out that the
bottle-neck is the loading of the libraries into R - for example loading
up marrayPlots into R takes 10-20 seconds, which although not long, is
long enough for users to imagine it is not working and start clicking
reload

So I just wondered if anyone had a neat solution whereby I could somehow
have the required libraries permanently loaded into R - perhaps I need a
persistent R process with the libraries in memory that I can pipe
commands to?  Is this possible?

Thanks
Mick

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help




RE: [R] Windows R 1.8.0 hangs when Mem Usage >1.8GB

2003-11-25 Thread Liaw, Andy
With a custom compiled kernel, I've run R processes that used more than 5GB
of RAM on a Linux box with 8GB RAM and dual Xeons.  So it seems to work on
32-bit Linux with big memory kernel.

Andy


> From: Duncan Murdoch [mailto:[EMAIL PROTECTED] 
[snip] 
> Normally the maximum memory allowed for any process in 
> Windows is 2 GB.  It's possible to raise that to 3 GB but R 
> 1.8 doesn't know how, so that's an absolute upper limit.  
> Version 1.9 may be able to go up to 3 GB, but beyond that 
> you'll probably need a 64 bit processor:  as far as I know 
> all the 32 bit OS's limit each process to 2 or 3 GB, because 
> they reserve 1 or 2 GB for themselves.
>
[snip]
 
> Duncan Murdoch
> 
> __
> [EMAIL PROTECTED] mailing list 
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] R recursion depth and stack size

2003-11-25 Thread Pascal A. Niklaus
Hi all,

I am playing around with latin squares, and wrote a recursive function 
that searches for valid combinations.
Apart from the fact that there are very many, I run into troubles 
beginning with size 10x10 because the recursion depth becomes too large 
(max of 10x9-1=89 in this case).

Why is this a problem? Isn't there enough space allocated to the stack? 
Can this be increased? The memory demand shouldn't be terrible, with 
only minimal local variables (only set and the function params r,c,t - s 
is local to a block called only once when a solution is found). Even if 
variables aren't stored efficiently, a recursion depth of 100 shouldn't 
consume more than a couple of kilobytes.

Is this a fundamental misunderstanding of the way R works?

Pascal

BTW: Is there a way to pass variables "by reference" in function calls?

--

The function stripped-down to the essential looks like this:

latin.square <- function(t = 4)
{
    latinCheck <- function(r, c, t)
    {
        set <- setdiff(LETTERS[1:t], c(m[r,], m[,c]))
        for (i in set)
        {
            m[r,c] <<- i
            if (c ...      ## [the recursion step is truncated in the list
                           ##  archive -- everything after the '<' on this
                           ##  line was apparently eaten as an HTML tag]
        }
    }
    latinSolutions <<- character(0)
    fullset <<- LETTERS[1:t]
    m <<- matrix(nrow = t, ncol = t)
    m[1,] <<- LETTERS[1:t]
    latinCheck(2, 1, t)
}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] 1.8.1. RMySQL Win2K Writing-Table Problem?

2003-11-25 Thread David James
Hi,

I think this is caused by changes to subscripting for data.frames 
made in 1.8.0 (grep "Subscripting" in the 1.8.0 NEWS file).  
I'll fix this in an RMySQL update I'm currently working on
(and soon to be available).

Regards,
--
David

Christian Schulz wrote:
> Hi,
> 
> I'm getting the following error and don't know whether I'm
> doing something wrong.
> 
> >mysqlWriteTable(con,"model1",model1,overwrite=T)
> Error in "[.data.frame"(value, from:to, drop = FALSE) :
> undefined columns selected
> In addition: Warning message:
> drop argument will be ignored in: "[.data.frame"(value, from:to, drop =
> FALSE)
> 
> Exactly the same code, database and data
> worked very nicely with 1.7.1!
> 
> Many thanks,
> Christian
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help

-- 
David A. James
Statistics Research, Room 2C-253    Phone: (908) 582-3082
Bell Labs, Lucent Technologies      Fax:   (908) 582-3340
Murray Hill, NJ 09794-0636

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] RMySQL valid field names

2003-11-25 Thread David James
Luis,

Thanks for your thoughtful comments.  Indeed you've uncovered
a bug/problem in that there's no way for users to control the
"allow.keywords=" argument in calls to dbWriteTable() -- this
needs to be fixed.  Regarding the default value for allow.keywords,
I'm not sure it is wise to set it to TRUE by default (despite the 
fact that MySQL does not explicitly prohibit keywords as column names)
since problems can arise when MySQL runs in ANSI mode (see section
6.1.7 in the manual).  Perhaps the default value should depend
on whether MySQL is running in ANSI mode or not, I'll check whether
that's easy to determine from R at runtime.

Again, thanks for reporting the problem.
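
A hedged sketch of the behaviour under discussion, using the DBI/RMySQL
API of the era (the "test" database name is an assumption):

```r
library(RMySQL)
con <- dbConnect(MySQL(), dbname = "test")

## reserved words get mangled when keywords are disallowed ...
make.db.names(con, c("DateTime", "Open", "Close"), allow.keywords = FALSE)

## ... and pass through unchanged when they are allowed
make.db.names(con, c("DateTime", "Open", "Close"), allow.keywords = TRUE)

dbDisconnect(con)
```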

--
David

Luis Torgo wrote:
> I'm having some problems with valid field names when using RMySQL to interface 
> R (version 1.7.0, under RedHat9.0), to MySQL (4.1.0-alpha). I think I've 
> spotted the problem and a solution (which is working for me), but I wanted to 
> share this with you as I may be missing something.
> (Note: I'm aware that this is an old R version, but I've checked the code of 
> the lastest version of the RMySQL package at CRAN, and my comments still 
> apply).
> 
> I have a data frame which has among others three columns with names 
> ('DateTime', 'Open' and 'Close'). When I use dbWriteTable to dump the data 
> frame into a MySQL database everything works fine except the names of these 
> three columns which are slightly different (e.g. 'DateTime_9'). 
> 
> This only occurs with these 3 columns because their names are reserved words 
> of MySQL. The change of names is occurring in the function mysqlWriteTable, 
> namely in the call:
> ...
> names(field.types) <- make.db.names(con,names(field.types),allow.keywords=F)
> ...
> If I change the parameter allow.keywords into T, everything works fine for me. 
> Given that MySQL allows all characters to be part of field names (section 
> 6.1.2 of MySQL reference manual), I do not understand what is the reason for 
> calling make.db.names with this value of parameter allow.keywords.
> 
> In resume, my question is: Is there any reason for this that I am missing, or 
> the change I've done is pretty safe and could possibly be done in future 
> versions of the package?
> 
> Thank you for any help.
> Luis
> 
> -- 
> Luis Torgo
> FEP/LIACC, University of Porto   Phone : (+351) 22 607 88 30
> Machine Learning Group   Fax   : (+351) 22 600 36 54
> R. Campo Alegre, 823 email : [EMAIL PROTECTED]
> 4150 PORTO   -  PORTUGAL WWW   : http://www.liacc.up.pt/~ltorgo
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help

-- 
David A. James
Statistics Research, Room 2C-253    Phone: (908) 582-3082
Bell Labs, Lucent Technologies      Fax:   (908) 582-3340
Murray Hill, NJ 09794-0636

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] plot map of areas

2003-11-25 Thread Roger Koenker
or tripack perhaps...


url:   www.econ.uiuc.edu/~roger/my.html   Roger Koenker
email: [EMAIL PROTECTED]                  Department of Economics
vox:   217-333-4558                       University of Illinois
fax:   217-244-6678                       Champaign, IL 61820

On Fri, 21 Nov 2003, Liaw, Andy wrote:

> Is the contributed package `deldir' on CRAN what you're looking for?
>
> HTH,
> andy
>
> > From: Pascal A. Niklaus [mailto:[EMAIL PROTECTED]
> > Hi all,
> >
> > Given a number of points (x,y) in a plane, I'd like to plot a map of
> > polygons, so that
> >
> > 1) each polygon contains exactly one point
> > 2) the polygon defines the area for which this specific point is
> > closer than any other point.
> >
> > It's a bit like a map of areas "influenced" by that point, and it's
> > obviously a matter of intersecting the perpendicular
> > bisectors between
> > adjacent points.
> >
> > I believe this type of map has a name, but I can't remember what it's
> > called.
> >
> > Is there a function somewhere in an R package that may do this?
> >
> > Thanks for your help
> >
> > Pascal
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] plot map of areas

2003-11-25 Thread Liaw, Andy
Is the contributed package `deldir' on CRAN what you're looking for?

HTH,
andy

> From: Pascal A. Niklaus [mailto:[EMAIL PROTECTED] 
> Hi all,
> 
> Given a number of points (x,y) in a plane, I'd like to plot a map of 
> polygons, so that
> 
> 1) each polygon contains exactly one point
> 2) the polygon defines the area for which this specific point is 
> closer than any other point.
> 
> It's a bit like a map of areas "influenced" by that point, and it's 
> obviously a matter of intersecting the perpendicular 
> bisectors between 
> adjacent points.
> 
> I believe this type of map has a name, but I can't remember what it's 
> called.
> 
> Is there a function somewhere in an R package that may do this?
> 
> Thanks for your help
> 
> Pascal

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
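
The map Pascal describes is a Voronoi diagram (also called a Dirichlet
tessellation), which the `deldir' and `tripack' packages compute exactly.
For intuition only, a brute-force sketch in base R (with made-up seed
points) colours a grid of locations by their nearest seed:

```r
# Voronoi-style map by brute force: classify every grid point by its
# nearest seed point (hypothetical example data).
set.seed(1)
seeds <- cbind(x = runif(5), y = runif(5))

# grid of candidate locations covering the unit square
g <- expand.grid(x = seq(0, 1, length = 50), y = seq(0, 1, length = 50))

# index of the nearest seed for every grid point
nearest <- apply(g, 1, function(p)
  which.min((seeds[, "x"] - p["x"])^2 + (seeds[, "y"] - p["y"])^2))

# regions as coloured squares, generating points on top
plot(g, col = nearest + 1, pch = 15, cex = 0.6)
points(seeds, pch = 19)
```

Each coloured region is the set of locations closest to one seed; the
deldir package computes the exact polygon boundaries instead of this
pixel approximation.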


RE: [R] RandomForest & memory demand

2003-11-25 Thread Liaw, Andy
> From: Christian Schulz
> 
> Hi,
> 
> is it correct that I need ~2GB RAM to be able
> to work with the default setting 
> ntree=500 and a data.frame with 100,000 rows 
> and at most 10 columns for training and testing?

If you have the test set, and don't need the forest for predicting other
data, you can give both training data and test data to randomForest() at the
same time (if that fits in memory).  This way there will only be one tree
kept in memory.  E.g., you would do something like:

my.result <- randomForest(x, y, xtest)

Then my.result$test will contain a list of results on the test set.  If you
also give ytest, there will be a bit more output.

If you follow Torsten's suggestion, you can use the combine() function to
merge the five forests into one.
 
> P.S.
> Is it possible to approximately calculate the
> memory demand for different settings of RF?

The current implementation of the code requires (assuming classification, no
test data, and proximity=FALSE) approximately:

At R level:
- One copy of the training data.
- 6*(2n+1)*ntree integers for storing the forest.

At C level (dynamically allocated):
- (2n + 37)*nclass + 9*n + p*(2+nclass) doubles.
- 5 + (3*p + 22)*n + 5*(p + nclass) integers.

(nclass is the number of classes, n the number of cases in training data, p
the number of variables.)

HTH,
Andy
 
> Many thanks & regards,
> Christian

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
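
Andy's byte counts above can be wrapped in a small helper for a rough
estimate. This is a sketch assuming 4-byte integers and 8-byte doubles;
the function name is made up, and the copy of the training data itself is
not included:

```r
# Rough memory estimate (in MB) for randomForest classification,
# following Andy Liaw's formulas above.  Assumes 4-byte integers and
# 8-byte doubles; excludes the copy of the training data.
rf_mem_estimate <- function(n, p, nclass, ntree) {
  forest_ints <- 6 * (2 * n + 1) * ntree                 # R level: the forest
  c_doubles   <- (2 * n + 37) * nclass + 9 * n + p * (2 + nclass)
  c_ints      <- 5 + (3 * p + 22) * n + 5 * (p + nclass)
  bytes <- 4 * (forest_ints + c_ints) + 8 * c_doubles
  bytes / 2^20                                           # megabytes
}

# e.g. 100,000 rows, 10 columns, 2 classes, 500 trees
rf_mem_estimate(n = 1e5, p = 10, nclass = 2, ntree = 500)
```

For the 100,000-row example in this thread the estimate comes out above
2GB, consistent with Christian's observation.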


[R] plot mean + S.E. over time

2003-11-25 Thread Jan Wantia
Hi, there!

I finally became a disciple of 'R', after having lost years of my life 
handling data with a popular, rather widespread spreadsheet software.

Now I want to plot the results of many runs of my simulation over time, 
so that the means +/- Standard error are on the y-axis, and time on the 
x-axis.

I have tried 'boxplot', with time steps as the grouping variable, but did 
not manage to replace the quartiles with the standard error.
With 'plot', I do not know how to handle the data of 100 runs at a 
given time point to produce the mean and S.E.

Are there any suggestions? Any help would be appreciated!

Cheers, Jan

--

__

Jan Wantia
Dept. of Information Technology, University of Zürich
Andreasstr. 15
CH 8050 Zürich
Switzerland
Tel.: +41 (0) 1 635 4315
Fax: +41 (0) 1 635 45 07
email: [EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
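
One base-R approach to Jan's question: compute the per-time mean and
standard error with tapply(), then draw flat-headed arrows() as error
bars. A sketch on simulated data (100 runs at 20 time steps, made up
here):

```r
# Mean +/- standard error over time for many simulation runs.
set.seed(42)
dat <- data.frame(time  = rep(1:20, each = 100),
                  value = rnorm(2000, mean = rep(sin(1:20 / 3), each = 100)))

# per-time mean and standard error
means <- tapply(dat$value, dat$time, mean)
ses   <- tapply(dat$value, dat$time,
                function(v) sd(v) / sqrt(length(v)))

t <- as.numeric(names(means))
plot(t, means, type = "b", xlab = "time", ylab = "mean +/- SE",
     ylim = range(means - ses, means + ses))
# error bars: vertical arrows with flat (90 degree) heads at both ends
arrows(t, means - ses, t, means + ses, angle = 90, code = 3, length = 0.05)
```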


[R] Persistent state of R

2003-11-25 Thread michael watson (IAH-C)
Hi

I am using R as a back-end to some CGI scripts, written in Perl.  My platform is Suse 
Linux 8.2, Apache 1.3.7.  So the CGI script takes some form parameters, opens a pipe 
to an R process, loads up some Bioconductor libraries, executes some R commands and 
takes the output and creates a web page.  It is all very neat and works well.

I am trying to make my CGI scripts quicker, and it turns out that the bottleneck is 
the loading of the libraries into R - for example, loading marrayPlots into R takes 
10-20 seconds, which although not long, is long enough for users to imagine it is not 
working and start clicking reload.

So I just wondered if anyone had a neat solution whereby I could somehow have the 
required libraries permanently loaded into R - perhaps I need a persistent R process 
with the libraries in memory that I can pipe commands to?  Is this possible?

Thanks
Mick

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Something broken with update?

2003-11-25 Thread Roger Bivand
On Tue, 25 Nov 2003, [EMAIL PROTECTED] wrote:

I think you need to tell us specifically which platform you are using 
(output of version), and whether you installed R from source or from a 
binary distribution. Most likely you are on a Unix or Linux platform and 
installed a binary distribution, but are now updating *source* recommended 
packages. 


> Updating my 1.8.0 R installation (>update.packages() ) I obtain the following
> (SORRY FOR THE LENGTH OF THE LOG BUT IT HELPS!!!):
> 
> 
> downloaded 135Kb
> 
> KernSmooth :
>  Version 2.22-11 in /usr/lib/R/library 
>  Version 2.22-12 on CRAN
> Update (y/N)?  y
> mgcv :
>  Version 0.9-3.1 in /usr/lib/R/library 
>  Version 0.9-6 on CRAN
> Update (y/N)?  y
> trying URL `http://cran.r-project.org/src/contrib/KernSmooth_2.22-12.tar.gz'
> Content type `application/x-tar' length 24752 bytes
> opened URL
> .. .. 
> downloaded 24Kb
> 
> trying URL `http://cran.r-project.org/src/contrib/mgcv_0.9-6.tar.gz'
> Content type `application/x-tar' length 181022 bytes
> opened URL
> .. .. .. .. ..
> .. .. .. .. ..
> .. .. .. .. ..
> .. .. ..
> downloaded 176Kb
> 
> * Installing *source* package 'KernSmooth' ...
> ** libs
> g77 -mieee-fp  -fPIC  -g -O2 -c blkest.f -o blkest.o
> g77 -mieee-fp  -fPIC  -g -O2 -c cp.f -o cp.o
> g77 -mieee-fp  -fPIC  -g -O2 -c dgedi.f -o dgedi.o
> g77 -mieee-fp  -fPIC  -g -O2 -c dgefa.f -o dgefa.o
> g77 -mieee-fp  -fPIC  -g -O2 -c dgesl.f -o dgesl.o
> g77 -mieee-fp  -fPIC  -g -O2 -c linbin2D.f -o linbin2D.o
> g77 -mieee-fp  -fPIC  -g -O2 -c linbin.f -o linbin.o
> g77 -mieee-fp  -fPIC  -g -O2 -c locpoly.f -o locpoly.o
> g77 -mieee-fp  -fPIC  -g -O2 -c rlbin.f -o rlbin.o
> g77 -mieee-fp  -fPIC  -g -O2 -c sdiag.f -o sdiag.o
> g77 -mieee-fp  -fPIC  -g -O2 -c sstdiag.f -o sstdiag.o
> gcc -shared  -o KernSmooth.so blkest.o cp.o dgedi.o dgefa.o dgesl.o linbin2D.o
> linbin.o locpoly.o rlbin.o sdiag.o sstdiag.o -lf77blas -latlas
> -L/usr/lib/gcc-lib/i486-linux/3.3.2 -L/usr/lib/gcc-lib/i486-linux/3.3.2/../../..
> -lfrtbegin -lg2c-pic -lm -lgcc_s -L/usr/lib/R/bin -lR
> /usr/bin/ld: cannot find -lf77blas
> collect2: ld returned 1 exit status
> make: *** [KernSmooth.so] Error 1
> ERROR: compilation failed for package 'KernSmooth'
> ** Removing '/usr/lib/R/library/KernSmooth'
> ** Restoring previous '/usr/lib/R/library/KernSmooth'
> * Installing *source* package 'mgcv' ...
> ** libs
> gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c gcv.c
> -o gcv.o
> gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
> magic.c -o magic.o
> gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c mat.c
> -o mat.o
> gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
> matrix.c -o matrix.o
> gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
> mgcv.c -o mgcv.o
> gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c qp.c
> -o qp.o
> gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
> tprs.c -o tprs.o
> gcc -shared  -o mgcv.so gcv.o magic.o mat.o matrix.o mgcv.o qp.o tprs.o
> -L/usr/lib/R/bin -lRlapack -lf77blas -latlas -L/usr/lib/gcc-lib/i486-linux/3.3.2
> -L/usr/lib/gcc-lib/i486-linux/3.3.2/../../.. -lfrtbegin -lg2c-pic -lm -lgcc_s 
> -L/usr/lib/R/bin -lR
> /usr/bin/ld: cannot find -lf77blas
> collect2: ld returned 1 exit status
> make: *** [mgcv.so] Error 1
> ERROR: compilation failed for package 'mgcv'
> ** Removing '/usr/lib/R/library/mgcv'
> ** Restoring previous '/usr/lib/R/library/mgcv'
> 
> Delete downloaded files (y/N)? y
> 
> Warning messages: 
> 1: Installation of package KernSmooth had non-zero exit status in:
> install.packages(update[, "Package"], instlib, contriburl = contriburl,  
> 2: Installation of package mgcv had non-zero exit status in:
> install.packages(update[, "Package"], instlib, contriburl = contriburl,  
> -
> 
> What's going wrong with these two packages?
> 
> Please help
> 
> Ciao
> Vittorio
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Breiviksveien 40, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 93 93
e-mail: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Something broken with update?

2003-11-25 Thread [EMAIL PROTECTED]
Updating my 1.8.0 R installation (>update.packages() ) I obtain the following
(SORRY FOR THE LENGTH OF THE LOG BUT IT HELPS!!!):


downloaded 135Kb

KernSmooth :
 Version 2.22-11 in /usr/lib/R/library 
 Version 2.22-12 on CRAN
Update (y/N)?  y
mgcv :
 Version 0.9-3.1 in /usr/lib/R/library 
 Version 0.9-6 on CRAN
Update (y/N)?  y
trying URL `http://cran.r-project.org/src/contrib/KernSmooth_2.22-12.tar.gz'
Content type `application/x-tar' length 24752 bytes
opened URL
.. .. 
downloaded 24Kb

trying URL `http://cran.r-project.org/src/contrib/mgcv_0.9-6.tar.gz'
Content type `application/x-tar' length 181022 bytes
opened URL
.. .. .. .. ..
.. .. .. .. ..
.. .. .. .. ..
.. .. ..
downloaded 176Kb

* Installing *source* package 'KernSmooth' ...
** libs
g77 -mieee-fp  -fPIC  -g -O2 -c blkest.f -o blkest.o
g77 -mieee-fp  -fPIC  -g -O2 -c cp.f -o cp.o
g77 -mieee-fp  -fPIC  -g -O2 -c dgedi.f -o dgedi.o
g77 -mieee-fp  -fPIC  -g -O2 -c dgefa.f -o dgefa.o
g77 -mieee-fp  -fPIC  -g -O2 -c dgesl.f -o dgesl.o
g77 -mieee-fp  -fPIC  -g -O2 -c linbin2D.f -o linbin2D.o
g77 -mieee-fp  -fPIC  -g -O2 -c linbin.f -o linbin.o
g77 -mieee-fp  -fPIC  -g -O2 -c locpoly.f -o locpoly.o
g77 -mieee-fp  -fPIC  -g -O2 -c rlbin.f -o rlbin.o
g77 -mieee-fp  -fPIC  -g -O2 -c sdiag.f -o sdiag.o
g77 -mieee-fp  -fPIC  -g -O2 -c sstdiag.f -o sstdiag.o
gcc -shared  -o KernSmooth.so blkest.o cp.o dgedi.o dgefa.o dgesl.o linbin2D.o
linbin.o locpoly.o rlbin.o sdiag.o sstdiag.o -lf77blas -latlas
-L/usr/lib/gcc-lib/i486-linux/3.3.2 -L/usr/lib/gcc-lib/i486-linux/3.3.2/../../..
-lfrtbegin -lg2c-pic -lm -lgcc_s -L/usr/lib/R/bin -lR
/usr/bin/ld: cannot find -lf77blas
collect2: ld returned 1 exit status
make: *** [KernSmooth.so] Error 1
ERROR: compilation failed for package 'KernSmooth'
** Removing '/usr/lib/R/library/KernSmooth'
** Restoring previous '/usr/lib/R/library/KernSmooth'
* Installing *source* package 'mgcv' ...
** libs
gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c gcv.c
-o gcv.o
gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
magic.c -o magic.o
gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c mat.c
-o mat.o
gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
matrix.c -o matrix.o
gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
mgcv.c -o mgcv.o
gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c qp.c
-o qp.o
gcc -I/usr/lib/R/include   -D__NO_MATH_INLINES -mieee-fp  -fPIC  -g -O2 -c
tprs.c -o tprs.o
gcc -shared  -o mgcv.so gcv.o magic.o mat.o matrix.o mgcv.o qp.o tprs.o
-L/usr/lib/R/bin -lRlapack -lf77blas -latlas -L/usr/lib/gcc-lib/i486-linux/3.3.2
-L/usr/lib/gcc-lib/i486-linux/3.3.2/../../.. -lfrtbegin -lg2c-pic -lm -lgcc_s 
-L/usr/lib/R/bin -lR
/usr/bin/ld: cannot find -lf77blas
collect2: ld returned 1 exit status
make: *** [mgcv.so] Error 1
ERROR: compilation failed for package 'mgcv'
** Removing '/usr/lib/R/library/mgcv'
** Restoring previous '/usr/lib/R/library/mgcv'

Delete downloaded files (y/N)? y

Warning messages: 
1: Installation of package KernSmooth had non-zero exit status in:
install.packages(update[, "Package"], instlib, contriburl = contriburl,  
2: Installation of package mgcv had non-zero exit status in:
install.packages(update[, "Package"], instlib, contriburl = contriburl,  
-

What's going wrong with these two packages?

Please help

Ciao
Vittorio

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Coxian-Distribution

2003-11-25 Thread Tobias Lüdiger
Hello R-Users!

Perhaps you can help me!

I am a newbie at R and I am looking for a way to implement the 
Coxian distribution.

http://www.owlnet.rice.edu/~elec428/handouts/Coxian.pdf


Thanks in advance

Tobias





__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] RandomForest & memory demand

2003-11-25 Thread Torsten Hothorn

> Hi,
>
> is it correct that I need ~2GB RAM to be able
> to work with the default setting
> ntree=500 and a data.frame with 100,000 rows
> and at most 10 columns for training and testing?
>

No. You can parallelize the computations: perform 5 runs of RF with `ntree
= 100' (or fewer) and save the resulting RF objects to files.

For prediction, calculate the prediction of each of the 5 objects and
aggregate them. This requires a few simple lines of code but will help
circumvent RAM restrictions.

Best,

Torsten

> P.S.
> Is it possible to approximately calculate the
> memory demand for different settings of RF?
>
> Many thanks & regards,
> Christian
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Lambert's W function

2003-11-25 Thread Robin Hankin
Hello List

does anyone have an R function for the Lambert W function?  I need 
complex arguments.

[the Lambert W function W(z) satisfies

W(z)*exp(W(z)) = z

but I couldn't even figure out how to use uniroot() for complex z]





--
Robin Hankin
Uncertainty Analyst
Southampton Oceanography Centre
SO14 3ZH
tel +44(0)23-8059-7743
[EMAIL PROTECTED] (edit in obvious way; spam precaution)
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
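
uniroot() is real-valued only, but Newton's method for f(w) = w*exp(w) - z
works directly on complex numbers since exp() accepts complex arguments.
A minimal sketch (principal branch only, crude starting value, no
convergence guarantees near the branch point w = -1):

```r
# Newton iteration for the Lambert W function, accepting complex z.
# Solves w*exp(w) = z; a sketch for the principal branch only.
lambertW <- function(z, tol = 1e-12, maxit = 50) {
  w <- ifelse(Mod(z) < 1, z, log(z))          # crude starting value
  for (i in seq_len(maxit)) {
    f <- w * exp(w) - z
    w <- w - f / (exp(w) * (1 + w))           # Newton step
    if (all(Mod(w * exp(w) - z) < tol)) break
  }
  w
}

z <- 1 + 2i
w <- lambertW(z)
w * exp(w)    # should reproduce z
```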


[R] RandomForest & memory demand

2003-11-25 Thread Christian Schulz
Hi,

is it correct that I need ~2GB RAM to be able
to work with the default setting 
ntree=500 and a data.frame with 100,000 rows 
and at most 10 columns for training and testing?

P.S.
Is it possible to approximately calculate the
memory demand for different settings of RF?

Many thanks & regards,
Christian

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] 1.8.1. RMySQL Win2K Writing-Table Problem?

2003-11-25 Thread Christian Schulz
Hi,

I am getting the following error and don't know whether I am
doing something wrong.

>mysqlWriteTable(con,"model1",model1,overwrite=T)
Error in "[.data.frame"(value, from:to, drop = FALSE) :
undefined columns selected
In addition: Warning message:
drop argument will be ignored in: "[.data.frame"(value, from:to, drop =
FALSE)

Exactly the same code, database and data
works with 1.7.1 very nice!

Many thanks,
Christian

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] randomly permuting rows or col's of a matrix

2003-11-25 Thread Barry Rowlingson
Pascal A. Niklaus wrote:
Is there a shortcut (compared to "manually" swapping row vectors) for 
randomly permuting the rows of a matrix?

"sample" does not act on the row vectors as a whole.

 True, but sample(nrow(foo)) gives you a permutation of 1:nrow(foo) 
that you can subscript with:

 foo[sample(nrow(foo)),]

or

 foo[,sample(ncol(foo))]

Baz

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
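
Barry's one-liner in action, as a self-contained sketch:

```r
# Permuting the rows of a matrix in one step with sample().
set.seed(7)
foo <- matrix(1:12, nrow = 4)     # 4 x 3 example matrix

perm <- sample(nrow(foo))         # random permutation of 1:4
shuffled <- foo[perm, ]           # rows intact, order randomised
```

The same idea with `foo[, sample(ncol(foo))]` permutes the columns.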


[R] randomly permuting rows or col's of a matrix

2003-11-25 Thread Pascal A. Niklaus
Is there a shortcut (compared to "manually" swapping row vectors) for 
randomly permuting the rows of a matrix?

"sample" does not act on the row vectors as a whole.

Thanks

Pascal

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


AW: [R] ISOdate() and strptime()

2003-11-25 Thread RINNER Heinrich
Thanks for this clarification.

I have learned in the meantime that it is necessary to be very careful when
using all these POSIX things.
As another example, here is something that made me scratch my head just
yesterday:

When I create a sequence of days that starts before and ends in
daylight saving time, I seem to lose a day:

> seq(from = strptime("20030329", format="%Y%m%d"), to= strptime("20030402",
format="%Y%m%d"), by="DSTday")
[1] "2003-03-29 Westeuropäische Normalzeit" "2003-03-30 Westeuropäische
Normalzeit"
[3] "2003-03-31 Westeuropäische Sommerzeit" "2003-04-01 Westeuropäische
Sommerzeit"
> seq(from = strptime("20030329", format="%Y%m%d"), to= strptime("20030402",
format="%Y%m%d"), by="day")
[1] "2003-03-29 00:00:00 Westeuropäische Normalzeit" "2003-03-30 00:00:00
Westeuropäische Normalzeit"
[3] "2003-03-31 01:00:00 Westeuropäische Sommerzeit" "2003-04-01 01:00:00
Westeuropäische Sommerzeit"

Again, my expectations might be wrong here, and there will be good reasons
why I get this result (my OS again?).

But considering all these subtle issues I have encountered so far,
personally I can understand why some people suggested that it may be easier
to use the chron or date package (especially if you are a beginner, have no
prior experience with all these things, and don't want to worry about time
zones, DST, or the pitfalls of your OS).
At least it was useful for me to cross-check the results I obtained with
POSIX with the results using chron.

The POSIX classes are a great thing, but as they are much more powerful,
they are also more complex and have more things to watch out for and more
"traps" to fall in (for me at least ;-)).

-Heinrich.

> -Ursprüngliche Nachricht-
> Von: Prof Brian Ripley [mailto:[EMAIL PROTECTED] 
> Gesendet: Samstag, 22. November 2003 20:56
> An: RINNER Heinrich
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Betreff: Re: [R] ISOdate() and strptime()
> 
> 
> Confirmation that this *is* an OS-specific problem: A professional 
> implementation of the POSIX standard (Solaris) gets all of 
> these correct.
> 
> Your so-called OS lacks any implementation of strptime, so we 
> borrowed one 
> from glibc.  Unfortunately, that is buggy, even to the extent that
> 
> unclass(strptime("2003-22-20", format="%Y-%m-%d"))
> unclass(strptime("2003 22 20", format="%Y %m %d"))
> 
> give different answers!  (And RH8.0 gives the same answers as the 
> substitute code used on R for Windows.)
> 
> I believe Simon Fear owes the R-developers a public apology 
> for his (not
> properly referenced in the archives) reply to this thread.
> 
> BDR
> 
> On Fri, 14 Nov 2003, Prof Brian Ripley wrote:
> 
> > On Fri, 14 Nov 2003, RINNER Heinrich wrote:
> > 
> > > Dear R-people!
> > > 
> > > I am using R 1.8.0, under Windows XP.
> > > While using ISOdate() and strptime(), I noticed the 
> following behaviour when
> > > "wrong" arguments (e.g., months>12) are given to these functions:
> > > 
> > > > ISOdate(year=2003,month=2,day=20) #ok
> > > [1] "2003-02-20 13:00:00 Westeuropäische Normalzeit"
> > > > ISOdate(year=2003,month=2,day=30) #wrong day, but 
> returns a value
> > > [1] "2003-03-02 13:00:00 Westeuropäische Normalzeit"
> > > > ISOdate(year=2003,month=2,day=35) #wrong day, and returns NA
> > > [1] NA
> > > > ISOdate(year=2003,month=2,day=40) #wrong day, but 
> returns a value
> > > [1] "2003-02-04 01:12:00 Westeuropäische Normalzeit"
> > > > ISOdate(year=2003,month=22,day=20) #wrong month, but 
> returns a value
> > > [1] "2003-02-02 21:12:00 Westeuropäische Normalzeit"
> > > 
> > > And almost the same with strptime():
> > > > strptime("2003-02-20", format="%Y-%m-%d")
> > > [1] "2003-02-20"
> > > > strptime("2003-02-30", format="%Y-%m-%d")
> > > [1] "2003-03-02"
> > > > strptime("2003-02-35", format="%Y-%m-%d")
> > > [1] NA
> > > > strptime("2003-02-40", format="%Y-%m-%d")
> > > [1] "2003-02-04"
> > > > strptime("2003-22-20", format="%Y-%m-%d")
> > > [1] NA
> > > 
> > > Is this considered to be a user error ("If you put 
> garbage in, expect to get
> > > garbage out"), or would it be safer to generally return NAs, as in
> > > ISOdate(year=2003,month=2,day=35)?
> > 
> > Expect to get the best guess at what you intended, and 
> expect this to 
> > depend on your OS.
> > 
> > 
> 
> -- 
> Brian D. Ripley,  [EMAIL PROTECTED]
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UKFax:  +44 1865 272595
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
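
For day-level sequences, one way to sidestep DST arithmetic altogether is
the time-zone-free Date class (added to R after the versions discussed in
this thread); since a Date carries no time component, a daylight-saving
transition cannot drop or shift a day:

```r
# DST-free day sequence: Date objects have no time-of-day, so the
# 29 March - 2 April 2003 range yields all five days in any time zone.
from <- as.Date("2003-03-29")
to   <- as.Date("2003-04-02")
d <- seq(from, to, by = "day")
format(d, "%Y%m%d")
```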


Re: [R] Parameter estimation in nls - ERRATUM

2003-11-25 Thread Dr Andrew Wilson
Martin Maechler has pointed out to me that I omitted to include my values
for x when circulating my query.  They are a simple rank list from 1 to
26:

x <- 1:26

Thanks,
Andrew

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Parameter estimation in nls

2003-11-25 Thread Peter Dalgaard
Dr Andrew Wilson <[EMAIL PROTECTED]> writes:

> I am trying to fit a rank-frequency distribution with 3 unknowns (a, b
> and k) to a set of data.
> 
> This is my data set:
> 
> y <- c(37047647,27083970,23944887,22536157,20133224,
> 20088720,18774883,18415648,17103717,13580739,12350767,
> 8682289,7496355,7248810,7022120,6396495,6262477,6005496,
> 5065887,4594147,2853307,2745322,454572,448397,275136,268771)
> 
> and this is the fit I'm trying to do:
> 
> nlsfit <- nls(y ~ a * x^k * b^x, start=list(a=5,k=1,b=3))
> 
> (It's a Yule distribution.)
> 
> However, I keep getting:
> 
> "Error in nls(y ~ a * x^k * b^x, start = list(a = 5, k = 1, b = 3)) : 
> singular gradient"
> 
> I guess this has something to do with the parameter start values.
> 
> I was wondering, is there a fully automated way of estimating parameters
> which doesn't need start values close to the final estimates?  I know
> other programs do it, so is it possible in R?

You don't seem to have an x anywhere. Are you making the (apparently
not uncommon) mistake of trying to use a program for fitting nonlinear
relations by least squares to fit a probability density? If so, look
for fitdistr() instead.

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Y axis scale in plot.gam

2003-11-25 Thread Simon Wood
> Is there any way to change the y axis range of values in a plot.gam()? I
> need that two different GAM plots to be of the same scale.

- Sorry, I don't think there is a simple way of doing it, except by
editing plot.gam to fix `ylim' by hand. It's an obvious thing to want
to do and I'll fix it for the next release.

> Also, it is possible to change the labels?

- The easiest thing to do is to use the `select' argument to plot 1 panel
at a time, with ann=FALSE (see ?par) to suppress annotation. Then use the
`title' function to add x and y labels.

_
> Simon Wood [EMAIL PROTECTED]   www.stats.gla.ac.uk/~simon/
>>  Department of Statistics, University of Glasgow, Glasgow, G12 8QQ
>>>   Direct telephone: (0)141 330 4530  Fax: (0)141 330 4814

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Parameter estimation in nls

2003-11-25 Thread Dr Andrew Wilson
I am trying to fit a rank-frequency distribution with 3 unknowns (a, b
and k) to a set of data.

This is my data set:

y <- c(37047647,27083970,23944887,22536157,20133224,
20088720,18774883,18415648,17103717,13580739,12350767,
8682289,7496355,7248810,7022120,6396495,6262477,6005496,
5065887,4594147,2853307,2745322,454572,448397,275136,268771)

and this is the fit I'm trying to do:

nlsfit <- nls(y ~ a * x^k * b^x, start=list(a=5,k=1,b=3))

(It's a Yule distribution.)

However, I keep getting:

"Error in nls(y ~ a * x^k * b^x, start = list(a = 5, k = 1, b = 3)) : 
singular gradient"

I guess this has something to do with the parameter start values.

I was wondering, is there a fully automated way of estimating parameters
which doesn't need start values close to the final estimates?  I know
other programs do it, so is it possible in R?

Thanks,
Andrew Wilson

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
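
One common way to get nls() start values for a model like
y = a * x^k * b^x is to log-linearise it: log(y) = log(a) + k*log(x) +
x*log(b) is linear in its parameters, so lm() supplies the starts. A
sketch using the data above (the fit may still fail if, as Peter Dalgaard
suggests in his reply, the model is wrong for the data):

```r
# Start values for nls() via log-linearisation of y = a * x^k * b^x.
x <- 1:26
y <- c(37047647,27083970,23944887,22536157,20133224,
       20088720,18774883,18415648,17103717,13580739,12350767,
       8682289,7496355,7248810,7022120,6396495,6262477,6005496,
       5065887,4594147,2853307,2745322,454572,448397,275136,268771)

# log(y) = log(a) + k*log(x) + x*log(b): linear in log(a), k, log(b)
fit0  <- lm(log(y) ~ log(x) + x)
start <- list(a = exp(unname(coef(fit0)[1])),
              k = unname(coef(fit0)[2]),
              b = exp(unname(coef(fit0)[3])))

# nls() from these starts; wrapped in try() since it can still fail
nlsfit <- try(nls(y ~ a * x^k * b^x, start = start))
if (!inherits(nlsfit, "try-error")) print(coef(nlsfit))
```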


RE: [R] Bollinger Bands

2003-11-25 Thread Heywood, Giles
You might wish to have a look at the 'its' package for irregular 
time-series on CRAN.  If your prices are in an its object called price, 
then the following will get you on your way.  Although it is not efficient 
in either storage or computation, it may be convenient for display, 
further processing, etc.

library(its)
...
tmp <- lagdistIts(diff(log(price)),1,20)
rollvol <- its(as.matrix(sqrt(apply(tmp, 1, var, na.rm = TRUE))))

- Giles

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: 24 November 2003 15:19
> To: [EMAIL PROTECTED]
> Subject: [R] Bollinger Bands
> 
> 
> Is there a way to create Bollinger Bands without having to loop on the
> observations of a time serie?
> 
> Any help appreciated
> 
> Thanks
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> 



__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
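
A loop-free alternative in base R: filter() gives a rolling mean, and the
identity var(x) = E[x^2] - (E[x])^2 gives a rolling (population) standard
deviation from two such filters. A sketch on made-up prices, with the
usual 20-period, 2-standard-deviation bands:

```r
# Loop-free Bollinger bands via stats::filter().
set.seed(1)
price <- cumsum(rnorm(100)) + 100        # hypothetical price series
n <- 20                                  # window length

m  <- filter(price,   rep(1/n, n), sides = 1)   # rolling mean
m2 <- filter(price^2, rep(1/n, n), sides = 1)   # rolling mean of squares
s  <- sqrt(pmax(m2 - m^2, 0))                   # rolling (population) SD

upper <- m + 2 * s                       # upper Bollinger band
lower <- m - 2 * s                       # lower Bollinger band
```

The first n-1 entries of each band are NA, since the window is not yet
full; sides = 1 makes each window trail its observation, as Bollinger
bands require.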