Re: [R] A question about is.integer function

2006-04-25 Thread Dieter Menne
Leon  gmail.com> writes:

> 
> Hi, All
> I am using the is.integer function to check whether an object is an
> integer, but I get these results:
> 
> > x<-3
> > is.integer(x)
> [1] FALSE

x <- 3
typeof(x)
[1] "double"

In hindsight this may not look like a wise decision, but it was probably 
made to avoid a conversion when, in the next step, you do

x[2] = 1.5

y <- as.integer(3)
 typeof(y)
[1] "integer"
 
> > x<-3:4
> > x
> [1] 3 4
> > is.integer(x)
> [1] TRUE

Here R seems to know that 3 is an integer, which I believe is a bit 
inconsistent, but sensible, because you mostly use integers here. Mostly, 
because

1.5:3.5 

gives a reasonable result

[1] 1.5 2.5 3.5
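
A minimal sketch of the distinction (using as.integer to force integer
storage; is.integer() tests the storage type, not whether the value is a
whole number):

```r
x <- 3             # stored as double
is.integer(x)      # FALSE
typeof(x)          # "double"

y <- as.integer(3) # force integer storage
is.integer(y)      # TRUE

z <- 3:4           # the colon operator produces integers
typeof(z)          # "integer"
```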

Dieter

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] A question about is.integer function

2006-04-25 Thread Leon
Hi, All
I am using the is.integer function to check whether an object is an
integer, but I get these results:

> x<-3
> is.integer(x)
[1] FALSE

> x<-3:4
> x
[1] 3 4
> is.integer(x)
[1] TRUE

It seems that is.integer cannot handle scalars:

> is.integer(5)
[1] FALSE

> is.integer(5:6)
[1] TRUE

Is this a bug in R, or have I made a mistake? I am using R 2.2.1 under
Windows XP.

Thanks a lot!

Leon





Re: [R] Titles in MAplots

2006-04-25 Thread Spencer Graves
  I'm replying now for three reasons:

  (1) I'd never heard of "MAplot" before;  I assume it must be a 
function in some package you've downloaded, but I don't know which (and 
I couldn't find it mentioned in your email).

  (2) I wondered if "MAplot" might have something to do with a moving 
average (MA in the time series lingo).

  (3) I haven't seen a reply to your email after a few days, so I 
thought I'd attempt to reply.

  RSiteSearch("MAplot") just produced 16 hits.  Have you reviewed 
these?  My review of some of these hits identified 3 different packages, 
none of which is currently downloadable from CRAN.  If you've reviewed 
these 16 hits, have you tried just listing the function by typing 
"MAplot" without parentheses at a command line?  If yes, have you tried 
copying the function into a script file, then saying "debug(MAplot)", 
then trying to execute an "MAplot(...)" command and following through 
the function line by line?

  I hope these comments might help you, but it's difficult for me to 
say whether they might help or not.  If your post were more consistent 
with the suggestions in the posting guide 
("www.R-project.org/posting-guide.html"), I might have been able to help 
more -- or you might have gotten a reply from someone else days ago.

  hope this helps.
  spencer graves
p.s.  I just found your post from 4/12 asking if there is "a way of 
reducing the label size".  If you can get access to the character 
strings, then "substring" can be used to extract any subset of the 
characters you like.  "nchar" will tell you how many characters there 
are.  "regexpr" will find the first occurrence of any specific 
character(s).  "strsplit" will split a longer string on each occurrence 
of a particular character.
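
For instance, with a made-up label (the file name here is hypothetical,
just to illustrate the four functions):

```r
lab <- "HG-U133A_chip01.CEL"
nchar(lab)             # 19: number of characters
substring(lab, 1, 8)   # "HG-U133A": the first eight characters
regexpr("_", lab)      # 9: position of the first "_"
strsplit(lab, "_")     # split into "HG-U133A" and "chip01.CEL"
```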

Brooks, Anthony B wrote:

> Hi
> Does anyone know how to set the titles in MAplots to just show the CEL file 
> name?
> So far I have;
>  
> #define 'Array' as object containing CEL names
> Array <- colnames(Data)
> #open bmp and make a separate bmp for each MAplot
> bmp("C:/MAplot%03d.bmp")
> #remove the annotation and minimise margins
> par(ann=FALSE)
> par(mar=c(1,1,1,1))
> #MAplot
> MAplot(Data...
>  
> Does anyone know the correct arguments? Do I need to create another parameter 
> value?
>  
> Tony
> 
> 



Re: [R] MacOSX package install problem: pkgs quadprog & tseries

2006-04-25 Thread Saptarshi Guha
Hi,
I had the same problem while compiling RGL. This is what I did (from  
http://wiki.urbanek.info/index.cgi?TigeR):
Edit these two files as root,

/Library/Frameworks/R.framework/Resources/bin/SHLIB
/Library/Frameworks/R.framework/Resources/etc/Makeconf

and comment out all references to cc_dynamic (I didn't remove anything  
else).
Should work after that.
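
A sketch of the same edit done from R instead of a text editor (assuming
write permission on the two files named above; it blanks out the
-lcc_dynamic flag rather than commenting whole lines, and keeps backups):

```r
## remove every "-lcc_dynamic" token from the two build files
for (f in c("/Library/Frameworks/R.framework/Resources/bin/SHLIB",
            "/Library/Frameworks/R.framework/Resources/etc/Makeconf")) {
    txt <- readLines(f)
    file.copy(f, paste(f, "bak", sep = "."))   # backup first
    writeLines(gsub("-lcc_dynamic", "", txt, fixed = TRUE), f)
}
```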

Rgds
Saptarshi

Saptarshi Guha | [EMAIL PROTECTED] | http://www.stat.purdue.edu/~sguha
When the going gets weird, the weird turn pro.
-- Hunter S. Thompson

On Apr 25, 2006, at 10:51 PM, William Asquith wrote:

>
> I upgraded to R-2.2.1 on two PPC G5 computers today. Further I want
> to work with the tseries package for the first time.
>
> As root with
>
> R CMD INSTALL  tseries_0.10-0.tar.gz
>
> I get the following
>
> gcc-3.3 -bundle -flat_namespace -undefined suppress -L/usr/local/lib -
> o tseries.so arma.o bdstest.o boot.o dsumsl.o garch.o ppsum.o
> tsutils.o -framework vecLib -L/usr/local/lib/gcc/powerpc-apple-
> darwin6.8/3.4.2 -lg2c -lSystem -L/usr/local/lib/gcc/powerpc-apple-
> darwin6.8/3.4.2 -lg2c -lSystem -lcc_dynamic -F/Library/Frameworks/
> R.framework/.. -framework R
> ld: can't locate file for: -lcc_dynamic
> make: *** [tseries.so] Error 1
> ERROR: compilation failed for package 'tseries'
> ** Removing '/Library/Frameworks/R.framework/Versions/2.2/Resources/
> library/tseries'
>
> The -lcc_dynamic flag is also a problem with quadprog. I've read the
> appropriate R manuals; however, I don't know where to go from here.
>
> Thanks,
>
> William Asquith (author package lmomco)
>



[R] register s3 object in package

2006-04-25 Thread Steven Lacey
Hi, 
 
I have a package with s4 methods that match against registered s3 methods
(using setOldClass). When I call R CMD INSTALL --build I get warnings that
methods are being created for objects with no definition. How do I embed 
setOldClass(c("pam","partition")) into the package so that when the s4
methods are generated the s3 pam object is already registered?
 
Thanks,
Steve





[R] MacOSX package install problem: pkgs quadprog & tseries

2006-04-25 Thread William Asquith

I upgraded to R-2.2.1 on two PPC G5 computers today. Further I want  
to work with the tseries package for the first time.

As root with

R CMD INSTALL  tseries_0.10-0.tar.gz

I get the following

gcc-3.3 -bundle -flat_namespace -undefined suppress -L/usr/local/lib - 
o tseries.so arma.o bdstest.o boot.o dsumsl.o garch.o ppsum.o  
tsutils.o -framework vecLib -L/usr/local/lib/gcc/powerpc-apple- 
darwin6.8/3.4.2 -lg2c -lSystem -L/usr/local/lib/gcc/powerpc-apple- 
darwin6.8/3.4.2 -lg2c -lSystem -lcc_dynamic -F/Library/Frameworks/ 
R.framework/.. -framework R
ld: can't locate file for: -lcc_dynamic
make: *** [tseries.so] Error 1
ERROR: compilation failed for package 'tseries'
** Removing '/Library/Frameworks/R.framework/Versions/2.2/Resources/ 
library/tseries'

The -lcc_dynamic flag is also a problem with quadprog. I've read the  
appropriate R manuals; however, I don't know where to go from here.

Thanks,

William Asquith (author package lmomco)



[R] R Installation problem

2006-04-25 Thread Wang, Meihua
Dear R users,

I was trying to install "R" on an ALPHA running OSF1/V5.1 for individual
use, but warning messages came out when I run "./configure" and "make".
The following is part of output:

% ./configure  --prefix=$home/r
checking build system type... alphaev67-dec-osf5.1b
checking host system type... alphaev67-dec-osf5.1b
loading site script './config.site'
loading build specific script './config.site'
checking for pwd... /bin/pwd
checking whether builddir is srcdir... yes
checking for working aclocal... found
checking for working autoconf... found
checking for working automake... found
checking for working autoheader... found
checking for working makeinfo... missing
checking for gawk... gawk
checking for egrep... grep -E
checking whether ln -s works... yes
checking for ranlib... ranlib
checking for bison... bison -y
checking for ar... ar
checking for a BSD-compatible install... tools/install-sh -c
checking for sed... /usr/psc/bin/sed
checking for less... /usr/psc/gnu/bin/less
checking for perl... /usr/psc/bin/perl
checking whether perl version is at least 5.004... yes
checking for dvips... no
checking for tex... no
checking for latex... no
configure: WARNING: you cannot build DVI versions of the R manuals
checking for makeindex... no
checking for pdftex... no
checking for pdflatex... no
configure: WARNING: you cannot build PDF versions of the R manuals

   
   ...

R is now configured for alphaev67-dec-osf5.1b

  Source directory:  .
  Installation directory:/usr/users/1/mwang2/r

  C compiler:gcc -mieee-with-inexact -g -O2
  C++ compiler:  g++ -mieee -g -O2
  Fortran compiler:  g77 -mieee -g -O2

  Interfaces supported:  X11
  External libraries:readline
  Additional capabilities:   MBCS, NLS
  Options enabled:   R profiling

  Recommended packages:  yes

configure: WARNING: you cannot build DVI versions of the R manuals
configure: WARNING: you cannot build info or html versions of the R
manuals
configure: WARNING: you cannot build PDF versions of the R manuals
% make
Make: Cannot open /share/make/vars.mk.  Stop.
%

I read "R Installation and Administration" and tried "gmake" instead of
"make"; unfortunately it didn't help either.

I was wondering if anyone could help me out.

Thanks,

Meihua





[R] Generalized linear mixed models

2006-04-25 Thread Peter Tait
Hi,

I would like to fit a generalized linear mixed model (glmm) with a 3 
level response.

My data is from a longitudinal study with multiple observations/patient 
and multiple patients / country.

Is there an R package that will fit a proportional odds, continuation 
ratio, or adjacent categories glmm?

Thanks
Peter



Re: [R] regression modeling

2006-04-25 Thread Frank E Harrell Jr
Berton Gunter wrote:
> May I offer a perhaps contrary perspective on this.
> 
> Statistical **theory** tells us that the precision of estimates improves as
> sample size increases. However, in practice, this is not always the case.
> The reason is that it can take time to collect that extra data, and things
> change over time. So the very definition of what one is measuring, the
> measurement technology by which it is measured (think about estimating tumor
> size or disease incidence or underemployment, for example), the presence or
> absence of known or unknown large systematic effects, and so forth may
> change in unknown ways. This defeats, or at least complicates, the
> fundamental assumption that one is sampling from a (fixed) population or
> stable (e.g. homogeneous, stationary) process, so it's no wonder that all
> statistical bets are off. Of course, sometimes the necessary information to
> account for these issues is present, and appropriate (but often complex)
> statistical analyses can be performed. But not always.
> 
> Thus, I am suspicious, cynical even, about those who advocate collecting
> "all the data" and subjecting the whole vast heterogeneous mess to arcane
> and ever more computer intensive (and adjustable parameter ridden) "data
> mining" algorithms to "detect trends" or "discover knowledge." To me, it
> sounds like a prescription for "turning on all the equipment and waiting to
> see what happens" in the science lab instead of performing careful,
> well-designed experiments.
> 
> I realize, of course, that there are many perfectly legitimate areas of
> scientific research, from geophysics to evolutionary biology to sociology,
> where one cannot (easily) perform planned experiments. But my point is that
> good science demands that in all circumstances, and especially when one
> accumulates and attempts to aggregate data taken over spans of time and
> space, one needs to beware of oversimplification, including statistical
> oversimplification. So interrogate the measurement, be skeptical of
> stability, expect inconsistency. While "all models are wrong but some are
> useful" (George Box), the second law tells us that entropy still rules.
> 
> (Needless to say, public or private contrary views are welcome).
> 
> -- Bert Gunter
> Genentech Non-Clinical Statistics
> South San Francisco, CA

Bert raises some great points.  Ignoring the important issues of
doing good research and stability in the meaning of data as time marches 
on, it is generally true that the larger the sample size the greater the 
complexity of the model we can afford to fit, and the better the fit of 
the model.  This is the "AIC" school.  The "BIC" school assumes there is 
an actual model out there waiting for us, of finite dimension, and the 
complexity of our models should not grow very fast as N increases.  I 
find the "AIC" approach gives me more accurate predictions.
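
A toy sketch of the two penalties (the data and models here are invented;
note that R's AIC() yields the BIC when the penalty k is set to log(n)):

```r
## compare a simple and a more complex model under the two penalties
n  <- 100
x  <- seq(0, 1, length = n)
y  <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
f1 <- lm(y ~ x)
f2 <- lm(y ~ poly(x, 5))
AIC(f1); AIC(f2)                           # AIC: penalty 2 per parameter
AIC(f1, k = log(n)); AIC(f2, k = log(n))   # BIC: penalty log(n) per parameter
```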
-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University



[R] lwd - Windows

2006-04-25 Thread Francisco J. Zagmutt
Dear all

Is there a way or trick in Windows to plot a line width that is not an 
integer, i.e. 1.5?
I am aware that the documentation for Windows devices states "Line widths as 
controlled by par(lwd=) are in multiples of the pixel size, and multiples < 
1 are silently converted to 1", but I was wondering if there is a workaround 
for this.

Also, IMHO the documentation for lwd in par may need some clarification 
since it states:
"The line width, a positive number, defaulting to 1. The interpretation is 
device-specific, and some devices do not implement line widths less than 
one."  Perhaps it would be useful for the users to describe the behavior for 
the most important devices, and also to state that the number is an integer 
(at least for windows) and not just a positive number?
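
A quick way to see the device difference (the file name is just an
example; vector devices such as pdf() do honor fractional widths, while
the windows() screen device rounds them):

```r
## lwd = 1.5 is honored on the pdf device
pdf("lwd-test.pdf")
plot(1:10, type = "l", lwd = 1.5)
dev.off()
```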


version

platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status
major2
minor2.1
year 2005
month12
day  20
svn rev  36812
language R


Thanks

Francisco



Re: [R] Help needed

2006-04-25 Thread Anamika Chaudhuri
As my earlier email said, I am not getting maxcls. I do get numbers coded as 
1, 2, 3 for BMIGRP (when I print BMIGRP), but not the max of (1,2,3), which 
should be 3, I guess.
   
  Thanks for your help
  Anamika
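
One possible explanation, sketched minimally: cut() returns NA for values
outside the outermost breaks, and max() propagates NA unless told
otherwise:

```r
## values of BMI outside (14, 57] become NA after cut(), and
## max() returns NA whenever an NA is present
v <- c(1, 2, 3, NA)
max(v)                # NA
max(v, na.rm = TRUE)  # 3
```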
  

Anamika Chaudhuri <[EMAIL PROTECTED]> wrote:
> dset1<-cbind(AGE,BMI,DEATH)
> BMIGRP<-cut(BMI,breaks=c(14,20,25,57),right=TRUE)
> levels(BMIGRP)
[1] "(14,20]" "(20,25]" "(25,57]"
> BMIGRP1<-as.numeric(BMIGRP)
> AGEGRP<-floor(AGE/10)-2
> dset<-cbind(AGEGRP,BMIGRP1,DEATH)
> maxage<-max(dset[,1])
> minage<-min(dset[,1])
> maxcls<-max(dset[,2])
> maxcls
[1] NA
   
  Why doesn't it give me a number for maxcls then?
   
  Thanks.
  

"Richard M. Heiberger" <[EMAIL PROTECTED]> wrote:
  > x <- rnorm(100)
> xx <- cut(x,3)
> levels(xx)
[1] "(-2.37,-0.716]" "(-0.716,0.933]" "(0.933,2.58]" 
> as.numeric(xx)






Re: [R] Plotting the correlation

2006-04-25 Thread Liaw, Andy
Try:

image.matrix <-
function(x, rlab = if (is.null(rownames(x))) 1:nrow(x) else rownames(x),
         clab = if (is.null(colnames(x))) 1:ncol(x) else colnames(x),
         cex.row = 0.7, cex.col = 0.7, main = deparse(substitute(x)), ...)
{
    op <- par(mgp = c(2, .3, 0))
    on.exit(par(op))
    nr <- nrow(x)
    nc <- ncol(x)
    image(1:nc, 1:nr, t(x)[, nr:1], axes = FALSE, xlab = "",
          ylab = "", main = main, ...)
    axis(2, 1:nr, rev(rlab), cex.axis = cex.col, tick = FALSE, las = 2)
    axis(1, 1:nc, clab, cex.axis = cex.row, tick = FALSE, las = 2)
    invisible(x)
}

image(yourMatrix)

Andy

From: Sasha Pustota
> 
> Suppose I compute a correlation matrix ...
> 
> r <- cor(log(rbind(iris3[,,1], iris3[,,2], iris3[,,3])))
> 
> and then want to plot the values in color, e.g.
> image(r, col = terrain.colors(200))
> 
> Why is the matrix in the plot rotated? The diagonal Rii=1 
> goes from the bottom left to the upper right corner.
> 
> Is there a way to plot the color values in the same 
> orientation as the correlation matrix below?
> 
> > print.table(r, digits=2)
>          Sepal L. Sepal W. Petal L. Petal W.
> Sepal L.     1.00    -0.11     0.84     0.80
> Sepal W.    -0.11     1.00    -0.47    -0.45
> Petal L.     0.84    -0.47     1.00     0.97
> Petal W.     0.80    -0.45     0.97     1.00
> 
> 
>



[R] Plotting the correlation

2006-04-25 Thread Sasha Pustota
Suppose I compute a correlation matrix ...

r <- cor(log(rbind(iris3[,,1], iris3[,,2], iris3[,,3])))

and then want to plot the values in color, e.g.
image(r, col = terrain.colors(200))

Why is the matrix in the plot rotated? The diagonal Rii=1 goes from
the bottom left to the upper right corner.

Is there a way to plot the color values in the same orientation as the
correlation matrix below?

> print.table(r, digits=2)
         Sepal L. Sepal W. Petal L. Petal W.
Sepal L.     1.00    -0.11     0.84     0.80
Sepal W.    -0.11     1.00    -0.47    -0.45
Petal L.     0.84    -0.47     1.00     0.97
Petal W.     0.80    -0.45     0.97     1.00
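
A minimal inline sketch of one possible fix: image() plots x[i, j] with i
along the x-axis and j running bottom-to-top, so transpose and reverse the
columns to put row 1 at the top:

```r
## draw the correlation matrix in the same orientation as the printed table
r <- cor(log(rbind(iris3[,,1], iris3[,,2], iris3[,,3])))
n <- nrow(r)
image(1:n, 1:n, t(r)[, n:1], col = terrain.colors(200),
      axes = FALSE, xlab = "", ylab = "")
axis(1, 1:n, colnames(r), las = 2)
axis(2, 1:n, rev(rownames(r)), las = 2)
```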



Re: [R] regression modeling

2006-04-25 Thread Berton Gunter
May I offer a perhaps contrary perspective on this.

Statistical **theory** tells us that the precision of estimates improves as
sample size increases. However, in practice, this is not always the case.
The reason is that it can take time to collect that extra data, and things
change over time. So the very definition of what one is measuring, the
measurement technology by which it is measured (think about estimating tumor
size or disease incidence or underemployment, for example), the presence or
absence of known or unknown large systematic effects, and so forth may
change in unknown ways. This defeats, or at least complicates, the
fundamental assumption that one is sampling from a (fixed) population or
stable (e.g. homogeneous, stationary) process, so it's no wonder that all
statistical bets are off. Of course, sometimes the necessary information to
account for these issues is present, and appropriate (but often complex)
statistical analyses can be performed. But not always.

Thus, I am suspicious, cynical even, about those who advocate collecting
"all the data" and subjecting the whole vast heterogeneous mess to arcane
and ever more computer intensive (and adjustable parameter ridden) "data
mining" algorithms to "detect trends" or "discover knowledge." To me, it
sounds like a prescription for "turning on all the equipment and waiting to
see what happens" in the science lab instead of performing careful,
well-designed experiments.

I realize, of course, that there are many perfectly legitimate areas of
scientific research, from geophysics to evolutionary biology to sociology,
where one cannot (easily) perform planned experiments. But my point is that
good science demands that in all circumstances, and especially when one
accumulates and attempts to aggregate data taken over spans of time and
space, one needs to beware of oversimplification, including statistical
oversimplification. So interrogate the measurement, be skeptical of
stability, expect inconsistency. While "all models are wrong but some are
useful" (George Box), the second law tells us that entropy still rules.

(Needless to say, public or private contrary views are welcome).

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 
"The business of the statistician is to catalyze the scientific learning
process."  - George E. P. Box
 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> Sent: Tuesday, April 25, 2006 12:10 PM
> To: bogdan romocea
> Cc: r-help
> Subject: Re: [R] regression modeling
> 
> I believe this is not a question related only to regression modeling. 
> The relationship between sample size and confidence of prediction in 
> data mining is not as clear as in the traditional statistical approach. 
> My concern is not that theoretical discussion but something more 
> practical: a good algorithm for when the response variable is 
> continuous and a large dataset is involved.
> 
> On 4/25/06, bogdan romocea <[EMAIL PROTECTED]> wrote:
> >
> > There is an aspect, worthy of careful consideration, you 
> don't seem to
> > be aware of. I'll ask the question for you: How does the
> > explanatory/predictive potential of a dataset vary as the 
> dataset gets
> > larger and larger?
> >
> >
> > > -Original Message-
> > > From: [EMAIL PROTECTED]
> > > [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> > > Sent: Monday, April 24, 2006 12:45 PM
> > > To: r-help
> > > Subject: [R] regression modeling
> > >
> > > Hi, there:
> > > I am looking for a regression modeling (like regression
> > > trees) approach for
> > > a large-scale industry dataset. Any suggestion on a package
> > > from R or from
> > > other sources which has a decent accuracy and scalability? Any
> > > recommendation from experience is highly appreciated.
> > >
> > > Thanks,
> > >
> > > Weiwei
> > >
> > > --
> > > Weiwei Shi, Ph.D
> > >
> > > "Did you always know?"
> > > "No, I did not. But I believed..."
> > > ---Matrix III
> > >
> > >
> > >
> >
> 
> 
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
> 
> 
>



Re: [R] Handling large dataset & dataframe

2006-04-25 Thread Sachin J
Mark:
   
  Thanks for the pointers. As suggested, I will explore the scan() method. 
   
  Andy:
   
  How can I use colClasses in my case? I tried it unsuccessfully, 
encountering the following error. 
  
coltypes <-
  c("numeric","factor","numeric","numeric","numeric","numeric","factor",
    "numeric","numeric","factor","factor","numeric","numeric","numeric",
    "numeric","numeric","numeric","numeric")

  mydf <- read.csv("C:/temp/data.csv", header=FALSE, colClasses = 
coltypes, strip.white=TRUE)

 ERROR: Error in scan(file = file, what = what, sep = sep, quote =  quote, dec 
= dec, : scan() expected 'a real', got 'V1'

  Thanks again.
   
  Sachin
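
One possible reading of that error, sketched minimally: the 'V1' suggests
the file's first row holds column names, while header=FALSE makes
read.csv() parse that row as data. Assuming that is the cause, something
along these lines may work (coltypes as defined above):

```r
## with a header row present, header = FALSE forces read.csv to treat
## the names ("V1", ...) as data, which fails for numeric columns
mydf <- read.csv("C:/temp/data.csv", header = TRUE,
                 colClasses = coltypes, strip.white = TRUE)
```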
  
"Liaw, Andy" <[EMAIL PROTECTED]> wrote:
  Much easier to use colClasses in read.table, and in many cases just as fast
(or even faster).

Andy

From: Mark Stephens
> 
> From ?scan: "the *type* of what gives the type of data to be 
> read". So list(integer(), integer(), double(), raw(), ...) In 
> your code all columns are being read as character regardless 
> of the contents of the character vector.
> 
> I have to admit that I have added the *'s in *type*. I have 
> been caught out by this too. It's not the most convenient way 
> to specify the types of a large number of columns either. As 
> you have a lot of columns you might want to do something like 
> this: as.list(rep(integer(1),250)), assuming your dummies 
> are together, to save typing. Also storage.mode() is useful 
> to tell you the precise type (and therefore size) of an 
> object e.g. sapply(coltypes,
> storage.mode) is actually the types scan() will use. Note 
> that 'numeric' could be 'double' or 'integer' which are 
> important in your case to fit inside the 1GB limit, because 
> 'integer' (4 bytes) is half 'double' (8 bytes).
> 
> Perhaps someone on r-devel could enhance the documentation to 
> make "type" stand out in capitals in bold in help(scan)? Or 
> maybe scan could be clever enough to accept a character 
> vector 'what'. Or maybe I'm missing a good reason why this 
> isn't possible - anyone? How about allowing a character 
> vector length one, with each character representing the type 
> of that column e.g. what="DDCD" would mean 4 integers 
> followed by 2 doubles followed by a character column, 
> followed finally by a double column, 8 columns in total. 
> Probably someone somewhere has done that already, but I'm not 
> aware anyone has wrapped it up conveniently?
> 
> On 25/04/06, Sachin J wrote:
> >
> > Mark:
> >
> > Here is the information I didn't provide in my earlier 
> post. R version 
> > is R2.2.1 running on Windows XP. My dataset has 16 variables with 
> > following data type.
> > ColNumber: 1 2 3 ...16
> > Datatypes:
> >
> > 
> > "numeric","numeric","numeric","numeric","numeric","numeric","character",
> > "numeric","numeric","character","character","numeric","numeric","numeric",
> > "numeric","numeric","numeric","numeric"
> >
> > Variable (2) which is numeric and variables denoted as 
> character are 
> > to be treated as dummy variables in the regression.
> >
> > Search in R help list suggested I can use read.csv with colClasses 
> > option also instead of using scan() and then converting it to 
> > dataframe as you suggested. I am trying both these methods 
> but unable 
> > to resolve syntactical error.
> >
> > >coltypes <-
> > c("numeric","factor","numeric","numeric","numeric","numeric","factor",
> > "numeric","numeric","factor","factor","numeric","numeric","numeric",
> > "numeric","numeric","numeric","numeric")
> >
> > >mydf <- read.csv("C:/temp/data.csv", header=FALSE, colClasses = 
> > >coltypes,
> > strip.white=TRUE)
> >
> > ERROR: Error in scan(file = file, what = what, sep = sep, quote = 
> > quote, dec = dec, :
> > scan() expected 'a real', got 'V1'
> >
> > No idea what's the problem.
> >
> > AS PER YOUR SUGGESTION I TRIED scan() as follows:
> >
> >
> > 
> > >coltypes<-c("numeric","factor","numeric","numeric","numeric","numeric",
> > "factor","numeric","numeric","factor","factor","numeric","numeric",
> > "numeric","numeric","numeric","numeric","numeric")
> > >x<-scan(file = 
> "C:/temp/data.dbf",what=as.list(coltypes),sep=",",quiet=TRUE,skip=1)
> >
> > >names(x)<-scan(file = "C:/temp/data.dbf",what="",nlines=1, sep=",")
> > >x<-as.data.frame(x)
> >
> > This is working fine but x has no data in it and contains
> > > x
> >
> > [1] X._. NA. NA..1 NA..2 NA..3 NA..4 NA..5 NA..6 
> NA..7 NA..8
> > NA..9 NA..10 NA..11
> > [14] NA..12 NA..13 NA..14 NA..15 NA..16
> > <0 rows> (or 0-length row.names)
> >
> > Please let me know how to properly use scan or colClasses option.
> >
> > Sachin
> >
> >
> >
> >
> >
> > Mark Stephens wrote:
> >
> > Sachin,
> > With your dummies stored as integer, the size of your object would 
> > appear to be 35 * (4*250 + 8*16) bytes = 376MB. You 
> said "PC" but 
> > did not provide R version information, assuming windows then ...
> > With 1GB RAM you should be able to load a 376MB object into 
> memory. If you
> > can store the dummies as 

[R] need automake/autoconf help to build RnetCDF and ncdf packages

2006-04-25 Thread Paul Johnson
I imagine this "where are your header files" problem comes up in other
packages, so I'm asking this as a general R question. How should
configure scripts be re-written so they look in more places?


Briefly, the problem is that Fedora-Extras installs the header files
in a subdirectory /usr/include/netcdf-3 rather than /usr/include:

# rpm -ql netcdf-devel
/usr/include/netcdf-3
/usr/include/netcdf-3/ncvalues.h
/usr/include/netcdf-3/netcdf.h
/usr/lib/netcdf-3/libnetcdf.a
/usr/lib/netcdf-3/libnetcdf_c++.a
/usr/lib/netcdf-3/libnetcdf_g77.a

Last week I posted in this list that I re-built the Fedora-Extras
netcdf rpm so that it would have more standard installation, and then
I was able to make RNetCDF work.

In the meantime, I posted in bugzilla.redhat.com asking if they
might use the standard packaging, but their response is an adamant
refusal:

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=189734

When netcdf updates are issued in the Fedora-Extras network, the
special hacks I put in to un-do their special hacks are lost, and
netcdf programs don't work anymore.

The attempt to build "ncdf" fails inside R or on the command line, but
it gives a GOOD HINT about a command-line workaround:

# R CMD INSTALL ncdf_1.5.tar.gz
[...]

checking /sw/include/netcdf.h presence... no
checking for /sw/include/netcdf.h... no

Fatal error: I cannot find the directory that holds the netcdf include
file netcdf.h!
You can specify it as follows:
  ./configure --with-netcdf_incdir=directory_with_file_netcdf.h

 *** Special note for R CMD INSTALL users: *
 The syntax for specifying multiple --configure-args does not seem to be
 well documented in R.  If you have installed the netcdf include and library
 directories in some non-standard location, you can specify BOTH these
 during the R CMD INSTALL process using the following syntax:

   R CMD INSTALL
--configure-args="-with-netcdf_incdir=/path/to/netcdf/incdir
-with-netcdf_libdir=/path/to/netcdf/libdir" ncdf_1.1.tar.gz

 where you should, of course, specify your own netcdf include and library
 directories, and the actual package name.
 ***


I found that the following did work!

# R CMD INSTALL
--configure-args="-with-netcdf_incdir=/usr/include/netcdf-3
-with-netcdf_libdir=/usr/lib/netcdf-3" ncdf_1.5.tar.gz

It is not the best solution, because special administrative effort is
required, and the "install.packages" approach inside R won't work.

However, with RNetCDF, the problem is slightly worse, and no such
helpful message appears:

# R CMD INSTALL  RNetCDF_1.1-3.tar.gz
* Installing *source* package 'RNetCDF' ...
checking for gcc... gcc
checking for C compiler default output... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for executable suffix...
checking for object suffix... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for main in -lnetcdf... no
configure: error: netcdf library not found
ERROR: configuration failed for package 'RNetCDF'
** Removing '/usr/lib/R/library/RNetCDF'


I have no reason to doubt that the Fedora-Extras authors are right,
and that some changes in the configure scripts for these packages are
required.

In RNetCDF's configure.ac file, I see the place where it specifies
NETCDF_INCDIR:

if test -z "${NETCDF_PATH}"; then
AC_CHECK_FILE(/usr/local/include/netcdf.h,
[USR_LOCAL_NETCDF_H=TRUE], [USR_LOCAL_NETCDF_H=FALSE])
if test "${USR_LOCAL_NETCDF_H}" = TRUE; then
NETCDF_INCDIR="/usr/local/include"
NETCDF_LIBDIR="/usr/local/lib"
NETCDF_LIBNAME="netcdf"
HAVE_NETCDF_H=TRUE
elif test "${HAVE_NETCDF_H}" = FALSE; then
AC_CHECK_FILE(/usr/include/netcdf.h,
[USR_NETCDF_H=TRUE], [USR_NETCDF_H=FALSE])
if test "${USR_NETCDF_H}" = TRUE; then
NETCDF_INCDIR="/usr/include"
NETCDF_LIBDIR="/usr/lib"
NETCDF_LIBNAME="netcdf"
HAVE_NETCDF_H=TRUE
fi
fi
else
NETCDF_INCDIR="${NETCDF_PATH}/include"
NETCDF_LIBDIR="${NETCDF_PATH}/lib"
NETCDF_LIBNAME="netcdf"
AC_CHECK_FILE(${NETCDF_INCDIR}/netcdf.h,
[INCDIR_NETCDF_H=TRUE], [INCDIR_NETCDF_H=FALSE])
if test "${INCDIR_NETCDF_H}" = TRUE; then
HAVE_NETCDF_H=TRUE
fi

fi

I've tried fiddling around in this, and then typing

#autoconf configure.ac > newconfigure

sh ./newconfigure

But it always ends the same:

checking for main in -lnetcdf... no
: error: netcdf library not found

So, is there somebody here who knows how configure scripts ought to be
written to accommodate this?
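One possible shape for a fix (an untested sketch, not the maintainers' code; the Fedora-Extras paths are simply the ones that worked for ncdf above) is to add one more branch to RNetCDF's search, and to put the non-standard library directory on the linker path before the library test runs:

```sh
# Untested sketch for RNetCDF's configure.ac: also try the Fedora-Extras
# netcdf-3 layout before giving up.
if test "${HAVE_NETCDF_H}" != TRUE; then
    AC_CHECK_FILE(/usr/include/netcdf-3/netcdf.h,
        [FEDORA_NETCDF_H=TRUE], [FEDORA_NETCDF_H=FALSE])
    if test "${FEDORA_NETCDF_H}" = TRUE; then
        NETCDF_INCDIR="/usr/include/netcdf-3"
        NETCDF_LIBDIR="/usr/lib/netcdf-3"
        NETCDF_LIBNAME="netcdf"
        HAVE_NETCDF_H=TRUE
        # Without this, "checking for main in -lnetcdf" cannot find the
        # library, which is the failure shown above.
        LDFLAGS="-L${NETCDF_LIBDIR} ${LDFLAGS}"
    fi
fi
```

The LDFLAGS line matters because the "main in -lnetcdf" check links a trivial program against -lnetcdf, and that link can only succeed if the non-standard library directory is on the search path.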



--
Paul E. Johnson
Professor, Political Science
1541 Lilac Lane, Room 504
University of Kansas

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] trellis.par.get without opening a device?

2006-04-25 Thread Dieter Menne
Deepayan Sarkar  gmail.com> writes:
> > I am using the Deepayan's Sweave trick to set graphics parameters for all
> > graphs:
> >
> > ltheme = canonical.theme(color=TRUE)
> > sup = trellis.par.get("superpose.line")
> > ltheme$superpose.line$col = c('black',"red","blue","#e3","green",
> > "gray")
> > 
> 
> Why do you need to call trellis.par.get? I don't see you using 'sup'
> (maybe in the ... part), and I don't see why you would need to.
> 

Oh, shame... you are right, the sup is complete nonsense. It's a relic of my 
standard way of finding out what's going on inside the Sarkastic world of lattice 
parameters. Anyone volunteering to make show.settings() self-documenting 
whenever a new parameter comes in?

Thanks a lot. Mea stupid.

Dieter



Re: [R] Handling large dataset & dataframe

2006-04-25 Thread Liaw, Andy
Much easier to use colClasses in read.table, and in many cases just as fast
(or even faster).

Andy
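As a sketch of that route (the column names, classes and file path are made up; note also that the error Sachin reports below, scan() expected 'a real', got 'V1', is consistent with passing header=FALSE for a file whose first line actually is a header, so the header text gets parsed under the declared numeric classes):

```r
## Hypothetical 4-column CSV with a header row; colClasses is matched to
## columns by position, so it needs one entry per column.
coltypes <- c("numeric", "factor", "numeric", "numeric")
mydf <- read.csv("C:/temp/data.csv", header = TRUE,
                 colClasses = coltypes, strip.white = TRUE)
sapply(mydf, class)  # verify each column got the intended class
```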

From: Mark Stephens
> 
> From ?scan: "the *type* of what gives the type of data to be 
> read". So list(integer(), integer(), double(), raw(), ...) In 
> your code all columns are being read as character regardless 
> of the contents of the character vector.
> 
> I have to admit that I have added the *'s in *type*.  I have 
> been caught out by this too.  Its not the most convenient way 
> to specify the types of a large number of columns either.  As 
> you have a lot of columns you might want to do something like 
> this:  as.list(rep(integer(1),250)), assuming your dummies 
> are together, to save typing.  Also storage.mode() is useful 
> to tell you the precise type (and therefore size) of an 
> object e.g. sapply(coltypes,
> storage.mode) is actually the types scan() will use.  Note 
> that 'numeric' could be 'double' or 'integer' which are 
> important in your case to fit inside the 1GB limit, because 
> 'integer' (4 bytes) is half 'double' (8 bytes).
> 
> Perhaps someone on r-devel could enhance the documentation to 
> make "type" stand out in capitals in bold in help(scan)?  Or 
> maybe scan could be clever enough to accept a character 
> vector 'what'.  Or maybe I'm missing a good reason why this 
> isn't possible - anyone? How about allowing a character 
> vector length one, with each character representing the type 
> of that column e.g.  what="IIIIDDCD" would mean 4 integers 
> followed by 2 doubles followed by a character column, 
> followed finally by a double column,  8 columns in total.  
> Probably someone somewhere has done that already, but I'm not 
> aware anyone has wrapped it up conveniently?
> 
> On 25/04/06, Sachin J <[EMAIL PROTECTED]> wrote:
> >
> >  Mark:
> >
> > Here is the information I didn't provide in my earlier 
> post. R version 
> > is R2.2.1 running on Windows XP.  My dataset has 16 variables with 
> > following data type.
> > ColNumber:   1  2  3  ...16
> > Datatypes:
> >
> > 
> "numeric","numeric","numeric","numeric","numeric","numeric","character
> > 
> ","numeric","numeric","character","character","numeric","numeric","num
> > eric","numeric","numeric","numeric","numeric"
> >
> > Variable (2) which is numeric and variables denoted as 
> character are 
> > to be treated as dummy variables in the regression.
> >
> > Search in R help list  suggested I can use read.csv with colClasses 
> > option also instead of using scan() and then converting it to 
> > dataframe as you suggested. I am trying both these methods 
> but unable 
> > to resolve syntactical error.
> >
> > >coltypes<-
> > 
> c("numeric","factor","numeric","numeric","numeric","numeric","factor",
> > 
> "numeric","numeric","factor","factor","numeric","numeric","numeric","n
> > umeric","numeric","numeric","numeric")
> >
> > >mydf <- read.csv("C:/temp/data.csv", header=FALSE, colClasses = 
> > >coltypes,
> > strip.white=TRUE)
> >
> > ERROR: Error in scan(file = file, what = what, sep = sep, quote = 
> > quote, dec = dec,  :
> > scan() expected 'a real', got 'V1'
> >
> > No idea what's the problem.
> >
> > AS PER YOUR SUGGESTION I TRIED scan() as follows:
> >
> >
> > 
> >coltypes<-c("numeric","factor","numeric","numeric","numeric","numeric
> > 
> >","factor","numeric","numeric","factor","factor","numeric","n
> umeric","numeric","numeric","numeric","numeric","numeric")
> > >x<-scan(file = 
> "C:/temp/data.dbf",what=as.list(coltypes),sep=",",quiet=TRUE,skip=1)
> >
> > >names(x)<-scan(file = "C:/temp/data.dbf",what="",nlines=1, sep=",")
> > >x<-as.data.frame(x)
> >
> > This is working fine but x has no data in it and contains
> > > x
> >
> >  [1] X._.   NA.NA..1  NA..2  NA..3  NA..4  NA..5  NA..6 
>  NA..7  NA..8
> > NA..9  NA..10 NA..11
> > [14] NA..12 NA..13 NA..14 NA..15 NA..16
> > <0 rows> (or 0-length row.names)
> >
> > Please let me know how to properly use scan or colClasses option.
> >
> > Sachin
> >
> >
> >
> >
> >
> > *Mark Stephens <[EMAIL PROTECTED]>* wrote:
> >
> > Sachin,
> > With your dummies stored as integer, the size of your object would 
> > appear to be 35 * (4*250 + 8*16) bytes = 376MB. You 
> said "PC" but 
> > did not provide R version information, assuming windows then ...
> > With 1GB RAM you should be able to load a 376MB object into 
> memory. If you
> > can store the dummies as 'raw' then object size is only 126MB.
> > You don't say how you attempted to load the data. Assuming 
> your input data
> > is in text file (or can be) have you tried scan()? Setup the 'what'
> > argument
> > with length 266 and make sure the dummy column are set to 
> integer() or
> > raw(). Then x = scan(...); class(x)=" data.frame".
> > What is the result of memory.limit()? If it is 256MB or 
> 512MB, then try
> > starting R with --max-mem-size=800M (I forget the syntax 
> exactly). Leave a
> > bit of room below 1GB. Once the object is in memory R may 
> need to copy it
>

[R] variable labels in pairs

2006-04-25 Thread Christos Hatzis
Hello,

I am using 'pairs' to produce a scatter plot matrix with a custom
upper.panel function to plot the Pearson's correlation coefficients for the
pairs of variables.  

I would like to be able to use the actual variable names as subscripts in
rho in the printed text.  I know these labels are accessible to diag.panel
but cannot find a good way to access them within panel.cor.

Any suggestions?

Thank you.
-Christos  

### --- code snip ---

panel.cor <-
function(x, y, digits=3, subscripts, groups, cex.cor=2) {
usr <- par("usr"); on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
r <- abs(cor(x, y))
txt <- format(c(r, 0.123456789), digits=digits)[1]
txt <- substitute(italic(rho) == txt)
text(0.5, 0.5, txt, cex = cex.cor, col="blue")
}
pairs(my.df, upper.panel=panel.cor)
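One possible workaround, offered as an untested sketch rather than a supported idiom: wrap the panel function in a closure that carries the variable names and a call counter. It assumes graphics::pairs() draws the upper panels row by row in order, which is not a documented guarantee.

```r
make.panel.cor <- function(df, digits = 3, cex.cor = 2) {
    nms <- names(df)
    nc <- length(nms)
    ## (i, j) index pairs, j > i, in the assumed drawing order
    idx <- do.call(rbind, lapply(seq_len(nc - 1),
                                 function(i) cbind(i, (i + 1):nc)))
    k <- 0
    function(x, y, ...) {
        k <<- k + 1   # which panel is being drawn
        lab <- paste(nms[idx[k, 1]], nms[idx[k, 2]], sep = ",")
        usr <- par("usr"); on.exit(par(usr))
        par(usr = c(0, 1, 0, 1))
        r <- abs(cor(x, y))
        txt <- substitute(italic(rho)[lab] == r,
                          list(lab = lab, r = format(r, digits = digits)))
        text(0.5, 0.5, txt, cex = cex.cor, col = "blue")
    }
}
pairs(my.df, upper.panel = make.panel.cor(my.df))
```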



Re: [R] Handling large dataset & dataframe

2006-04-25 Thread Mark Stephens
From ?scan: "the *type* of what gives the type of data to be read".
So list(integer(), integer(), double(), raw(), ...)
In your code all columns are being read as character regardless of the
contents of the character vector.

I have to admit that I have added the *'s in *type*.  I have been caught out
by this too.  It's not the most convenient way to specify the types of a
large number of columns either.  As you have a lot of columns you might want
to do something like this:  as.list(rep(integer(1),250)), assuming your
dummies are together, to save typing.  Also storage.mode() is useful to tell
you the precise type (and therefore size) of an object e.g. sapply(coltypes,
storage.mode) is actually the types scan() will use.  Note that 'numeric'
could be 'double' or 'integer' which are important in your case to fit
inside the 1GB limit, because 'integer' (4 bytes) is half 'double' (8
bytes).
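The size difference is easy to check directly:

```r
## integer storage is half the size of double storage
x <- integer(1e6)
y <- double(1e6)
storage.mode(x)  # "integer"
object.size(x)   # roughly 4 MB: 4 bytes per element
object.size(y)   # roughly 8 MB: 8 bytes per element
```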

Perhaps someone on r-devel could enhance the documentation to make "type"
stand out in capitals in bold in help(scan)?  Or maybe scan could be clever
enough to accept a character vector 'what'.  Or maybe I'm missing a good
reason why this isn't possible - anyone? How about allowing a character
vector length one, with each character representing the type of that column
e.g.  what="IIIIDDCD" would mean 4 integers followed by 2 doubles followed
by a character column, followed finally by a double column,  8 columns in
total.  Probably someone somewhere has done that already, but I'm not aware
anyone has wrapped it up conveniently?
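Applied to the code quoted below, the point about 'what' looks like this (a sketch for a hypothetical 4-column comma-separated file with one header line):

```r
## 'what' must be a list of typed templates, not a character vector of
## type names -- as.list(coltypes) makes every column character.
what <- list(integer(0), double(0), character(0), double(0))
x <- scan("C:/temp/data.csv", what = what, sep = ",", skip = 1,
          quiet = TRUE)
names(x) <- scan("C:/temp/data.csv", what = "", nlines = 1, sep = ",")
sapply(x, storage.mode)  # check the types scan actually used
x <- as.data.frame(x)
```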

On 25/04/06, Sachin J <[EMAIL PROTECTED]> wrote:
>
>  Mark:
>
> Here is the information I didn't provide in my earlier post. R version is
> R2.2.1 running on Windows XP.  My dataset has 16 variables with following
> data type.
> ColNumber:   1  2  3  ...16
> Datatypes:
>
> "numeric","numeric","numeric","numeric","numeric","numeric","character","numeric","numeric","character","character","numeric","numeric","numeric","numeric","numeric","numeric","numeric"
>
> Variable (2) which is numeric and variables denoted as character are to be
> treated as dummy variables in the regression.
>
> Search in R help list  suggested I can use read.csv with colClasses option
> also instead of using scan() and then converting it to dataframe as you
> suggested. I am trying both these methods but unable to resolve syntactical
> error.
>
> >coltypes<-
> c("numeric","factor","numeric","numeric","numeric","numeric","factor","numeric","numeric","factor","factor","numeric","numeric","numeric","numeric","numeric","numeric","numeric")
>
> >mydf <- read.csv("C:/temp/data.csv", header=FALSE, colClasses = coltypes,
> strip.white=TRUE)
>
> ERROR: Error in scan(file = file, what = what, sep = sep, quote = quote,
> dec = dec,  :
> scan() expected 'a real', got 'V1'
>
> No idea what's the problem.
>
> AS PER YOUR SUGGESTION I TRIED scan() as follows:
>
>
> >coltypes<-c("numeric","factor","numeric","numeric","numeric","numeric","factor","numeric","numeric","factor","factor","numeric","numeric","numeric","numeric","numeric","numeric","numeric")
> >x<-scan(file = 
> >"C:/temp/data.dbf",what=as.list(coltypes),sep=",",quiet=TRUE,skip=1)
>
> >names(x)<-scan(file = "C:/temp/data.dbf",what="",nlines=1, sep=",")
> >x<-as.data.frame(x)
>
> This is working fine but x has no data in it and contains
> > x
>
>  [1] X._.   NA.NA..1  NA..2  NA..3  NA..4  NA..5  NA..6  NA..7  NA..8
> NA..9  NA..10 NA..11
> [14] NA..12 NA..13 NA..14 NA..15 NA..16
> <0 rows> (or 0-length row.names)
>
> Please let me know how to properly use scan or colClasses option.
>
> Sachin
>
>
>
>
>
> *Mark Stephens <[EMAIL PROTECTED]>* wrote:
>
> Sachin,
> With your dummies stored as integer, the size of your object would appear
> to be 350,000 * (4*250 + 8*16) bytes = 376MB.
> You said "PC" but did not provide R version information, assuming windows
> then ...
> With 1GB RAM you should be able to load a 376MB object into memory. If you
> can store the dummies as 'raw' then object size is only 126MB.
> You don't say how you attempted to load the data. Assuming your input data
> is in text file (or can be) have you tried scan()? Setup the 'what'
> argument
> with length 266 and make sure the dummy column are set to integer() or
> raw(). Then x = scan(...); class(x)=" data.frame".
> What is the result of memory.limit()? If it is 256MB or 512MB, then try
> starting R with --max-mem-size=800M (I forget the syntax exactly). Leave a
> bit of room below 1GB. Once the object is in memory R may need to copy it
> once, or a few times. You may need to close all other apps in memory, or
> send them to swap.
> I don't really see why your data should not fit into the memory you have.
> Purchasing an extra 1GB may help. Knowing the object size calculation (as
> above) should help you gauge whether it is worth it. Leave a
> Have you used process monitor to see the memory growing as R loads the
> data? This can be useful.
> If all the above fails, then consider 64

Re: [R] trellis.par.get without opening a device?

2006-04-25 Thread Deepayan Sarkar
On 4/24/06, Dieter Menne <[EMAIL PROTECTED]> wrote:
> I am using the Deepayan's Sweave trick to set graphics parameters for all
> graphs:
>
> ltheme = canonical.theme(color=TRUE)
> sup = trellis.par.get("superpose.line")
> ltheme$superpose.line$col = c('black',"red","blue","#e3","green",
> "gray")
> 

Why do you need to call trellis.par.get? I don't see you using 'sup'
(maybe in the ... part), and I don't see why you would need to.

> Works perfectly, there is only a minor nuissance that trellis.par.get opens
> a device every time, producing a dummy Rplots.ps file or a window (when run
> after Stangle).

An Rplots.ps may still be produced by Sweave (I forget the reason, but
something to do with the need for an explicit print), but shouldn't
happen when running the R code separately.

Deepayan



Re: [R] www.r-project.org

2006-04-25 Thread Spencer Graves


hadley wickham wrote:

>>Isn't Other > Contributed Documentation sufficient? Usability guidelines

> However, this is a heuristic for the number of top-level categories -
> there is no reason why there could not be a direct link to contributed
> documentation from the home page.

I find it a minor inconvenience to have to select a CRAN mirror before I 
can look at the on-line documentation for contributed packages.  Might 
it make sense to store the last CRAN mirror used in a cookie and offer 
that as the default when you want to transfer people to a mirror?  (I 
don't think it should be totally hidden from the user, because 
occasionally one CRAN mirror will have problems that don't affect 
others.)  Again, I wouldn't have anyone change this "just for me", but 
if you thought it might make the web site more friendly for others (AND 
it could be fairly easily done), then you might consider it.

  Best wishes, spencer graves

> Hadley
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



Re: [R] www.r-project.org

2006-04-25 Thread hadley wickham
> Isn't Other > Contributed Documentation sufficient? Usability guidelines
> for websites suggest that you should have as few top-level menu items as
> possible, say 5-6 max... OK the R website is not like
> .com type website, but you wouldn't want to flood
> users with too many options up front.

7 +/- 2 is the number that is usually bandied about (I think based on
the number of items most people can hold in their short-term memory).

However, this is a heuristic for the number of top-level categories -
there is no reason why there could not be a direct link to contributed
documentation from the home page.

Hadley



Re: [R] Heteroskedasticity in Tobit models

2006-04-25 Thread Thomas Lumley
On Tue, 25 Apr 2006, Alan Spearot wrote:

> Hello,
>
> I've had no luck finding an R package that has the ability to estimate a
> Tobit model allowing for heteroskedasticity (multiplicative, for example).
> Am I missing something in survReg?  Is there another package that I'm
> unaware of?  Is there an add-on package that will test for
> heteroskedasticity?

If you mean survreg() [rather than survReg(), which is in S-PLUS] then it 
can estimate models where the variance depends on a discrete covariate  by 
adding a strata() term to the model formula. For example:

> survreg(Surv(futime, fustat) ~ ecog.ps+strata(rx), data = ovarian,
+ dist = "weibull")
Call:
survreg(formula = Surv(futime, fustat) ~ ecog.ps + strata(rx),
 data = ovarian, dist = "weibull")

Coefficients:
(Intercept) ecog.ps
   8.0159674  -0.5940253

Scale:
  rx=1  rx=2
1.2047759 0.5605876

Loglik(model)= -96.2   Loglik(intercept only)= -97.1
 Chisq= 1.68 on 1 degrees of freedom, p= 0.2
n= 26

is a weibull model with different variance depending on the value of rx.


-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle



Re: [R] www.r-project.org

2006-04-25 Thread Spencer Graves
Hi, Gabor:  
Gabor Grothendieck wrote:

> On Windows, right click the web page, choose Properties and
> copy the url there.

That works, and I will use it in the future.  Thanks.

However, if the subject is not "educating Spencer Graves" but how to 
make "www.r-project.org" more user-friendly, then it still might help to 
display as "Address" the actual web address of the archive page rather 
than "www.r-project.org".  It may not look as pretty, but I'm for 
function first and cosmetics only if they don't interfere with 
functionality.

  Best Wishes,
  spencer graves
> 
> On 4/25/06, Spencer Graves <[EMAIL PROTECTED]> wrote:
> 
>>
>>
>>hadley wickham wrote:
>>
>>
The R Web site is working fine. 
>>>
>>>As an experienced user of the R website, this probably is true for
>>>you.  However, there are a number of confusing problems for new users
>>>of the site:
>>>
>>> * how do you download R?
>>> * how do you bookmark a specific page?
>>
>>*** If I find something with "R Site Search" on "www.r-project.org", I
>>can NOT just copy the web address into an email, because the "address"
>>is still "www.r-project.org".  However, if I use RSiteSearch from within
>> R, I get an honest address (like
>>"http://finzi.psych.upenn.edu/R/Rhelp02a/archive/47417.html";), which I
>>can then paste into an email like this.
>>
>> If it weren't too difficult to display the address for each item
>>retrieved from the archives, it would make it easier it use "R Site
>>Search" without opening R.
>>
>> Thanks to all the core R team, including Jonathan Baron, whose
>>support of "R Site Search" has prevented me from "tearing my hair out"
>>on many occasions (and I don't have much left to tear out).  When people
>>ask me questions about S-Plus, I often go to "R Site Search", and then
>>see if I can somehow use in S-Plus any R solution I find.
>>
>> Best Wishes,
>> spencer graves
>>p.s.  I also have a strong preference for avoiding fancy features.  I've
>>been burned so many times with viruses and software that never performed
>>as advertized for many unknown reasons that I routinely check "no" when
>>asked if I want to install "Micromedia Flash", and I hope I won't have
>>to install it to use a future version of "www.r-project.org".
>>
>>
>>> * what is that giant graphic on the home page?
>>>
>>>Hadley
>>>
>>>__
>>>R-help@stat.math.ethz.ch mailing list
>>>https://stat.ethz.ch/mailman/listinfo/r-help
>>>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>>
>>__
>>R-help@stat.math.ethz.ch mailing list
>>https://stat.ethz.ch/mailman/listinfo/r-help
>>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
> 
>>



Re: [R] Heteroskedasticity in Tobit models

2006-04-25 Thread roger koenker
Powell's quantile regression method for censored data is available in the
quantreg package:  rq(..., method="fcen", ...)


url:   www.econ.uiuc.edu/~roger        Roger Koenker
email: [EMAIL PROTECTED]               Department of Economics
vox:   217-333-4558                    University of Illinois
fax:   217-244-6678                    Champaign, IL 61820


On Apr 25, 2006, at 2:07 PM, Alan Spearot wrote:

> Hello,
>
> I've had no luck finding an R package that has the ability to  
> estimate a
> Tobit model allowing for heteroskedasticity (multiplicative, for  
> example).
> Am I missing something in survReg?  Is there another package that I'm
> unaware of?  Is there an add-on package that will test for
> heteroskedasticity?
>
> Thanks for your help.
>
> Cheers,
> Alan Spearot
>
> --
> Alan Spearot
> Department of Economics
> University of Wisconsin - Madison
> [EMAIL PROTECTED]
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting- 
> guide.html



Re: [R] regression modeling

2006-04-25 Thread Weiwei Shi
I believe this is not a question related only to regression modeling. The
relationship between sample size and confidence of prediction in data
mining is not as clear as in the traditional statistical approach. My concern
is not that theoretical discussion but something more practical: finding a
good algorithm for a large dataset when the response variable is continuous.

On 4/25/06, bogdan romocea <[EMAIL PROTECTED]> wrote:
>
> There is an aspect, worthy of careful consideration, you don't seem to
> be aware of. I'll ask the question for you: How does the
> explanatory/predictive potential of a dataset vary as the dataset gets
> larger and larger?
>
>
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> > Sent: Monday, April 24, 2006 12:45 PM
> > To: r-help
> > Subject: [R] regression modeling
> >
> > Hi, there:
> > I am looking for a regression modeling (like regression
> > trees) approach for
> > a large-scale industry dataset. Any suggestion on a package
> > from R or from
> > other sources which has a decent accuracy and scalability? Any
> > recommendation from experience is highly appreciated.
> >
> > Thanks,
> >
> > Weiwei
> >
> > --
> > Weiwei Shi, Ph.D
> >
> > "Did you always know?"
> > "No, I did not. But I believed..."
> > ---Matrix III
> >
> >   [[alternative HTML version deleted]]
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html
> >
>



--
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

[[alternative HTML version deleted]]



[R] Heteroskedasticity in Tobit models

2006-04-25 Thread Alan Spearot
Hello,

I've had no luck finding an R package that has the ability to estimate a
Tobit model allowing for heteroskedasticity (multiplicative, for example).
Am I missing something in survReg?  Is there another package that I'm
unaware of?  Is there an add-on package that will test for
heteroskedasticity?

Thanks for your help.

Cheers,
Alan Spearot

--
Alan Spearot
Department of Economics
University of Wisconsin - Madison
[EMAIL PROTECTED]

[[alternative HTML version deleted]]



Re: [R] www.r-project.org

2006-04-25 Thread Manuel López-Ibáñez
Barry Rowlingson wrote:
>   The frame-based nature of the CRAN pages is slightly problematic, 
> since you click on a menu item and the URL doesn't change. Hence there's 
> no way to send someone a URL that gives them the same view as you'd get 
> if you go to the home page and then click on 'screenshots', for example.
> 
> Sure you can send them to:
> 
> http://www.r-project.org/screenshots/screenshots.html
> 
> but then they dont see the menu.
> 

Moreover, the menu is also lost when an element of the menu is opened 
in a new window (or tab).

> It probably wouldn't take long to bash out a serviceable replacement 
> using something like HTML::Mason but then you'd have to find a hosting 
> provider that supported it (or PHP or IYFTLH[1]). I dont think it 
> warrants a full-on CMS given the size of www.r-project.org (not 
> including CRAN stuff). I'd just hack up some m4 scripts and 'include' 
> the menu into a flat file.
> 

I have a perl script that reads a number of HTML files looking for 
"include directives" which instruct it to put at that point the content 
of another file.

For example, file "index.html" may contain the line:

<!--#include file="menu.htm"-->

The script replaces that line with the content of "menu.htm".

Thus, the files from the site are processed before being uploaded to the 
web site in order to create the final html pages. There is no need for 
Apache directives, no scripts, no PHP, no CMS. And more important, no 
frames!

Here is the script. You may consider it GPL ;)

#!/usr/bin/perl -w

@files = `ls *.html`;
$outdir = "../";

`mkdir -p $outdir`; #or die "Cannot create $outdir";

FILE: foreach $file (@files) {
    $file =~ s/\n//g;
    open(INPUT, "<$file") or (warn("*** Cannot read $file\n") and next FILE);
    @buffer = <INPUT>;
    close INPUT;

    LINE: foreach $line (@buffer) {
        if ($line =~ /<!--#include file="(.+?)"-->/) {
            open(INPUT, "<$1") or
                (warn("*** Cannot read $1 included in $file\n") and next LINE);
            $temp = join("", <INPUT>);
            close INPUT;
            $line =~ s[<!--#include file="$1"-->]
                      [\n$temp];
            print "\t$1 included in $file\n";
        }
    }
    open(OUTPUT, ">$outdir/$file") or
        (warn("*** Cannot write $outdir/$file\n") and next FILE);
    print OUTPUT @buffer;
    close OUTPUT;
    print "$file processed\n";
}






Re: [R] www.r-project.org

2006-04-25 Thread Gabor Grothendieck
On Windows, right click the web page, choose Properties and
copy the url there.

On 4/25/06, Spencer Graves <[EMAIL PROTECTED]> wrote:
> 
>
> hadley wickham wrote:
>
> >>The R Web site is working fine. 
> >
> > As an experienced user of the R website, this probably is true for
> > you.  However, there are a number of confusing problems for new users
> > of the site:
> >
> >  * how do you download R?
> >  * how do you bookmark a specific page?
>
> *** If I find something with "R Site Search" on "www.r-project.org", I
> can NOT just copy the web address into an email, because the "address"
> is still "www.r-project.org".  However, if I use RSiteSearch from within
>  R, I get an honest address (like
> "http://finzi.psych.upenn.edu/R/Rhelp02a/archive/47417.html";), which I
> can then paste into an email like this.
>
>  If it weren't too difficult to display the address for each item
> retrieved from the archives, it would make it easier it use "R Site
> Search" without opening R.
>
>  Thanks to all the core R team, including Jonathan Baron, whose
> support of "R Site Search" has prevented me from "tearing my hair out"
> on many occasions (and I don't have much left to tear out).  When people
> ask me questions about S-Plus, I often go to "R Site Search", and then
> see if I can somehow use in S-Plus any R solution I find.
>
>  Best Wishes,
>  spencer graves
> p.s.  I also have a strong preference for avoiding fancy features.  I've
> been burned so many times with viruses and software that never performed
> as advertized for many unknown reasons that I routinely check "no" when
> asked if I want to install "Micromedia Flash", and I hope I won't have
> to install it to use a future version of "www.r-project.org".
>
> >  * what is that giant graphic on the home page?
> >
> > Hadley
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>



Re: [R] www.r-project.org

2006-04-25 Thread Spencer Graves


hadley wickham wrote:

>>The R Web site is working fine. 
> 
> As an experienced user of the R website, this probably is true for
> you.  However, there are a number of confusing problems for new users
> of the site:
> 
>  * how do you download R?
>  * how do you bookmark a specific page?

*** If I find something with "R Site Search" on "www.r-project.org", I 
can NOT just copy the web address into an email, because the "address" 
is still "www.r-project.org".  However, if I use RSiteSearch from within 
  R, I get an honest address (like 
"http://finzi.psych.upenn.edu/R/Rhelp02a/archive/47417.html";), which I 
can then paste into an email like this.

  If it weren't too difficult to display the address for each item 
retrieved from the archives, it would make it easier to use "R Site 
Search" without opening R.

  Thanks to all the core R team, including Jonathan Baron, whose 
support of "R Site Search" has prevented me from "tearing my hair out" 
on many occasions (and I don't have much left to tear out).  When people 
ask me questions about S-Plus, I often go to "R Site Search", and then 
see if I can somehow use in S-Plus any R solution I find.

  Best Wishes,
  spencer graves
p.s.  I also have a strong preference for avoiding fancy features.  I've 
been burned so many times by viruses and software that never performed 
as advertised for many unknown reasons that I routinely check "no" when 
asked if I want to install "Macromedia Flash", and I hope I won't have 
to install it to use a future version of "www.r-project.org".

>  * what is that giant graphic on the home page?
> 
> Hadley
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] www.r-project.org

2006-04-25 Thread Gabor Grothendieck
It's not hard if you know what to do, but if you don't, it's a nuisance
to figure it out every time.

On 4/25/06, Gavin Simpson <[EMAIL PROTECTED]> wrote:
> On Tue, 2006-04-25 at 14:09 -0400, Gabor Grothendieck wrote:
> > On 4/25/06, hadley wickham <[EMAIL PROTECTED]> wrote:
> > > > The R Web site is working fine. Even if it has not had a facelift for a long
> > > > time, it is functional. So, this is the point... and it should remain,
> > > > at least, as functional as it is.
> > >
> > > As an experienced user of the R website, this probably is true for
> > > you.  However, there are a number of confusing problems for new users
> > > of the site:
> > >
> > >  * how do you download R?
> > >  * how do you bookmark a specific page?
> > >  * what is that giant graphic on the home page?
> >
> > * can't get to contributed docs directly from home page
>
> Isn't Other > Contributed Documentation sufficient? Usability guidelines
> for websites suggest that you should have as few top-level menu items as
> possible, say 5-6 max... OK the R website is not like
> .com type website, but you wouldn't want to flood
> users with too many options up front.
>
> G
>
> --
> %~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
> *  Note new Address, Telephone & Fax numbers from 6th April 2006  *
> %~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
> Gavin Simpson
> ECRC & ENSIS  [t] +44 (0)20 7679 0522
> UCL Department of Geography   [f] +44 (0)20 7679 0565
> Pearson Building  [e] gavin.simpsonATNOSPAMucl.ac.uk
> Gower Street  [w] http://www.ucl.ac.uk/~ucfagls/cv/
> London, UK.   [w] http://www.ucl.ac.uk/~ucfagls/
> WC1E 6BT.
> %~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
>
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] www.r-project.org

2006-04-25 Thread Gavin Simpson
On Tue, 2006-04-25 at 14:09 -0400, Gabor Grothendieck wrote:
> On 4/25/06, hadley wickham <[EMAIL PROTECTED]> wrote:
> > > The R Web site is working fine. Even if it has not had a facelift for a long
> > > time, it is functional. So, this is the point... and it should remain,
> > > at least, as functional as it is.
> >
> > As an experienced user of the R website, this probably is true for
> > you.  However, there are a number of confusing problems for new users
> > of the site:
> >
> >  * how do you download R?
> >  * how do you bookmark a specific page?
> >  * what is that giant graphic on the home page?
> 
> * can't get to contributed docs directly from home page

Isn't Other > Contributed Documentation sufficient? Usability guidelines
for websites suggest that you should have as few top-level menu items as
possible, say 5-6 max... OK the R website is not like
.com type website, but you wouldn't want to flood
users with too many options up front.

G

-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
*  Note new Address, Telephone & Fax numbers from 6th April 2006  *
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson 
ECRC & ENSIS  [t] +44 (0)20 7679 0522
UCL Department of Geography   [f] +44 (0)20 7679 0565
Pearson Building  [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street  [w] http://www.ucl.ac.uk/~ucfagls/cv/
London, UK.   [w] http://www.ucl.ac.uk/~ucfagls/
WC1E 6BT.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%



Re: [R] www.r-project.org

2006-04-25 Thread Gabor Grothendieck
On 4/25/06, hadley wickham <[EMAIL PROTECTED]> wrote:
> > The R Web site is working fine. Even if it has not had a facelift for a long
> > time, it is functional. So, this is the point... and it should remain,
> > at least, as functional as it is.
>
> As an experienced user of the R website, this probably is true for
> you.  However, there are a number of confusing problems for new users
> of the site:
>
>  * how do you download R?
>  * how do you bookmark a specific page?
>  * what is that giant graphic on the home page?

* can't get to contributed docs directly from home page



Re: [R] Questions to RDCOMClient

2006-04-25 Thread Gabor Grothendieck
On 4/25/06, Dr. Michael Wolf <[EMAIL PROTECTED]> wrote:
>
> 3. RDCOMCLient and Excel Manual
> ===
>
> Do you know a good overview of using Excel VBA code via RDCOMClient (e. g.
> sh$Select())? Are there people interested in working out such a paper? I
> could contribute some experiences from my work to such a project (e. g.
> deleting Excel shapes from R and copying new charts made by R to a special
> position in an Excel sheet.)

Normally what I do is just create whatever spreadsheet I want in Excel
with the Excel macro recorder turned on and then look at the macro output
and translate that to RDCOMClient.  There do exist some books on
VBA programming in Excel (I don't have any myself but have taken one
out from the library once) that could be helpful if the macro approach is
not sufficient.
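That workflow might look roughly like this (a hypothetical sketch, not code from the thread; it assumes Windows with Excel and the RDCOMClient package installed, so it will not run elsewhere, and the recorded VBA shown in the comments is just an example):

```r
## Hypothetical sketch (Windows only, RDCOMClient installed): VBA
## captured by the Excel macro recorder, e.g.
##   Range("A1").Value = 42
##   Range("A1").Font.Bold = True
## translates almost mechanically into COM calls from R.
library(RDCOMClient)

xls <- COMCreate("Excel.Application")
xls[["Visible"]] <- TRUE
wb  <- xls[["Workbooks"]]$Add()
sh  <- wb$Worksheets(1)

rng <- sh$Range("A1")
rng[["Value"]] <- 42             # Range("A1").Value = 42
rng[["Font"]][["Bold"]] <- TRUE  # Range("A1").Font.Bold = True

wb$SaveAs("C:\\tmp\\demo.xls")
xls$Quit()
```

Properties map to `[[ ]]` and methods to `$name(...)` in RDCOMClient, which is what makes the recorder output easy to transcribe.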



Re: [R] pnorm2

2006-04-25 Thread Adelchi Azzalini
On Mon, Apr 24, 2006 at 03:26:14PM +0100, Tolga Uzuner wrote:
> Hi,
> 
> Has pnorm2 been dropped from sn? I have some code using it which seems to
> fail with a new sn update...
> 
> Thanks,
> Tolga
> 

yes, I have dropped it, since pmnorm() of package mnormt 
provides the same facility in a far more general form

...in fact I never realized pnorm2 was being used by anyone :-)

if for some reason you prefer not to load mnormt, 
then I can put pnorm2 back in sn


best wishes,

Adelchi Azzalini
-- 
Adelchi Azzalini  <[EMAIL PROTECTED]>
Dipart.Scienze Statistiche, Università di Padova, Italia
tel. +39 049 8274147,  http://azzalini.stat.unipd.it/
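For anyone updating old code, the replacement Adelchi describes might be wrapped like this (a sketch only; I am assuming the old pnorm2(x, y, rho) computed the standard bivariate normal CDF with correlation rho, and that the mnormt package is installed):

```r
## Hypothetical drop-in replacement for the old sn::pnorm2, built on
## mnormt::pmnorm.  Assumes pnorm2(x, y, rho) returned P(X <= x, Y <= y)
## for a standard bivariate normal with correlation rho.
library(mnormt)

pnorm2 <- function(x, y, rho) {
  pmnorm(c(x, y), mean = c(0, 0),
         varcov = matrix(c(1, rho, rho, 1), nrow = 2))
}

pnorm2(0, 0, 0)    # independence: 0.5 * 0.5 = 0.25
pnorm2(0, 0, 0.5)  # 1/4 + asin(0.5)/(2*pi) = 1/3
```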



Re: [R] www.r-project.org

2006-04-25 Thread Philippe Grosjean
OK, I'll try...
What do you think about the fonts and styles of titles versus text in... 
http://wiki.r-project.org?

This is obviously along the lines of: should we change the styles in the CSS 
file (while we are replacing frames with appropriate styles too)?

Maybe I am a little confused. Normal. After 14 hours of continuous 
programming in R, I think that anybody starts to have serious 
trouble :-(

OK... back to work. I still have 3 or 4 hours to finish this $¤!§&çà 
program...

Philippe Grosjean

Jonathan Baron wrote:
> I volunteer to attempt this, but only after I get my grades in
> (May 8).  If it gets done by someone else before that, I'll be
> happy.
> 
> Don't worry.  It won't look like my personal page, or even my R
> page.  But I do know quite a bit about CSS.
> 
> On 04/25/06 12:37, Dirk Eddelbuettel wrote:
> 
>>On 25 April 2006 at 13:18, Jonathan Baron wrote:
>>| The only thing I might change is to replace the frames with some
>>| sort of CSS-based positioning.
>>
>>Yes please!
>>
>>Dirk
>>
>>--
>>Hell, there are no rules here - we're trying to accomplish something.
>>  -- Thomas A. Edison
> 
> 
> I love this quote.  He really did say something like it.
> 
> Jon



Re: [R] www.r-project.org

2006-04-25 Thread bogdan romocea
I agree it would be worthwhile to make some cosmetic changes to
r-project.org (nothing fancy though - no javascript, Flash etc). The
general public may not be fully aware of how R compares to other
statistical software, and I doubt that a web site which looks like it
was put together 10 years ago helps bend the perceptions in the right
direction. (Also, can someone finally change the graph on the first
page??)


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of roger bos
> Sent: Tuesday, April 25, 2006 1:09 PM
> To: Romain Francois
> Cc: RHELP
> Subject: Re: [R] www.r-project.org
>
> While there is nothing about the r-project site that I would
> consider fancy,
> it is pretty functional.  I would be interested to hear more
> about what you
> hope to accomplish by re-doing the web site.  Fancy graphics
> may just slow
> down the experience for those not on broadband.  After all,
> the r-help list
> doesn't even like HTML in email, so it may not like too much
> fancy stuff on
> its website either.
>
>
>
>
> On 4/25/06, Romain Francois <[EMAIL PROTECTED]> wrote:
> >
> > Dear R users and developers,
> >
> > My question is addressed to both of you, so I chose R-help
> to post it.
> >
> > Are there any plans to jazz up the main R website :
> > http://www.r-project.org
> > The look it has now has been the same for a long time and is kind of sad
> > compared to other statistical packages' websites. Of course, the
> > comparison is not fair, since companies are paying web
> designers to draw
> > lollipop websites ...
> >
> > My first idea was to organize some kind of web design contest.
> > But, I had a small talk with Friedrich Leisch about that,
> who said that
> > I shouldn't expect too many competitors.
> > So, what about creating a small team, building a home page project and
> > then proposing it to the core team?
> > It goes without saying: the core team has the final word.
> >
> > What do you think ? Who would like to play ?
> >
> > Romain
> >
> > --
> > visit the R Graph Gallery : http://addictedtor.free.fr/graphiques
> > mixmod 1.7 is released :
> http://www-math.univ-fcomte.fr/mixmod/index.php
> > +---+
> > | Romain FRANCOIS - http://francoisromain.free.fr   |
> > | Doctorant INRIA Futurs / EDF  |
> > +---+
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html
> >
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>



[R] Questions to RDCOMClient

2006-04-25 Thread Dr. Michael Wolf
Dear list members,

I'm using R in connection with the RDCOMClient and Excel. The more I use the
package, the more I'm fascinated by it. The possibilities of R can be
brought together with the need to output my socio-economic
research results in MS Office!

But I have some special questions concerning the use of RDCOMClient and
perhaps you can help me solve them:

1. Problems with closing the COM-applications
=

My R procedure structure looks as follows:

# loading the packages
library(RDCOMClient)
source(system.file("examples", "excelUtils.r", package="RDCOMClient"))

# opening the Excel file
dnXls <- paste(pfArb, "/RVP#StO-Analyse_VersVerw.xls", sep="")  # Xls file name + path
xls <- COMCreate("Excel.Application")
xls[["Workbooks"]]$Open("C:\\tmp\\test.xls")  # note: backslashes must be escaped

sh <- xls$Sheets("Tab.1")
sh$Select()

::: [ here comes R code that produces output like tables and
:::   thematic map charts and transfers them to Excel via
:::   RDCOMClient / everything is working fine!!! ... ]

# 'shutting down' Excel

xls[["Visible"]] <- TRUE
xls[["Workbooks"]]$Close()
xls$Quit()
rm(list=c("xls", "sh"))
gc()


The same approach is used by several examples in the package.  So far so good!
But sometimes when I try to open the Excel file afterwards from the MS
Explorer, Excel seems to open but doesn't show the file I clicked to open.
When I open the task manager I often find an Excel process still running -
even though I closed Excel with the procedure above and didn't try to reopen
Excel via MS Explorer. What's going wrong in my procedure? Why isn't the
Excel process terminated?


2. RDCOMCLient and Windows 98
=

At home I sometimes use my old computer with Windows 98 and Excel 2000
(that may seem funny to many of you, but I use my old "babe" for Internet
surfing and sometimes I have to run some R procedure!). When trying to use
this operating system in connection with RDCOMClient I get a warning message
that the DLL tries to change an FPU control word from 8001f to 9001f (this is
the English translation of the German message!). I set the path to %R_HOME%
and also the variable R_HOME to the bin directory of R in the autoexec.bat
file. So why doesn't RDCOMClient run? Does this package work together with
Windows 98 or did I forget some steps when installing the RDCOMClient
package? (Please don't send any answers that I should use a computer with a
newer OS. I still do - mostly!)


3. RDCOMCLient and Excel Manual
===

Do you know a good overview of using Excel VBA code via RDCOMClient (e. g.
sh$Select())? Are there people interested in working out such a paper? I
could contribute some experiences from my work to such a project (e. g.
deleting Excel shapes from R and copying new charts made by R to a special
position in an Excel sheet.)


Thanks in advance for your hints!

Greetings from Germany

Dr. Michael Wolf
Von-Schonebeck-Ring 18, 48161 Münster
Tel.:   02533/2466
E-Mail: [EMAIL PROTECTED]



Re: [R] www.r-project.org

2006-04-25 Thread hadley wickham
> The R Web site is working fine. Even if it has not had a facelift for a long
> time, it is functional. So, this is the point... and it should remain,
> at least, as functional as it is.

As an experienced user of the R website, this probably is true for
you.  However, there are a number of confusing problems for new users
of the site:

 * how do you download R?
 * how do you bookmark a specific page?
 * what is that giant graphic on the home page?

Hadley



Re: [R] www.r-project.org

2006-04-25 Thread Dirk Eddelbuettel

On 25 April 2006 at 13:18, Jonathan Baron wrote:
| The only thing I might change is to replace the frames with some
| sort of CSS-based positioning.  

Yes please!  

Dirk

-- 
Hell, there are no rules here - we're trying to accomplish something. 
  -- Thomas A. Edison



Re: [R] www.r-project.org

2006-04-25 Thread Jonathan Baron
I volunteer to attempt this, but only after I get my grades in
(May 8).  If it gets done by someone else before that, I'll be
happy.

Don't worry.  It won't look like my personal page, or even my R
page.  But I do know quite a bit about CSS.

On 04/25/06 12:37, Dirk Eddelbuettel wrote:
> 
> On 25 April 2006 at 13:18, Jonathan Baron wrote:
> | The only thing I might change is to replace the frames with some
> | sort of CSS-based positioning.
> 
> Yes please!
> 
> Dirk
> 
> --
> Hell, there are no rules here - we're trying to accomplish something.
>   -- Thomas A. Edison

I love this quote.  He really did say something like it.

Jon
-- 
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron
R search page: http://finzi.psych.upenn.edu/



Re: [R] www.r-project.org

2006-04-25 Thread Clint Bowman
I must agree with Jon--the site is clean and material is easily found and
readily available.  I wouldn't want changes which, while visually
stimulating, would detract from the clarity of presentation.

What I don't see is a way for the visitor to contact the Web page
maintainer to comment or suggest changes.

Clint

Clint BowmanINTERNET:   [EMAIL PROTECTED]
Air Dispersion Modeler  INTERNET:   [EMAIL PROTECTED]
Air Quality Program VOICE:  (360) 407-6815
Department of Ecology   FAX:(360) 407-7534

USPS:   PO Box 47600, Olympia, WA 98504-7600
Parcels:300 Desmond Drive, Lacey, WA 98503-1274

On Tue, 25 Apr 2006, Jonathan Baron wrote:

> On 04/25/06 18:53, Romain Francois wrote:
> > Dear R users and developers,
> >
> > My question is addressed to both of you, so I chose R-help to post it.
> >
> > Are there any plans to jazz up the main R website : http://www.r-project.org
> > The look it has now has been the same for a long time and is kind of sad
> > compared to other statistical packages' websites. Of course, the
> > comparison is not fair, since companies are paying web designers to draw
> > lollipop websites ...
>
> I don't think it is sad at all.  I think it is one of the few
> sites I visit that is accessible, is quick to load, conforms to
> standards, uses my fonts instead of forcing me to get nose prints
> on the monitor, is informative, has minimal mindless glitz, and
> works in any browser.
>
> The only thing I might change is to replace the frames with some
> sort of CSS-based positioning.  HOWEVER, the new version of
> Internet Explorer may totally destroy the usefulness of CSS, so
> maybe it is better to leave things as they are for now.
>
> Jon
> --
> Jonathan Baron, Professor of Psychology, University of Pennsylvania
> Home page: http://www.sas.upenn.edu/~baron
> Editor: Judgment and Decision Making (http://journal.sjdm.org)
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>



Re: [R] www.r-project.org

2006-04-25 Thread Barry Rowlingson
roger bos wrote:
> While there is nothing about the r-project site that I would consider fancy,
> it is pretty functional.  I would be interested to hear more about what you
> hope to accomplish by re-doing the web site.  Fancy graphics may just slow
> down the experience for those not on broadband.  After all, the r-help list
> doesn't even like HTML in email, so it may not like too much fancy stuff on
> its website either.
> 

  The frame-based nature of the CRAN pages is slightly problematic, 
since you click on a menu item and the URL doesn't change. Hence there's 
no way to send someone a URL that gives them the same view as you'd get 
if you go to the home page and then click on 'screenshots', for example.

Sure you can send them to:

http://www.r-project.org/screenshots/screenshots.html

but then they don't see the menu.

Frames make for simplification of page creation (the menu is in one HTML 
file and doesn't need to be included on every page) at the expense of 
usability. Template and content management systems solved this a while ago.

It probably wouldn't take long to bash out a serviceable replacement 
using something like HTML::Mason but then you'd have to find a hosting 
provider that supported it (or PHP or IYFTLH[1]). I don't think it 
warrants a full-on CMS given the size of www.r-project.org (not 
including CRAN stuff). I'd just hack up some m4 scripts and 'include' 
the menu into a flat file.

Perhaps someone could write a web site template system in R...

Another option would be to make it completely web 2.0, round the 
corners, write some ajax, add some blog links, tag soup section[2]

Barry

[1] Insert Your Favourite Template Language Here

[2] Joke



Re: [R] www.r-project.org

2006-04-25 Thread Philippe Grosjean
Romain,
The R Web site is working fine. Even if it has not had a facelift for a long 
time, it is functional. So, this is the point... and it should remain, 
at least, as functional as it is.

One aspect that could easily be given a new look is the CSS file. I would 
definitely be in favor of a more styled CSS. I mean, there are now new 
fonts around that are designed to be more readable than Times, Helvetica 
and Courier on screen, and equally fine in printed material. 
With CSS, it is always possible to define several fonts for one style, 
so that the style "degrades" nicely in case of missing fonts. So this 
kind of change is safe, even for very old computers.

It would be wonderful if we could get a more up-to-date CSS file for the 
R docs, for the Web site, and I would use the same for the R Wiki. That 
way, we would get homogeneity in the presentation.

So, I definitely encourage you to make (microsurgical) proposals to 
update the presentation of the R Web site, and I will follow the 
decision of the R Core Team on this topic to make the R Wiki look 
similar.

Best,

Philippe Grosjean

Romain Francois wrote:
> Dear R users and developers,
> 
> My question is addressed to both of you, so I chose R-help to post it.
> 
> Are there any plans to jazz up the main R website : http://www.r-project.org
> The look it has now has been the same for a long time and is kind of sad 
> compared to other statistical packages' websites. Of course, the 
> comparison is not fair, since companies are paying web designers to draw 
> lollipop websites ...
> 
> My first idea was to organize some kind of web design contest.
> But, I had a small talk with Friedrich Leisch about that, who said that 
> I shouldn't expect too many competitors.
> So, what about creating a small team, building a home page project and 
> then proposing it to the core team?
> It goes without saying: the core team has the final word.
> 
> What do you think ? Who would like to play ?
> 
> Romain
>



Re: [R] www.r-project.org

2006-04-25 Thread hadley wickham
Hi Romain,

Generally a competition is a bad way of coming up with a new design.
It tends to emphasise looks over function and, of course, requires
people to put in a lot of effort for a small chance of gain. I'd also
agree with Friedrich that few people will enter.

My opinion is that if people thought it was truly important to have a
better R homepage, it would be better to start a collection and hire a
professional developer.  Otherwise, there is a very real risk that we
will end up with something worse than the current site.

Hadley

On 4/25/06, Romain Francois <[EMAIL PROTECTED]> wrote:
> Dear R users and developers,
>
> My question is addressed to both of you, so I chose R-help to post it.
>
> Are there any plans to jazz up the main R website : http://www.r-project.org
> The look it has now has been the same for a long time and is kind of sad
> compared to other statistical packages' websites. Of course, the
> comparison is not fair, since companies are paying web designers to draw
> lollipop websites ...
>
> My first idea was to organize some kind of web design contest.
> But, I had a small talk with Friedrich Leisch about that, who said that
> I shouldn't expect too many competitors.
> So, what about creating a small team, building a home page project and
> then proposing it to the core team?
> It goes without saying: the core team has the final word.
>
> What do you think ? Who would like to play ?
>
> Romain
>
> --
> visit the R Graph Gallery : http://addictedtor.free.fr/graphiques
> mixmod 1.7 is released : http://www-math.univ-fcomte.fr/mixmod/index.php
> +---+
> | Romain FRANCOIS - http://francoisromain.free.fr   |
> | Doctorant INRIA Futurs / EDF  |
> +---+
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>



[R] NA in dummy regression coefficients

2006-04-25 Thread Sachin J
I'm running a regression model with dummy variables and getting NA 
for some coefficients. I believe this is due to a singularity problem. How can I 
exclude some of the dummy variables from the regression model in R to take care 
of this issue? I read in the R help that the lm() method takes care of this issue 
automatically, but in my case it's not happening. Any pointers would be of great 
help.
   
Regression Model:

reg06 <- lm(mydf$y ~ mydf$x1 + factor(mydf$x2) + factor(mydf$x3) +
            factor(mydf$x4) + mydf$x5, singular.ok = TRUE)

Thanks in advance

Sachin


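The behaviour Sachin describes can be reproduced with a small example (hypothetical data, not Sachin's; x3 is deliberately a copy of x2, so one dummy is aliased). lm() fits the model anyway, reports NA for the redundant coefficient, and alias() shows which term could be dropped:

```r
## Hypothetical data: x3 duplicates x2, so one dummy is linearly
## dependent and lm() reports its coefficient as NA.
set.seed(1)
mydf <- data.frame(y  = rnorm(20),
                   x1 = rnorm(20),
                   x2 = gl(2, 10),
                   x3 = gl(2, 10))   # perfectly collinear with x2

fit <- lm(y ~ x1 + factor(x2) + factor(x3), data = mydf)
coef(fit)    # the factor(x3) coefficient comes back as NA (aliased)
alias(fit)   # names the linearly dependent term

## One remedy: drop the redundant term explicitly
fit2 <- update(fit, . ~ . - factor(x3))
coef(fit2)   # no NAs
```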



Re: [R] Running R on Windows 2000 Terminal Services

2006-04-25 Thread Gavin Simpson
On Tue, 2006-04-25 at 17:32 +0100, Barry Rowlingson wrote:
> Gavin Simpson wrote:
> > Dear list,
> > 
> > My employer uses a Windows 2000 Terminal Server-based system for its
> > college-wide managed computer service - computers connect directly to
> > the WTS servers for their sessions, using a Citrix ICA client. When I
> > asked them to install R (Windows) on an older version of this service
> > the IT guys installed it but pulled it for performance issues. I am
> > trying to get them to try again but receiving little encouragement from
> > them.
> 
>   'performance issues'? Well, if you have 100 students running MCMC 
> simulations on one Windows 2000 TS box then you may well have 
> 'performance issues'!

I think they meant more along the lines of: one user running it - doing
nothing with R, as the guys don't know how to use R - was causing issues,
rather than CPU load hitting 100% across all the 4-8 processors per
server (and there are lots of servers).

>   Perhaps the TS service isn't intended for people to do real computer 
> work on, but is just for Office apps. Then you come along and want your 
> students to do serious number crunching. At that point the MS Word 
> writers experience what we used to call 'lag'.

The system has SPSS, various Adobe products (Photoshop & Illustrator)
and tonnes of other apps I would consider more "number crunching" than
R, so I don't think this was a problem. I was deliberately vague as I
don't know what the actual problem was - we aren't allowed to know who
these IT people are, but I have asked to speak to one of the WTS people to
see what the problem is. If I turn up anything I'll email R-Devel to see
if this is an R thing or a local thing.

> 
> > Does anyone on the list have experience of a similar set-up? If you do,
> > I could use that as part of my argument to invest some time in sorting
> > these issues out. I really want to get the Windows version of R
> > installed for teaching because at the moment I subject my students to
> > the rather hostile world of an archaic UNIX session to run R - for them
> > at least.
> 
>   We have a couple of labs that are similar - we use Wyse Thin Client 
> Xterminals which boot Thinstation Linux from a server and then connect 
> to Windows 2003 TS machines using RDP or Ubuntu Linux boxes using XDMCP. 
> We don't use Citrix ICA.
> 
>   'Performance issues' will depend very much on what you are doing. As a 
> quick benchmark, last term we had 24 users in a lab all running Windows 
> and running Matlab, Firefox, that kind of stuff. One dual 2.6GHz Xeon 
> Dell with 4G Ram never went above 60% CPU usage. And we had another 
> three similar Dells sitting idle waiting for installation. Sessions with 
> R run regularly in these labs and we've never had 'performance issues'.

Thanks for this Barry - so we aren't talking about an incompatibility
per se with WTS.

>   So possibly your IT support are stalling. Do they regularly say "Have 
> you tried switching it off and on again?" in response to a support query 
> [1]?

Once you speak to the IT guys themselves they are incredibly helpful and
knowledgeable - getting to speak to them is more difficult.

> 
> Barry
> 
> [1] Catchphrase of the tech support guys in comedy series 'The IT Crowd'

That was a funny show...

Cheers,

G

-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
*  Note new Address, Telephone & Fax numbers from 6th April 2006  *
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson 
ECRC & ENSIS  [t] +44 (0)20 7679 0522
UCL Department of Geography   [f] +44 (0)20 7679 0565
Pearson Building  [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street  [w] http://www.ucl.ac.uk/~ucfagls/cv/
London, UK.   [w] http://www.ucl.ac.uk/~ucfagls/
WC1E 6BT.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%



Re: [R] www.r-project.org

2006-04-25 Thread Jonathan Baron
On 04/25/06 18:53, Romain Francois wrote:
> Dear R users and developers,
> 
> My question is addressed to both of you, so I chose R-help to post it.
> 
> Are there any plans to jazz up the main R website : http://www.r-project.org
> The look it has now has been the same for a long time and is kind of sad
> compared to other statistical packages' websites. Of course, the
> comparison is not fair, since companies are paying web designers to draw
> lollipop websites ...

I don't think it is sad at all.  I think it is one of the few
sites I visit that is accessible, is quick to load, conforms to
standards, uses my fonts instead of forcing me to get nose prints
on the monitor, is informative, has minimal mindless glitz, and
works in any browser.

The only thing I might change is to replace the frames with some
sort of CSS-based positioning.  HOWEVER, the new version of
Internet Explorer may totally destroy the usefulness of CSS, so
maybe it is better to leave things as they are for now.

Jon
--
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron
Editor: Judgment and Decision Making (http://journal.sjdm.org)



Re: [R] www.r-project.org

2006-04-25 Thread roger bos
While there is nothing about the r-project site that I would consider fancy,
it is pretty functional.  I would be interested to hear more about what you
hope to accomplish by re-doing the web site.  Fancy graphics may just slow
down the experience for those not on broadband.  After all, the r-help list
doesn't even like HTML in email, so it may not like too much fancy stuff on
its website either.




On 4/25/06, Romain Francois <[EMAIL PROTECTED]> wrote:
>
> Dear R users and developpers,
>
> My question is adressed to both of you, so I choose R-help to post it.
>
> Are there any plans to jazz up the main R website :
> http://www.r-project.org
> The look it has now has been the same for a long time and is kind of sad
> compared to other statistical packages' websites. Of course, the
> comparison is not fair, since companies pay web designers to draw
> lollipop websites ...
>
> My first idea was to organize some kind of web design contest.
> But I had a small talk with Friedrich Leisch about that, who said that
> I shouldn't expect too many competitors.
> So, what about creating a small team, creating a home page project, and
> then proposing it to the core team?
> It goes without saying: the core team has the final word.
>
> What do you think ? Who would like to play ?
>
> Romain
>
> --
> visit the R Graph Gallery : http://addictedtor.free.fr/graphiques
> mixmod 1.7 is released : http://www-math.univ-fcomte.fr/mixmod/index.php
> +---+
> | Romain FRANCOIS - http://francoisromain.free.fr   |
> | Doctorant INRIA Futurs / EDF  |
> +---+
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>




Re: [R] by() and CrossTable()

2006-04-25 Thread Marc Schwartz (via MN)
That does appear to work.

Thanks for the workaround Gabor.

I'll still be working on the other changes of course to make this more
"natural".

Regards,

Marc

On Tue, 2006-04-25 at 12:34 -0400, Gabor Grothendieck wrote:
> At least for this case I think you could get the effect without modifying
> CrossTable like this:
> 
> as.CrossTable <- function(x) structure(x, class = c("CrossTable", class(x)))
> print.CrossTable <- function(x) for(L in x) cat(L, "\n")
> 
> by(warpbreaks, warpbreaks$tension, function(x)
>   as.CrossTable(capture.output(CrossTable(x$wool, x$breaks > 30,
>   format="SPSS", fisher=TRUE))))
> 
> 
> On 4/25/06, Marc Schwartz (via MN) <[EMAIL PROTECTED]> wrote:
> > On Tue, 2006-04-25 at 11:07 -0400, Chuck Cleland wrote:
> > >I am attempting to produce crosstabulations between two variables for
> > > subgroups defined by a third factor variable.  I'm using by() and
> > > CrossTable() in package gmodels.  I get the printing of the tables first
> > > and then a printing of each level of the INDICES.  For example:
> > >
> > > library(gmodels)
> > >
> > > by(warpbreaks, warpbreaks$tension, function(x){CrossTable(x$wool,
> > > x$breaks > 30, format="SPSS", fisher=TRUE)})
> > >
> > >Is there a way to change this so that the CrossTable() output is
> > > labeled by the levels of the INDICES variable?  I think this has to do
> > > with how CrossTable returns output, because the following does what I 
> > > want:
> > >
> > > by(warpbreaks, warpbreaks$tension, function(x){summary(lm(breaks ~ wool,
> > > data = x))})
> > >
> > > thanks,
> > >
> > > Chuck
> >
> > Chuck,
> >
> > Thanks for your e-mail.
> >
> > Without digging deeper, I suspect that the problem here is that
> > CrossTable() has embedded formatted output within the body of the
> > function using cat(), as opposed to a two step process of creating a
> > results object, which then has a print method associated with it. This
> > would be the case in the lm() example that you have as well as many
> > other functions in R.
> >
> > I had not anticipated this particular use of CrossTable(), since it was
> > really focused on creating nicely formatted 2d tables using fixed width
> > fonts.
> >
> > That being said, I have had recent requests to enhance CrossTable()'s
> > functionality to:
> >
> > 1. Be able to assign the results of the internal processing to an object
> > and be able to assign that object without any other output. For example:
> >
> >  Results <- CrossTable(...)
> >
> > yielding no further output in the console.
> >
> >
> > 2. Facilitate LaTeX markup of the CrossTable() formatted output for
> > inclusion in LaTeX documents.
> >
> >
> > Both of the above would require me to fundamentally alter CrossTable()
> > to create a "CrossTable" class object, as opposed to the current
> > embedded output. I would then create a print.CrossTable() method
> > yielding the current output, as well as one to create LaTeX markup for
> > that application. The LaTeX output would likely need to support the
> > regular 'table' style as well as 'ctable' and 'longtable' styles, the
> > latter given the potential for long multi-page output.
> >
> > These changes should then support the type of use that you are
> > attempting here.
> >
> > These are on my TODO list for CrossTable() (along with the inclusion of
> > the measures of association recently discussed) and now that the dust
> > has settled from some recent abstract submission deadlines I can get
> > back to some of these things. I don't have a timeline yet, but will
> > forge ahead with these enhancements.
> >
> > One possible suggestion for you as an interim, at least in terms of some
> > nicely formatted n-way tables is the ctab() function in the 'catspec'
> > package by John Hendrickx.
> >
> > A possible example call would be:
> >
> > ctab(warpbreaks$tension, warpbreaks$wool, warpbreaks$breaks > 30,
> > type = c("n", "row", "column", "total"), addmargins = TRUE)
> >
> >
> > Unlike CrossTable() which is strictly 2d (though that may change in the
> > future), ctab() directly supports the creation of n-way tables, with
> > counts and percentages/proportions interleaved in the output. There are
> > no statistical tests applied and these would need to be done separately
> > using by().
> >
> >
> > Chuck, feel free to contact me offlist as other related issues may arise
> > or as you have other comments on this.
> >
> > Again, thanks for the e-mail.
> >
> > Best regards,
> >
> > Marc Schwartz
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
> >
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] www.r-project.org

2006-04-25 Thread Romain Francois
Dear R users and developers,

My question is addressed to both of you, so I chose R-help to post it.

Are there any plans to jazz up the main R website: http://www.r-project.org ?
The look it has now has been the same for a long time and is kind of sad
compared to other statistical packages' websites. Of course, the
comparison is not fair, since companies pay web designers to draw
lollipop websites ...

My first idea was to organize some kind of web design contest.
But I had a small talk with Friedrich Leisch about that, who said that
I shouldn't expect too many competitors.
So, what about creating a small team, creating a home page project, and
then proposing it to the core team?
It goes without saying: the core team has the final word.

What do you think ? Who would like to play ?

Romain

-- 
visit the R Graph Gallery : http://addictedtor.free.fr/graphiques
mixmod 1.7 is released : http://www-math.univ-fcomte.fr/mixmod/index.php
+---+
| Romain FRANCOIS - http://francoisromain.free.fr   |
| Doctorant INRIA Futurs / EDF  |
+---+



Re: [R] by() and CrossTable()

2006-04-25 Thread Gabor Grothendieck
At least for this case I think you could get the effect without modifying
CrossTable like this:

as.CrossTable <- function(x) structure(x, class = c("CrossTable", class(x)))
print.CrossTable <- function(x, ...) for(L in x) cat(L, "\n")

by(warpbreaks, warpbreaks$tension, function(x)
as.CrossTable(capture.output(CrossTable(x$wool, x$breaks > 30,
format="SPSS", fisher=TRUE))))
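
For readers puzzled about why this works: by() prints each element of its
result under the corresponding INDICES label using that element's own print
method, so wrapping the captured output lines in a classed object is enough.
A minimal stand-alone sketch of the same pattern (the function `noisy` and
the class name `captured` are hypothetical, not from gmodels):

```r
## Wrap a function that cat()s directly, so by() can label its output.
noisy <- function(x) cat("n =", length(x), " mean =", round(mean(x), 2), "\n")

as.captured <- function(x) structure(x, class = "captured")
print.captured <- function(x, ...) { for (L in x) cat(L, "\n"); invisible(x) }

by(warpbreaks, warpbreaks$tension,
   function(d) as.captured(capture.output(noisy(d$breaks))))
```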


On 4/25/06, Marc Schwartz (via MN) <[EMAIL PROTECTED]> wrote:
> On Tue, 2006-04-25 at 11:07 -0400, Chuck Cleland wrote:
> >I am attempting to produce crosstabulations between two variables for
> > subgroups defined by a third factor variable.  I'm using by() and
> > CrossTable() in package gmodels.  I get the printing of the tables first
> > and then a printing of each level of the INDICES.  For example:
> >
> > library(gmodels)
> >
> > by(warpbreaks, warpbreaks$tension, function(x){CrossTable(x$wool,
> > x$breaks > 30, format="SPSS", fisher=TRUE)})
> >
> >Is there a way to change this so that the CrossTable() output is
> > labeled by the levels of the INDICES variable?  I think this has to do
> > with how CrossTable returns output, because the following does what I want:
> >
> > by(warpbreaks, warpbreaks$tension, function(x){summary(lm(breaks ~ wool,
> > data = x))})
> >
> > thanks,
> >
> > Chuck
>
> Chuck,
>
> Thanks for your e-mail.
>
> Without digging deeper, I suspect that the problem here is that
> CrossTable() has embedded formatted output within the body of the
> function using cat(), as opposed to a two step process of creating a
> results object, which then has a print method associated with it. This
> would be the case in the lm() example that you have as well as many
> other functions in R.
>
> I had not anticipated this particular use of CrossTable(), since it was
> really focused on creating nicely formatted 2d tables using fixed width
> fonts.
>
> That being said, I have had recent requests to enhance CrossTable()'s
> functionality to:
>
> 1. Be able to assign the results of the internal processing to an object
> and be able to assign that object without any other output. For example:
>
>  Results <- CrossTable(...)
>
> yielding no further output in the console.
>
>
> 2. Facilitate LaTeX markup of the CrossTable() formatted output for
> inclusion in LaTeX documents.
>
>
> Both of the above would require me to fundamentally alter CrossTable()
> to create a "CrossTable" class object, as opposed to the current
> embedded output. I would then create a print.CrossTable() method
> yielding the current output, as well as one to create LaTeX markup for
> that application. The LaTeX output would likely need to support the
> regular 'table' style as well as 'ctable' and 'longtable' styles, the
> latter given the potential for long multi-page output.
>
> These changes should then support the type of use that you are
> attempting here.
>
> These are on my TODO list for CrossTable() (along with the inclusion of
> the measures of association recently discussed) and now that the dust
> has settled from some recent abstract submission deadlines I can get
> back to some of these things. I don't have a timeline yet, but will
> forge ahead with these enhancements.
>
> One possible suggestion for you as an interim, at least in terms of some
> nicely formatted n-way tables is the ctab() function in the 'catspec'
> package by John Hendrickx.
>
> A possible example call would be:
>
> ctab(warpbreaks$tension, warpbreaks$wool, warpbreaks$breaks > 30,
> type = c("n", "row", "column", "total"), addmargins = TRUE)
>
>
> Unlike CrossTable() which is strictly 2d (though that may change in the
> future), ctab() directly supports the creation of n-way tables, with
> counts and percentages/proportions interleaved in the output. There are
> no statistical tests applied and these would need to be done separately
> using by().
>
>
> Chuck, feel free to contact me offlist as other related issues may arise
> or as you have other comments on this.
>
> Again, thanks for the e-mail.
>
> Best regards,
>
> Marc Schwartz
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>



Re: [R] Running R on Windows 2000 Terminal Services

2006-04-25 Thread Barry Rowlingson
Gavin Simpson wrote:
> Dear list,
> 
> My employer uses a Windows 2000 Terminal Server-based system for its
> college-wide managed computer service - computers connect directly to
> the WTS servers for their sessions, using a Citrix ICA client. When I
> asked them to install R (Windows) on an older version of this service
> the IT guys installed it but pulled it for performance issues. I am
> trying to get them to try again but receiving little encouragement from
> them.

  'performance issues'? Well, if you have 100 students running MCMC 
simulations on one Windows 2000 TS box then you may well have 
'performance issues'!

  Perhaps the TS service isn't intended for people to do real computer 
work on, but is just for Office apps. Then you come along and want your 
students to do serious number crunching. At that point the MS Word 
writers experience what we used to call 'lag'.

> Does anyone on the list have experience of a similar set-up? If you do,
> I could use that as part of my argument to invest some time in sorting
> these issues out. I really want to get the Windows version of R
> installed for teaching because at the moment I subject my students to
> the rather hostile world of an archaic UNIX session to run R - for them
> at least.

  We have a couple of labs that are similar - we use Wyse Thin Client 
Xterminals which boot Thinstation Linux from a server and then connect 
to Windows 2003 TS machines using RDP or Ubuntu Linux boxes using XDMCP. 
We don't use Citrix ICA.

  'Performance issues' will depend very much on what you are doing. As a 
quick benchmark, last term we had 24 users in a lab all running Windows 
and running Matlab, Firefox, that kind of stuff. One dual 2.6GHz Xeon 
Dell with 4G Ram never went above 60% CPU usage. And we had another 
three similar Dells sitting idle waiting for installation. Sessions with 
R run regularly in these labs and we've never had 'performance issues'.

  So possibly your IT support are stalling. Do they regularly say "Have 
you tried switching it off and on again?" in response to a support query 
[1]?

Barry

[1] Catchphrase of the tech support guys in comedy series 'The IT Crowd'



Re: [R] by() and CrossTable()

2006-04-25 Thread Marc Schwartz (via MN)
On Tue, 2006-04-25 at 11:07 -0400, Chuck Cleland wrote:
>I am attempting to produce crosstabulations between two variables for 
> subgroups defined by a third factor variable.  I'm using by() and 
> CrossTable() in package gmodels.  I get the printing of the tables first 
> and then a printing of each level of the INDICES.  For example:
> 
> library(gmodels)
> 
> by(warpbreaks, warpbreaks$tension, function(x){CrossTable(x$wool, 
> x$breaks > 30, format="SPSS", fisher=TRUE)})
> 
>Is there a way to change this so that the CrossTable() output is 
> labeled by the levels of the INDICES variable?  I think this has to do 
> with how CrossTable returns output, because the following does what I want:
> 
> by(warpbreaks, warpbreaks$tension, function(x){summary(lm(breaks ~ wool, 
> data = x))})
> 
> thanks,
> 
> Chuck

Chuck,

Thanks for your e-mail.

Without digging deeper, I suspect that the problem here is that
CrossTable() has embedded formatted output within the body of the
function using cat(), as opposed to a two step process of creating a
results object, which then has a print method associated with it. This
would be the case in the lm() example that you have as well as many
other functions in R.

I had not anticipated this particular use of CrossTable(), since it was
really focused on creating nicely formatted 2d tables using fixed width
fonts.

That being said, I have had recent requests to enhance CrossTable()'s
functionality to:

1. Be able to assign the results of the internal processing to an object
and be able to assign that object without any other output. For example:

  Results <- CrossTable(...)

yielding no further output in the console.


2. Facilitate LaTeX markup of the CrossTable() formatted output for
inclusion in LaTeX documents.


Both of the above would require me to fundamentally alter CrossTable()
to create a "CrossTable" class object, as opposed to the current
embedded output. I would then create a print.CrossTable() method
yielding the current output, as well as one to create LaTeX markup for
that application. The LaTeX output would likely need to support the
regular 'table' style as well as 'ctable' and 'longtable' styles, the
latter given the potential for long multi-page output.
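
The two-step design described above can be sketched as follows; this is a
hedged illustration of the pattern (hypothetical class name `crosstab2`), not
the actual gmodels implementation:

```r
## Compute silently, return a classed object, print via a method.
crosstab2 <- function(x, y) {
  tab <- table(x, y)
  res <- list(tab = tab, chisq = chisq.test(tab))
  class(res) <- "crosstab2"
  invisible(res)                 # assignment produces no console output
}
print.crosstab2 <- function(x, ...) {
  print(x$tab)
  cat("Chi-squared p-value:", format.pval(x$chisq$p.value), "\n")
  invisible(x)
}

Results <- crosstab2(warpbreaks$wool, warpbreaks$breaks > 30)  # silent
Results                                                        # prints
```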

These changes should then support the type of use that you are
attempting here.

These are on my TODO list for CrossTable() (along with the inclusion of
the measures of association recently discussed) and now that the dust
has settled from some recent abstract submission deadlines I can get
back to some of these things. I don't have a timeline yet, but will
forge ahead with these enhancements.

One possible suggestion for you as an interim, at least in terms of some
nicely formatted n-way tables is the ctab() function in the 'catspec'
package by John Hendrickx.

A possible example call would be:

ctab(warpbreaks$tension, warpbreaks$wool, warpbreaks$breaks > 30, 
 type = c("n", "row", "column", "total"), addmargins = TRUE)


Unlike CrossTable() which is strictly 2d (though that may change in the
future), ctab() directly supports the creation of n-way tables, with
counts and percentages/proportions interleaved in the output. There are
no statistical tests applied and these would need to be done separately
using by().


Chuck, feel free to contact me offlist as other related issues may arise
or as you have other comments on this.

Again, thanks for the e-mail.

Best regards,

Marc Schwartz



[R] Running R on Windows 2000 Terminal Services

2006-04-25 Thread Gavin Simpson
Dear list,

My employer uses a Windows 2000 Terminal Server-based system for its
college-wide managed computer service - computers connect directly to
the WTS servers for their sessions, using a Citrix ICA client. When I
asked them to install R (Windows) on an older version of this service,
the IT guys installed it but pulled it for performance issues. I am
trying to get them to try again but receiving little encouragement from
them.

Does anyone on the list have experience of a similar set-up? If you do,
I could use that as part of my argument to invest some time in sorting
these issues out. I really want to get the Windows version of R
installed for teaching because at the moment I subject my students to
the rather hostile world of an archaic UNIX session to run R - for them
at least.

Thanks in advance,

Gav

-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
*  Note new Address, Telephone & Fax numbers from 6th April 2006  *
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson 
ECRC & ENSIS  [t] +44 (0)20 7679 0522
UCL Department of Geography   [f] +44 (0)20 7679 0565
Pearson Building  [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street  [w] http://www.ucl.ac.uk/~ucfagls/cv/
London, UK.   [w] http://www.ucl.ac.uk/~ucfagls/
WC1E 6BT.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%



[R] summary.lme: argument "adjustSigma"

2006-04-25 Thread Christoph Buser
Dear R-list

I have a question concerning the argument "adjustSigma" in the
function "lme" of the package "nlme". 

The help page says:

"the residual standard error is multiplied by sqrt(nobs/(nobs - 
npar)), converting it to a REML-like estimate."

Having a look into the code I found:

stdFixed <- sqrt(diag(as.matrix(object$varFix)))

if (object$method == "ML" && adjustSigma == TRUE) {
stdFixed <- sqrt(object$dims$N/(object$dims$N - length(stdFixed))) * 
stdFixed
}

tTable <- data.frame(fixed, stdFixed, object$fixDF[["X"]], 
fixed/stdFixed, fixed)


To my understanding, only the standard errors of the fixed
coefficients are adjusted, not the residual standard error.

Therefore only the tTable in the output is affected by the
argument "adjustSigma", but not the estimate of the residual
standard error (see the artificial example below).

Could someone explain to me whether there is an error in my
understanding of the help page or of the R code?
Thank you very much.

Best regards,

Christoph Buser

--
Christoph Buser <[EMAIL PROTECTED]>
Seminar fuer Statistik, LEO C13
ETH Zurich  8092 Zurich  SWITZERLAND
phone: x-41-44-632-4673 fax: 632-1228
http://stat.ethz.ch/~buser/
--


Example
---

set.seed(1)
dat <- data.frame(y = rnorm(16), fac1 = rep(1:4, each = 4),
  fac2 = rep(1:2,each = 8))

telme <- lme(y ~ fac1, data = dat, random = ~ 1 | fac2, method = "ML")
summary(telme)
summary(telme, adjustSigma = FALSE)
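
If this reading is correct, a quick check along the following lines (a hedged
sketch, assuming the summary object exposes `sigma` and `tTable` components)
should show an unchanged residual standard error and rescaled standard errors
only:

```r
s1 <- summary(telme)                        # adjustSigma = TRUE (default)
s2 <- summary(telme, adjustSigma = FALSE)
s1$sigma == s2$sigma                        # residual SE unchanged?
s1$tTable[, "Std.Error"] / s2$tTable[, "Std.Error"]  # sqrt(N/(N - p))?
```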



Re: [R] Kullback-Leibler

2006-04-25 Thread Gaye Hattem

On 23 Apr 2006 Tolga Uzner wrote:

> Does anyone have an implementation of KL apart from what is in reldist
> ? Something with flexibility to take in empirical data for two sets,
> or their CDF or their density, flagged separately for each ?

There's a function in the package flexmix called KLdiv for computing the
Kullback-Leibler divergence; it takes a matrix of the density values of
the distributions as input.
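
A minimal sketch of that usage (assuming KLdiv()'s matrix method takes
columns of density values evaluated on a common grid; see ?KLdiv in flexmix
for the exact interface):

```r
library(flexmix)
x <- seq(-6, 6, length.out = 512)
dens <- cbind(p = dnorm(x, 0, 1), q = dnorm(x, 1, 1.5))
KLdiv(dens)   # pairwise divergence estimates between the columns
```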



Re: [R] nlminb( ) : one compartment open PK model

2006-04-25 Thread Spencer Graves
  I've had pretty good luck solving problems like this when the post 
included a simple, reproducible example, as suggested in the posting 
guide (www.R-project.org/posting-guide.html).  Without that, I'm close 
to clueless.  I've encountered this kind of error myself before, and I 
think the way I solved it was just simplifying the example in steps 
until the problem went away.  By doing this, I could isolate the source 
of the problem.

  Have you tried "trace=1", as described in the nlminb help page?  With 
this, nlminb prints "The value of the objective function and the 
parameters [at] every trace'th iteration."  Also, have you worked all 
the examples in the nlminb help page?  If that didn't show me how to 
solve my problem, I think I'd next try having nlminb estimate the 
gradient and hessian, rather than supplying them myself.
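
As a concrete (and entirely hypothetical) illustration of that last
suggestion, omitting the `gradient` argument makes nlminb() approximate the
gradient numerically, and `control = list(trace = 1)` shows whether the
parameters actually move:

```r
## Simple normal negative log-likelihood, simulated data.
negll <- function(p, x) -sum(dnorm(x, mean = p[1], sd = p[2], log = TRUE))
set.seed(1)
x <- rnorm(50, mean = 2, sd = 0.5)
fit <- nlminb(start = c(mu = 0, sigma = 1), objective = negll, x = x,
              lower = c(-Inf, 1e-8), control = list(trace = 1))
fit$par   # estimates should have moved away from the starting values
```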

  hope this helps,
  spencer graves

Greg Tarpinian wrote:

> All,
> 
> I have been able to successfully use the optim( ) function with
> "L-BFGS-B" to find reasonable parameters for a one-compartment
> open pharmacokinetic model.  My loss function in this case was
> squared error, and I made no assumptions about the distribution
> of the plasma values.  The model appeared to fit pretty well.
> 
> Out of curiosity, I decided to try to use nlminb( ) applied to
> a likelihood function that assumes the plasma values are normally
> distributed on the semilog scale (ie, modeling log(conc) over
> time).  nlminb( ) keeps telling me that it has converged, but 
> the estimated parameters are always identical to the initial
> values.  I am certain that I have committed "ein dummheit"
> somewhere in the following code, but not sure what...  Any help
> would be greatly appreciated.
> 
> Kind regards,
> 
> Greg
> 
> 
> 
> model2 <- function(parms, dose, time, log.conc)
> {
>   exp1 <- exp(-parms[1]*time)
>   exp2 <- exp(-parms[2]*time)
>   right.hand <- log(exp1 - exp2)
>   numerator <- dose*parms[1]*parms[2]
>   denominator <- parms[3]*(parms[2] - parms[1])
>   left.hand <- log(numerator/(denominator))
>   pred <- left.hand + right.hand
>   
>   # defining the distribution of the values
>   const <- 1/(sqrt(2*pi)*parms[4])
>   exponent <- (-1/(2*(parms[4]^2)))*(log.conc - pred)^2
>   likelihood <- const*exp(exponent)
>   
>   #defining the merit function
>   -sum(log(likelihood))
> }
> 
> deriv2.1compart <- deriv( expr = ~ -log(1/(sqrt(2*pi)*S)*
>   exp((-1/(2*(S^2)))*(log.conc-(log(dose*Ke*Ka/(Cl*(Ka-Ke)))
>   +log(exp(-Ke*time)-exp(-Ka*time))))^2)),
> namevec = c("Ke","Ka","Cl","S"),
> function.arg = function(Ke, Ka, Cl, S, dose, time, log.conc) NULL )
>   
> gradient2.1compart <- function(parms, dose, time, log.conc)
> {
> Ke <- parms[1]; Ka <- parms[2]; Cl <- parms[3]; S <- parms[4]
> colSums(attr(deriv2.1compart(Ke, Ka, Cl, S, dose, time, log.conc), 
> "gradient"))
> }
> 
> attach(foo.frame)
> inits <- c(Ke = .5,
>  Ka = .5, 
>  Cl = 1,
>  S = 1)
> 
> #Trying out the code
> nlminb(start = inits, 
>  objective = model2,
>  gradient = gradient2.1compart,
>  control = list(eval.max = 5000, iter.max = 5000),
>  lower = rep(0,4),
>  dose = DOSE,
>  time = TIME,
>  log.conc = log(RESPONSE))
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



Re: [R] nls and factor

2006-04-25 Thread Manuel Gutierrez
Thanks, it was actually p.249, at least in my MASS3,
but that solved my doubt.

I have another doubt: can this factor interact with
one of the parameters in the model?

My problem is basically a Michaelis-Menten term, where
this factor determines a different Km. The rest of the
parameters in the model are the same. But I don't know
how to write the nls formula, or if it is possible.

This is a toy example, B and A are the two "factors",
conc is the concentration and t is the temperature:

## Generate independent variables
Bconc<-runif(30,0.1,10)
Aconc<-runif(30,0.1,10)
At<-runif(30,1,30)
Bt<-runif(30,1,30)


## These are the parameters I want to calculate from
## my real data 
BKm<-1
AKm<-0
EBoth<--0.41

# These are my simulated dependent variables
yB<-100*exp(EBoth*Bt)*Bconc/(BKm+Bconc)+rnorm(30,0,1)
yA<-75*exp(EBoth*At)*Aconc/(AKm+Aconc)+rnorm(30,0,1)

#The separate models
BModel<-nls(Response~lev*exp(Ev*t)*conc/(Km+conc),data=list(Response=yB,t=Bt,conc=Bconc),start=list(lev=90,Ev=-0.5,Km=0.8),trace=TRUE)

AModel<-nls(Response~lev*exp(Ev*t)*conc/(Km+conc),data=list(Response=yA,t=At,conc=Aconc),start=list(lev=90,Ev=-0.5,Km=0.8),trace=TRUE)

## I want to obtain a combined model of the form:
## Y=Intercept[1:2]*exp(Eboth*t)*conc/(Km[1:2]+conc)
## where I have a common E but two intercepts and two
## Kms (one of them should in fact be zero)

yBoth<-c(yB,yA)
concBoth<-c(Bconc,Aconc)
tBoth<-c(At,Bt)
AorB<-as.factor(c(rep(0,length(yA)),rep(1,length(yB))))

## Amongst other things I've tried 
FullModel<-nls(Response~lev[AorB]*exp(Ev*t)*conc/(Km[AorB]+conc),data=list(Response=yBoth,t=tBoth,conc=concBoth),start=list(lev=c(90,70),Ev=-0.5,Km=c(0.8,0)),trace=TRUE)

## but I get a singular gradient
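
For what it's worth, a hedged sketch of indexed parameters in an nls()
formula with simulated data (all values hypothetical); note that starting a
Km at exactly 0 sits on the edge of the parameter space and can itself cause
a singular gradient, so both starting Km values are kept strictly positive
here:

```r
set.seed(2)
g    <- factor(rep(c("A", "B"), each = 30))
conc <- runif(60, 0.1, 10)
temp <- runif(60, 1, 30)
y    <- ifelse(g == "A", 75, 100) * exp(-0.1 * temp) *
        conc / (ifelse(g == "A", 0.2, 1) + conc) + rnorm(60, 0, 0.5)

## lev[g] and Km[g] give one parameter per level of g.
fit <- nls(y ~ lev[g] * exp(Ev * temp) * conc / (Km[g] + conc),
           start = list(lev = c(90, 70), Ev = -0.2, Km = c(0.5, 0.5)),
           trace = TRUE)
summary(fit)
```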

Any other pointers,
thanks
Manuel

 --- Prof Brian Ripley <[EMAIL PROTECTED]>
wrote:

> On Thu, 20 Apr 2006, Manuel Gutierrez wrote:
> 
> > Is it possible to include a factor in an nls
> formula?
> 
> Yes.  What do you intend by it?  If you mean what it
> would mean for a lm 
> formula, you need A[a] and starting values for A.
> 
> There's an example on p.219 of MASS4.
> 
> > I've searched the help pages without any luck so I
> > guess it is not feasible.
> > I've given it a few attempts without luck getting
> the
> > message:
> > + not meaningful for factors in:
> > Ops.factor(independ^EE, a)
> >
> > This is a toy example, my realworld case is much
> more
> > complicated (and can not be solved linearizing an
> > using lm)
> > a<-as.factor(c(rep(1,50),rep(0,50)))
> > independ<-rnorm(100)
> > respo<-rep(NA,100)
> > respo[a==1]<-(independ[a==1]^2.3)+2
> > respo[a==0]<-(independ[a==0]^2.1)+3
> >
> > nls(respo~independ^EE+a,start=list(EE=1.8),trace=TRUE)
> >
> > Any pointers welcomed
> > Many Thanks,
> > Manu
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
> >
> 
> --
> Brian D. Ripley, [EMAIL PROTECTED]
> Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
> University of Oxford,     Tel: +44 1865 272861 (self)
> 1 South Parks Road,            +44 1865 272866 (PA)
> Oxford OX1 3TG, UK        Fax: +44 1865 272595
>



[R] by() and CrossTable()

2006-04-25 Thread Chuck Cleland
   I am attempting to produce crosstabulations between two variables for 
subgroups defined by a third factor variable.  I'm using by() and 
CrossTable() in package gmodels.  I get the printing of the tables first 
and then a printing of each level of the INDICES.  For example:

library(gmodels)

by(warpbreaks, warpbreaks$tension, function(x){CrossTable(x$wool, 
x$breaks > 30, format="SPSS", fisher=TRUE)})

   Is there a way to change this so that the CrossTable() output is 
labeled by the levels of the INDICES variable?  I think this has to do 
with how CrossTable returns output, because the following does what I want:

by(warpbreaks, warpbreaks$tension, function(x){summary(lm(breaks ~ wool, 
data = x))})

thanks,

Chuck

-- 
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 512-0171 (M, W, F)
fax: (917) 438-0894



[R] lme: how to compare random effects in two subsets of data

2006-04-25 Thread Laurent Fanchon
Dear R-gurus,

I have an interpretation problem regarding lme models.

I am currently working on dog locomotion, particularly on some variation
factors.
I am trying to figure out which of the two limbs generates more dispersed data.

I record a value called Peak around 20 times per limb within each record.
I repeat the records within a single day, and on several days.

I tried to build two models, one for each limb :
Dog.Left <- lme (fixed=Peak~1, data=Loco, 
subset=Limb=="Left",random=~1|Dog/Day/Record)
Dog.Right <- lme (fixed=Peak~1, data=Loco, 
subset=Limb=="Right",random=~1|Dog/Day/Record)

This allows me to determine the variance attributable to each factor.
Record represents the within-day variation; Day represents the
between-day variation.

This gives the following results :
VarCorr(Dog.Left)
            Variance  StdDev
Dog =       pdLogChol(1)
(Intercept) 564.55587 23.760384
Day =       pdLogChol(1)
(Intercept)  54.63027  7.391229
Record =    pdLogChol(1)
(Intercept)  23.29377  4.826362
Residual     27.46464  5.240672

VarCorr(Dog.Right)
            Variance  StdDev
Dog =       pdLogChol(1)
(Intercept) 552.11246 23.497074
Day =       pdLogChol(1)
(Intercept)  70.72088  8.409571
Record =    pdLogChol(1)
(Intercept)  21.94594  4.684649
Residual     29.68476  5.448373

This shows that the variance might be different for each limb.
For example, the variance attributable to Day might be higher for the
right limb.

This is the first part of my interpretation, and I hope it is right.
What do you think?

Then, the question is: are these differences statistically significant?
I am not sure how to investigate this question.

I tried to compare several models :
model1 <- lme (fixed=Peak~Limb, data=Loco,
random=list(Dog=~Limb,Day=~Limb,Record=~Limb))  # the most complicated model
model2<-lme (fixed=Peak~Limb, data=Loco, 
random=list(Dog=~Limb,Day=~Limb,Record=~1))
anova (model1,model2) showed no difference
model3<-lme (fixed=Peak~Limb, data=Loco, 
random=list(Dog=~Limb,Day=~1,Record=~1))
anova (model2,model3) showed a significant difference <0.0001

model2 seems to be the best model.
Does it mean that the difference in variance between the two limbs is 
significant for the between-day variation and not significant for the 
within-day variation?
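Comparing nested models with anova() is indeed the standard likelihood-ratio approach for this. A minimal sketch on a built-in dataset (Orthodont, shipped with the nlme package, standing in for the dog data; the variable names of course differ from yours):

```r
library(nlme)

# Model with a random intercept and a random slope per subject
m1 <- lme(distance ~ age, data = Orthodont, random = ~ age | Subject)

# Simpler nested model: random intercept only
m2 <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject)

# Likelihood-ratio test: a small p-value favours keeping the random slope
anova(m1, m2)
```

As with the models above, a significant result says the extra random term improves the fit; it does not by itself say which group has the larger variance, and for that one still reads the VarCorr() estimates.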

Finally VarCorr(model2) gives:
            Variance     StdDev    Corr
Dog =       pdLogChol(Limb)
(Intercept) 567.553021   23.823371 (Intr)
LimbRight     7.249064    2.692409 -0.166
Day =       pdLogChol(Limb)
(Intercept)  53.888346    7.340868 (Intr)
LimbRight     4.863394    2.205310  0.363
Record =    pdLogChol(1)
(Intercept)  22.418031    4.734768
Residual     28.740451    5.361012

I am not sure I understand this output.
The overall variance attributable to Day is roughly 53.89 (random effect 
on the intercept), and the difference between the two limbs contributes 
a further variance of roughly 4.86 (random effect on the slope).
Is that right?
But does this also make it possible to determine which limb had the 
highest variance? I guess if I change the order of the Limb factor 
(Right

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Handling large dataset & dataframe

2006-04-25 Thread Sachin J
Mark:
   
  Here is the information I didn't provide in my earlier post. R version is 
R2.2.1 running on Windows XP.  My dataset has 16 variables with following data 
type.
  ColNumber:   1  2  3  ...16
  Datatypes:

"numeric","numeric","numeric","numeric","numeric","numeric","character","numeric","numeric","character","character","numeric","numeric","numeric","numeric","numeric","numeric","numeric"
   
  Variable (2) which is numeric and variables denoted as character are to be 
treated as dummy variables in the regression. 
   
  A search of the R-help list suggested I can also use read.csv with the colClasses 
option, instead of using scan() and then converting to a dataframe as you 
suggested. I am trying both methods but am unable to resolve a syntax 
error. 
   
  >coltypes<- 
c("numeric","factor","numeric","numeric","numeric","numeric","factor","numeric","numeric","factor","factor","numeric","numeric","numeric","numeric","numeric","numeric","numeric")
   
  >mydf <- read.csv("C:/temp/data.csv", header=FALSE, colClasses = coltypes, 
strip.white=TRUE)
   
  ERROR: Error in scan(file = file, what = what, sep = sep, quote = quote, dec 
= dec,  : 
scan() expected 'a real', got 'V1'
   
  No idea what's the problem.
   
  AS PER YOUR SUGGESTION I TRIED scan() as follows:
   
  
>coltypes<-c("numeric","factor","numeric","numeric","numeric","numeric","factor","numeric","numeric","factor","factor","numeric","numeric","numeric","numeric","numeric","numeric","numeric")
  >x<-scan(file = 
"C:/temp/data.dbf",what=as.list(coltypes),sep=",",quiet=TRUE,skip=1) 
  >names(x)<-scan(file = "C:/temp/data.dbf",what="",nlines=1, sep=",") 
  >x<-as.data.frame(x) 
   
  This is working fine but x has no data in it and contains
  > x
   
   [1] X._.   NA.NA..1  NA..2  NA..3  NA..4  NA..5  NA..6  NA..7  NA..8  
NA..9  NA..10 NA..11
[14] NA..12 NA..13 NA..14 NA..15 NA..16
<0 rows> (or 0-length row.names)
   
  Please let me know how to properly use scan or colClasses option.
   
  Sachin
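Two things look suspicious in the calls above (a guess, since I cannot see data.csv): read.csv() was given header=FALSE although the file apparently has a header line starting with 'V1', which matches the "got 'V1'" error; and scan()'s what= argument expects a list of prototype values such as numeric(0), not the strings "numeric" etc., so as.list(coltypes) makes every field character. A self-contained sketch with a made-up three-column file standing in for data.csv:

```r
# A tiny stand-in for the real file: one header line, then two data rows
txt <- "a,b,c\n1,red,2.5\n3,blue,4.0\n"

# read.csv: header=TRUE consumes the first line; one class per column
df <- read.csv(textConnection(txt), header = TRUE,
               colClasses = c("numeric", "factor", "numeric"),
               strip.white = TRUE)
str(df)

# scan: 'what' is a list of prototype values, not type-name strings
x <- scan(textConnection(txt),
          what = list(numeric(0), character(0), numeric(0)),
          sep = ",", skip = 1, quiet = TRUE)
names(x) <- c("a", "b", "c")
x <- as.data.frame(x)
str(x)
```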

   
   
  

Mark Stephens <[EMAIL PROTECTED]> wrote:
  Sachin,
With your dummies stored as integer, the size of your object would appear
to be 350,000 * (4*250 + 8*16) bytes = 376MB.
You said "PC" but did not provide R version information, assuming windows
then ...
With 1GB RAM you should be able to load a 376MB object into memory. If you
can store the dummies as 'raw' then object size is only 126MB.
You don't say how you attempted to load the data. Assuming your input data
is in text file (or can be) have you tried scan()? Setup the 'what' argument
with length 266 and make sure the dummy columns are set to integer() or
raw(). Then x = scan(...); class(x) = "data.frame".
What is the result of memory.limit()? If it is 256MB or 512MB, then try
starting R with --max-mem-size=800M (I forget the syntax exactly). Leave a
bit of room below 1GB. Once the object is in memory R may need to copy it
once, or a few times. You may need to close all other apps in memory, or
send them to swap.
I don't really see why your data should not fit into the memory you have.
Purchasing an extra 1GB may help. Knowing the object size calculation (as
above) should help you gauge whether it is worth it.
Have you used process monitor to see the memory growing as R loads the
data? This can be useful.
If all the above fails, then consider 64-bit and purchasing as much memory
as you can afford. R can use over 64GB RAM+ on 64bit machines. Maybe you can
hire some time on a 64-bit server farm - i heard its quite cheap but never
tried it myself. You shouldn't need to go that far with this data set
though.
Hope this helps,
Mark


Hi Roger,

I want to carry out regression analysis on this dataset. So I believe I
can't read the dataset in chunks. Any other solution?

TIA
Sachin


roger koenker < [EMAIL PROTECTED]> wrote:
You can read chunks of it at a time and store it in sparse matrix
form using the packages SparseM or Matrix, but then you need
to think about what you want to do with it least squares sorts
of things are ok, but other options are somewhat limited...


url: www.econ.uiuc.edu/~roger Roger Koenker
email [EMAIL PROTECTED] Department of Economics
vox: 217-333-4558 University of Illinois
fax: 217-244-6678 Champaign, IL 61820


On Apr 24, 2006, at 12:41 PM, Sachin J wrote:

> Hi,
>
> I have a dataset consisting of 350,000 rows and 266 columns. Out
> of 266 columns 250 are dummy variable columns. I am trying to read
> this data set into R dataframe object but unable to do it due to
> memory size limitations (object size created is too large to handle
> in R). Is there a way to handle such a large dataset in R.
>
> My PC has 1GB of RAM, and 55 GB harddisk space running windows XP.
>
> Any pointers would be of great help.
>
> TIA
> Sachin
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting g

Re: [R] Help needed

2006-04-25 Thread Richard M. Heiberger
> x <- rnorm(100)
> xx <- cut(x,3)
> levels(xx)
[1] "(-2.37,-0.716]" "(-0.716,0.933]" "(0.933,2.58]"  
> as.numeric(xx)
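Applied to the BMI breaks from the original question, cut() can also return the integer codes directly (labels = FALSE is standard base R; the sample values here are invented):

```r
x <- c(15, 22, 30, 19, 26)
grp <- cut(x, breaks = c(14, 20, 25, 57), labels = FALSE)
grp  # 1 2 3 1 3 -- one integer code per interval
```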

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Overlapping assignment

2006-04-25 Thread Rob Steele
Prof Brian Ripley wrote:
> On Mon, 24 Apr 2006, Rob Steele wrote:
> 
>> Is it valid to assign a slice of an array or vector to an overlapping
>> slice of the same object?  R seems to do the right thing but I can't
>> find anything where it promises to.
> 
> Yes.  Remember R is a vector language, so it is not doing this 
> index-by-index.  Also, what you are doing is
> 
> a <- `[<-`(a, 4:12, 1:9)
> 
> which should make this easier to understand: you get a new object, and you 
> don't expect
> 
> x <- x^2
> 
> to be a problem, do you?
> 
>>> a <- 1:12
>>> a[4:12] <- a[1:9]
>>> a
>>  [1] 1 2 3 1 2 3 4 5 6 7 8 9
>>
>>> b <- 1:12
>>> b[1:9] <- b[4:12]
>>> b
>>  [1]  4  5  6  7  8  9 10 11 12 10 11 12
>>

Thank you very much.  I understand that '[<-' is a function that 
produces a new "a" but I don't see that it necessarily follows that 
overlapping assignment is valid.  Even if R worked index-by-index (which 
at some level it must be doing) x <- x^2 would behave as expected.

It seems to me that R must either be smart about whether to copy left to 
right or right to left or else must copy the original object and then 
perform the destination indexing on the copy and the source indexing on 
the original.  Either way would keep it from overwriting a cell before 
it has a chance to copy it to its new location.  And if R does it by 
being smart about left and right, how does it handle index vectors that 
are out of order?  For example:

 > a <- 1:12
 > a[12:4] <-  a[9:1]
 > a
  [1] 1 2 3 1 2 3 4 5 6 7 8 9

It looks like it must be working with both the original and the copy 
during the assignment.
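That reading is correct in effect: the right-hand side is evaluated to a complete vector before any element of the target is modified, so overlapping (even out-of-order) indices cannot clobber source values. A quick check:

```r
a <- 1:12
rhs <- a[1:9]        # the RHS is already a finished vector here
a[4:12] <- rhs
a                    # 1 2 3 1 2 3 4 5 6 7 8 9

b <- 1:12
b[12:4] <- b[9:1]    # out-of-order destination indices
b                    # 1 2 3 1 2 3 4 5 6 7 8 9, matching the example above
```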

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Help needed

2006-04-25 Thread Anamika Chaudhuri
Hi,
   
  Thanks but I had already fixed that part.
  My problem is I am not getting a value for the maxcls or mincls:
  here is the code:
   
  > dset1<-cbind(AGE,BMI,DEATH)
> BMIGRP<-cut(BMI,breaks=c(14,20,25,57),right=TRUE)
> AGEGRP<-floor(AGE/10)-2
> dset<-cbind(AGEGRP,BMIGRP,DEATH)
> maxage<-max(dset[,1])
> minage<-min(dset[,1])
> maxcls<-max(dset[,2])
> maxcls
[1] NA
   
  After performing the cut function I get the different categories, which are 
labelled as (a,b] by default. I wanted to change them to categories 1, 2 and 3 
respectively so that I get a value for maxcls and mincls.
   
  How do I change the 3 categories (14,20],(20,25],(25,57] to 1,2,3 
respectively?
   
  Thanks,
  Anamika
  

"Richard M. Heiberger" <[EMAIL PROTECTED]> wrote:
  The lines
> #maxcls<-dset[,2]
> #mincls<-dset[,2]
which you have shown commented out select a full column.
You probably want the min and max of that column.

With your definitions, mincls:maxcls has the same type of behavior as
(1:3):(1:3)



-

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] [O/T] undergrads and R

2006-04-25 Thread John Fox
Dear Ales,

> -Original Message-
> From: Ales Ziberna [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, April 25, 2006 1:15 AM
> To: Richard M. Heiberger
> Cc: r-help@stat.math.ethz.ch; John Fox
> Subject: Re: [R] [O/T] undergrads and R
> 
> I will be also using R commander with undergrad students, who 
> actually already have some experience with R, however they 
> have been using it more as programing language than a 
> statistical tool and I want to get them more familiar with 
> this aspect of R.
> 
> First, when I had my first look at R commander, I really 
> liked it, however, when I did a little more experiments, I 
> found it quite frustrating at times, especially by the fact 
> that when computing a certain statistics (for example mean), 
> you can select only one variable at the time. Are there any 
> plans to change this, so that a number of variables could be 
> chosen? 

This was a deliberate choice, made to discourage students from computing
statistics without thinking about them, but perhaps it's wrongheaded. I'd be
interested to hear what people think. 

It would be very simple to change the Numerical Summaries dialog to permit
more than one variable to be selected (in fact you could do it and recompile
the package if you wished). Are you referring only to the "Statistics ->
Summaries -> Numerical summaries" dialog, or are there other places where
you'd like more than one variable to be selected? Would you like a check-box
to select all numeric variables?

More generally, I'm open to considering suggestions for improving the Rcmdr.

> I would also prefer to have the variables sorted the 
> same way as they are in the data frame and and not 
> alphabetically. 

See the sort.names Rcmdr option, described in ?Commander.

Regards,
 John

> If these two (what I believe minor) things 
> could be improved, I would find R commander much more usable.
> 
> Otherwise, I find it a great package, especially when working 
> with social sciences students.
> 
> Does anyone else have similar feelings?
> 
> Best regards,
> Ales Ziberna
> 
> Richard M. Heiberger pravi:
> > This semester for the first time I have been using the 
> combination of 
> > R, R Commander (John Fox's package providing a menu-driven 
> interface 
> > to R), and RExcel (Erich Neuwirth's package for interfacing R with 
> > Excel).  The audience is the introductory Statistics class for 
> > Business undergradutes.  The short summary is that I think the 
> > combination works well for this audience.
> > 
> > I will be talking on my experience at the useR! conference 
> in June.  I 
> > added several additional menu items to Rcmdr for our group.  I sent 
> > the January ones (prior to the beginning of the semester) 
> to John Fox in January.
> > I will send another batch of menu items, those constructed 
> during the 
> > semester, as soon as the semester is complete.
> > 
> > The goal is to hide most of the programming from the students.  But 
> > not all of it.  I think it is very important for any user of a menu 
> > system to have at least a rudimentary idea of the 
> programming steps behind the menu.
> > Rcmdr supports this goal since it functions by generating R 
> language 
> > statements from the menu selections and displaying the 
> generated statements.
> > For example, I will casually change the cex or ylim of a generated 
> > plot statement.  I post the script window (generated and edited 
> > statements) from each class to the course website.  I do 
> not post the output window.
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
> >

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] regression modeling

2006-04-25 Thread bogdan romocea
There is an aspect, worthy of careful consideration, you don't seem to
be aware of. I'll ask the question for you: How does the
explanatory/predictive potential of a dataset vary as the dataset gets
larger and larger?


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> Sent: Monday, April 24, 2006 12:45 PM
> To: r-help
> Subject: [R] regression modeling
>
> Hi, there:
> I am looking for a regression modeling (like regression
> trees) approach for
> a large-scale industry dataset. Any suggestion on a package
> from R or from
> other sources which has a decent accuracy and scalability? Any
> recommendation from experience is highly appreciated.
>
> Thanks,
>
> Weiwei
>
> --
> Weiwei Shi, Ph.D
>
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Handling large dataset & dataframe

2006-04-25 Thread Mark Stephens
Sachin,
With your dummies stored as integer,  the size of your object would appear
to be 350,000 * (4*250 + 8*16) bytes =  376MB.
You said "PC" but did not provide R version information,  assuming windows
then ...
With 1GB RAM you should be able to load a 376MB object into memory.   If you
can store the dummies as 'raw' then object size is only 126MB.
You don't say how you attempted to load the data. Assuming your input data
is in text file (or can be) have you tried scan()? Setup the 'what' argument
with length 266 and make sure the dummy columns are set to integer() or
raw().  Then   x = scan(...);  class(x) = "data.frame".
What is the result of memory.limit()?  If it is 256MB or 512MB, then try
starting R with --max-mem-size=800M  (I forget the syntax exactly). Leave a
bit of room below 1GB.  Once the object is in memory R may need to copy it
once, or a few times. You may need to close all other apps in memory,  or
send them to swap.
I don't really see why your data should not fit into the memory you have.
Purchasing an extra 1GB may help.  Knowing the object size calculation (as
above) should help you gauge whether it is worth it.
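The byte arithmetic above can be checked directly with object.size(); a smaller row count is used here so it runs instantly, and the scaling in rows is linear:

```r
n <- 10000                     # rows; scale to 350,000 for the real data
ints <- matrix(0L, n, 250)     # integer dummies: 4 bytes per cell
nums <- matrix(0,  n, 16)      # doubles: 8 bytes per cell
object.size(ints)              # ~ n * 250 * 4 bytes plus a small header
object.size(nums)              # ~ n * 16 * 8 bytes plus a small header
```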
Have you used process monitor to see the memory growing as R loads the
data?  This can be useful.
If all the above fails,  then consider 64-bit and purchasing as much memory
as you can afford. R can use over 64GB RAM+ on 64bit machines. Maybe you can
hire some time on a 64-bit server farm - i heard its quite cheap but never
tried it myself.  You shouldn't need to go that far with this data set
though.
Hope this helps,
Mark


Hi Roger,

 I want to carry out regression analysis on this dataset. So I believe I
can't read the dataset in chunks. Any other solution?

 TIA
 Sachin


roger koenker < [EMAIL PROTECTED]> wrote:
 You can read chunks of it at a time and store it in sparse matrix
form using the packages SparseM or Matrix, but then you need
to think about what you want to do with it least squares sorts
of things are ok, but other options are somewhat limited...


url: www.econ.uiuc.edu/~roger Roger Koenker
email [EMAIL PROTECTED] Department of Economics
vox: 217-333-4558 University of Illinois
fax: 217-244-6678 Champaign, IL 61820


On Apr 24, 2006, at 12:41 PM, Sachin J wrote:

> Hi,
>
> I have a dataset consisting of 350,000 rows and 266 columns. Out
> of 266 columns 250 are dummy variable columns. I am trying to read
> this data set into R dataframe object but unable to do it due to
> memory size limitations (object size created is too large to handle
> in R). Is there a way to handle such a large dataset in R.
>
> My PC has 1GB of RAM, and 55 GB harddisk space running windows XP.
>
> Any pointers would be of great help.
>
> TIA
> Sachin
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] persp plot increasing 'x' and 'y' values expected

2006-04-25 Thread Michael Dondrup
Hmm,

well, of course this is tested, and it produces a plot, but not the correct 
one ;) Sorry, that was too quick 


Am Tuesday 25 April 2006 12:51 schrieb Duncan Murdoch:
> On 4/25/2006 6:23 AM, Michael Dondrup wrote:
> > Hi,
> > of course it can't because the number of unique values is different. You
> > need a unique _combination_ of x,y and hopefully you don't have different
> > z values for any such a pair. try:
> >
> > xyz <- unique(cbind(x,y,z))
> > persp(xyz)
> >
> > If you still get the err, then you had different measurements for the
> > same point.
>
> I think you and Andreas want scatterplot3d (from a contributed package
> of the same name), not persp.  Persp takes data in a very
> particular format.  See the man page.
>
> In general, it's a good idea to test your suggestions before posting
> them. Yours wouldn't work, because x, y and z *must not* be the same
> length in persp.
>
> Duncan Murdoch
>
> > Am Tuesday 25 April 2006 12:24 schrieb [EMAIL PROTECTED]:
> >> hi peter,
> >>
> >> thank you for your advice.
> >> ok, i see the problem, but if i do
> >>
> >> x<-unique(data$x)
> >> y<-unique(data$y)
> >> z<-matrix(unique(data$z),length(y),length(x))
> >>
> >> it also doesn't work.
> >>
> >> i want to do a plot, where i can see, how x and y influences z.
> >>
> >> P Ehlers wrote:
> >>> [EMAIL PROTECTED] wrote:
>  hello,
> 
>  i do the following in order to get an persp-plot
> 
>  x<-c(2,2,2,2,2,2,3,3,3,3)
>  y<-c(41,41,83,83,124,166,208,208,208,208)
>  z<-c(90366,90366,92240,92240,92240,96473,100995,100995,100995,100995)
>  x<-data$x
>  y<-data$y
>  z<-matrix(data$z,length(y),length(x))
>  persp(x,y,z, col="gray")
> 
>  but i always get the error message increasing 'x' and 'y' values
>  expected, but i think my data values are already increasing, what is
>  wrong?
> >>>
> >>> I'm not sure what your data$x, data$y, data$z are (but I can guess).
> >>> Why do you think that your x is *increasing*? Is x[i+1] > x[i]?
> >>> Does diff(x) yield only positive values?
> >>>
> >>> What kind of a perspective plot do you expect? You seem to have only
> >>> 5 unique points.
> >>>
> >>> Peter Ehlers
> >>>
>  best regards
>  andreas
> >>
> >> __
> >> R-help@stat.math.ethz.ch mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide!
> >> http://www.R-project.org/posting-guide.html
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] the 'copula' package

2006-04-25 Thread Casey Quinn
Thank you! The problem turned out to be network-based, of all things (in 
terms of reading different directories - it had nothing to do with R 
itself, or the package), but this solves a secondary problem I came 
across, as well.

Casey.

jun yan wrote:
> Here is an example that works:
>
> > mycop <- tCopula(param=0.5, dim=8, dispstr="ex", df=5)
> > x <- rcopula(mycop, 1000)
> > myfit <- fitCopula(x, mycop, c(0.6, 10), optim.control=list(trace=1), 
> method="Nelder-Mead")
> > myfit
> The ML estimation is based on  1000  observations.
>        Estimate Std. Error  z value Pr(>|z|)
> rho.1 0.4989052 0.01192036 41.853200
> df    5.2976624 0.32442429 16.329430
> The maximized loglikelihood is  2038.907
> The convergence code is  0
>
> On 4/24/06, *Casey Quinn* <[EMAIL PROTECTED] > 
> wrote:
>
> Is anybody using the Copula package in R? The particular problem I'm
> facing is that R is not acknowledging the fitCopula command/function
> when I load the package and (try to) run something very simple:
>
> fit1 <- fitCopula(x1 = list(u11,u12,u13,u14,u15,u16,u17,u18), tCopula,
> optim.control = list(NULL), method = "BFGS")
>
> Anybody also using it, successfully or unsuccessfully? I'd
> appreciate a
> tip or two.
>
> Casey Quinn
> Centre for Health Economics
> University of York
> York YO10 5DD
> England
>
> Phone: +44 01904 32 1411
> Fax:+44 01904 32 1402
> Email:   [EMAIL PROTECTED] 
> Web:   http://www.york.ac.uk/inst/che/staff/quinn.htm
> 
>
> __
> R-help@stat.math.ethz.ch  mailing
> list
> https://stat.ethz.ch/mailman/listinfo/r-help
> 
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R help

2006-04-25 Thread Gabor Grothendieck
A similar question was just asked. See:

http://tolstoy.newcastle.edu.au/R/help/06/04/25898.html

On 4/25/06, Erez <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm working with large matrix data and i would like to know if
> there is any way to reduce the size of it because even that I'm
> increasing the memory limit and that i have 1 gb memory the
> program throwing me out.
> There is any way to use a smaller size data (such as using bits or so)
> to reduce the size of it.
>
> Erez
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Examples of "Svyrecvar" on Survey Package

2006-04-25 Thread Carlos Creva Singano \(M2004078\)
I have trouble using svyrecvar for totals of units in multi-stage 
sampling.
May I ask for an example of how to use "svyrecvar" in the Survey package?

 

Carlos


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] Problem with the cluster package

2006-04-25 Thread Martin Maechler
> "Rouyer" == Rouyer Tristan <[EMAIL PROTECTED]>
> on Mon, 24 Apr 2006 16:29:27 +0200 writes:

Rouyer> Hi everybody, I want to use the cluster package
Rouyer> (Cluster Analysis Extended Rousseeuw et al.). I
Rouyer> downloaded it from the CRAN and installed it on my
Rouyer> linux system (fedora core 4). 

Why that ???
As a 'Recommended package',  it is part of R, so you already have it
with your version of R [which you didn't tell us].

If possible, get R 2.3.0 (released yesterday) and you get the
latest version of 'cluster' working well.

Alternatively, in an older version of R,
use  
 update.packages()
or   install.packages("cluster")

Rouyer> All seemed to be allright.  But when trying to
Rouyer> launch examples, I obtained the following message :

>> library(cluster) data(votes.repub) agn1 <-
>> agnes(votes.repub, metric = "manhattan", stand = TRUE)
Rouyer> Error in .Fortran("twins", as.integer(n),
Rouyer> as.integer(jp), x2, dv, dis = double(if (keep.diss)
Rouyer> length(dv) else 1), : Fortran entry point "twins_"
Rouyer> not in DLL for package "cluster"

Rouyer> When installing the package, I saw that gfortran
Rouyer> compiler was used. And in the manuel pages it is
Rouyer> specified that gfortran has problems with entry,
Rouyer> namelist,...

Rouyer> Is my problem related to the fortran compiler ?

probably.

Rouyer> Shall I to use another fortran compiler ? If so,
Rouyer> which one ?  Does anybody have encountered the same
Rouyer> problem ?

These questions now belong to R-devel, not R-help, and
definitely need more details about what you have and what
exactly you did, etc.

Martin Maechler, ETH Zurich

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Add qoutation marks and combine values in a vector

2006-04-25 Thread Gabor Grothendieck
Use dQuote.  Assuming you have a data frame with the column
as factors:

DF <- data.frame(x = letters)  # test data
levels(DF$x) <- dQuote(levels(DF$x))


On 4/25/06, Jerry Pressnell <[EMAIL PROTECTED]> wrote:
> I wish to place quotation marks around each element of the following
> list;
>
> X1
> 1  Label 1
> 2  Label 2
> 3  Label 3
> 4  Label 4
>
> and combine the values in the following format for use in another
> function;
>
> c("Label 1","Label 2","Label 3","Label 4")
>
> Many thanks,
>
> Jerry
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R help

2006-04-25 Thread Doran, Harold
Ezra

I don't know what the elements of your matrix are, but if there are a
large proportion of 0s you can work with sparse matrices in the Matrix
package.

Harold
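A minimal sketch of that suggestion (Matrix is a recommended package shipped with R; the dimensions here are invented):

```r
library(Matrix)

dense  <- matrix(0, 1000, 1000)                 # 1e6 doubles: ~8 MB
sparse <- Matrix(0, 1000, 1000, sparse = TRUE)  # stores only non-zero cells
sparse[1, 1] <- 1

object.size(dense)   # around 8,000,000 bytes
object.size(sparse)  # a few kilobytes
```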
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erez
Sent: Tuesday, April 25, 2006 8:39 AM
To: r-help@stat.math.ethz.ch
Subject: [R] R help

Hello,

I'm working with large matrix data and i would like to know if there is
any way to reduce the size of it because even that I'm increasing the
memory limit and that i have 1 gb memory the program throwing me out.
There is any way to use a smaller size data (such as using bits or so)
to reduce the size of it.

Erez

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R help

2006-04-25 Thread Erez
Hello,

I'm working with a large matrix and I would like to know if
there is any way to reduce its size, because even though I'm
increasing the memory limit and I have 1 GB of memory, the
program is throwing me out.
Is there any way to use smaller data types (such as bits or so)
to reduce its size?

Erez

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] persp plot increasing 'x' and 'y' values expected

2006-04-25 Thread Duncan Murdoch
On 4/25/2006 6:23 AM, Michael Dondrup wrote:
> Hi, 
> of course it can't because the number of unique values is different. You need 
> a unique _combination_ of x,y and hopefully you don't have different z values 
> for any such a pair. try:
> 
> xyz <- unique(cbind(x,y,z))
> persp(xyz)
> 
> If you still get the err, then you had different measurements for the same 
> point.

I think you and Andreas want scatterplot3d (from a contributed package 
of the same name), not persp.  Persp takes data in a very 
particular format.  See the man page.
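For reference, a minimal sketch of the format persp() accepts: a strictly increasing x vector, a strictly increasing y vector, and a z matrix with dim c(length(x), length(y)) (the values here are invented):

```r
x <- seq(1, 3, length.out = 10)      # strictly increasing
y <- seq(40, 210, length.out = 15)   # strictly increasing
z <- outer(x, y)                     # 10 x 15 matrix of heights
persp(x, y, z, col = "gray")
```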

In general, it's a good idea to test your suggestions before posting 
them. Yours wouldn't work, because x, y and z *must not* be the same 
length in persp.

Duncan Murdoch

> 
> Am Tuesday 25 April 2006 12:24 schrieb [EMAIL PROTECTED]:
>> hi peter,
>>
>> thank you for your advice.
>> ok, i see the problem, but if i do
>>
>> x<-unique(data$x)
>> y<-unique(data$y)
>> z<-matrix(unique(data$z),length(y),length(x))
>>
>> it also doesn't work.
>>
>> i want to do a plot, where i can see, how x and y influences z.
>>
>> P Ehlers wrote:
>>> [EMAIL PROTECTED] wrote:
 hello,

 i do the following in order to get an persp-plot

 x<-c(2,2,2,2,2,2,3,3,3,3)
 y<-c(41,41,83,83,124,166,208,208,208,208)
 z<-c(90366,90366,92240,92240,92240,96473,100995,100995,100995,100995)
 x<-data$x
 y<-data$y
 z<-matrix(data$z,length(y),length(x))
 persp(x,y,z, col="gray")

 but i always get the error message increasing 'x' and 'y' values
 expected, but i think my data values are already increasing, what is
 wrong?
>>> I'm not sure what your data$x, data$y, data$z are (but I can guess).
>>> Why do you think that your x is *increasing*? Is x[i+1] > x[i]?
>>> Does diff(x) yield only positive values?
>>>
>>> What kind of a perspective plot do you expect? You seem to have only
>>> 5 unique points.
>>>
>>> Peter Ehlers
>>>
 best regards
 andreas
>> __
>> R-help@stat.math.ethz.ch mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide!
>> http://www.R-project.org/posting-guide.html
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] persp plot increasing 'x' and 'y' values expected

2006-04-25 Thread Michael Dondrup
Hi, 
of course it can't because the number of unique values is different. You need 
a unique _combination_ of x,y and hopefully you don't have different z values 
for any such a pair. try:

xyz <- unique(cbind(x,y,z))
persp(xyz)

If you still get the err, then you had different measurements for the same 
point.
Cheers

Am Tuesday 25 April 2006 12:24 schrieb [EMAIL PROTECTED]:
> hi peter,
>
> thank you for your advice.
> ok, i see the problem, but if i do
>
> x<-unique(data$x)
> y<-unique(data$y)
> z<-matrix(unique(data$z),length(y),length(x))
>
> it also doesn't work.
>
> i want to do a plot, where i can see, how x and y influences z.
>
> P Ehlers wrote:
> > [EMAIL PROTECTED] wrote:
> >> hello,
> >>
> >> i do the following in order to get an persp-plot
> >>
> >> x<-c(2,2,2,2,2,2,3,3,3,3)
> >> y<-c(41,41,83,83,124,166,208,208,208,208)
> >> z<-c(90366,90366,92240,92240,92240,96473,100995,100995,100995,100995)
> >> x<-data$x
> >> y<-data$y
> >> z<-matrix(data$z,length(y),length(x))
> >> persp(x,y,z, col="gray")
> >>
> >> but i always get the error message increasing 'x' and 'y' values
> >> expected, but i think my data values are already increasing, what is
> >> wrong?
> >
> > I'm not sure what your data$x, data$y, data$z are (but I can guess).
> > Why do you think that your x is *increasing*? Is x[i+1] > x[i]?
> > Does diff(x) yield only positive values?
> >
> > What kind of a perspective plot do you expect? You seem to have only
> > 5 unique points.
> >
> > Peter Ehlers
> >
> >> best regards
> >> andreas
>




Re: [R] persp plot increasing 'x' and 'y' values expected

2006-04-25 Thread voodooochild
Hi Peter,

Thank you for your advice.
OK, I see the problem, but if I do

x<-unique(data$x)
y<-unique(data$y)
z<-matrix(unique(data$z),length(y),length(x))

it still doesn't work.

I want to make a plot where I can see how x and y influence z.

P Ehlers wrote:
> [EMAIL PROTECTED] wrote:
>> hello,
>>
>> i do the following in order to get an persp-plot
>>
>> x<-c(2,2,2,2,2,2,3,3,3,3)
>> y<-c(41,41,83,83,124,166,208,208,208,208)
>> z<-c(90366,90366,92240,92240,92240,96473,100995,100995,100995,100995)
>> x<-data$x
>> y<-data$y
>> z<-matrix(data$z,length(y),length(x))
>> persp(x,y,z, col="gray")
>>
>> but i always get the error message increasing 'x' and 'y' values 
>> expected, but i think my data values are already increasing, what is 
>> wrong?
>
>
> I'm not sure what your data$x, data$y, data$z are (but I can guess).
> Why do you think that your x is *increasing*? Is x[i+1] > x[i]?
> Does diff(x) yield only positive values?
>
> What kind of a perspective plot do you expect? You seem to have only
> 5 unique points.
>
> Peter Ehlers
>
>
>>
>> best regards
>> andreas
>>
>
>
>



Re: [R] persp plot increasing 'x' and 'y' values expected

2006-04-25 Thread P Ehlers
[EMAIL PROTECTED] wrote:
> hello,
> 
> i do the following in order to get an persp-plot
> 
> x<-c(2,2,2,2,2,2,3,3,3,3)
> y<-c(41,41,83,83,124,166,208,208,208,208)
> z<-c(90366,90366,92240,92240,92240,96473,100995,100995,100995,100995)
> x<-data$x
> y<-data$y
> z<-matrix(data$z,length(y),length(x))
> persp(x,y,z, col="gray")
> 
> but i always get the error message increasing 'x' and 'y' values 
> expected, but i think my data values are already increasing, what is wrong?


I'm not sure what your data$x, data$y, data$z are (but I can guess).
Why do you think that your x is *increasing*? Is x[i+1] > x[i]?
Does diff(x) yield only positive values?

What kind of a perspective plot do you expect? You seem to have only
5 unique points.
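
The checks suggested above can be run directly on the x vector from the 
original post:

## diff(x) on the posted x vector shows the problem immediately:
x <- c(2, 2, 2, 2, 2, 2, 3, 3, 3, 3)
diff(x)           # contains zeros, so x is not strictly increasing
all(diff(x) > 0)  # FALSE -- this is what persp() complains about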

Peter Ehlers


> 
> best regards
> andreas
>



Re: [R] persp plot increasing 'x' and 'y' values expected

2006-04-25 Thread Michael Dondrup
Hi,
Yes, your x and y values are increasing, but the (x, y) coordinates are not unique.
It looks like some measurements in your input are redundant.
Michael
> hello,
>
> i do the following in order to get an persp-plot
>
> x<-c(2,2,2,2,2,2,3,3,3,3)
> y<-c(41,41,83,83,124,166,208,208,208,208)
> z<-c(90366,90366,92240,92240,92240,96473,100995,100995,100995,100995)
> x<-data$x
> y<-data$y
> z<-matrix(data$z,length(y),length(x))
> persp(x,y,z, col="gray")
>
> but i always get the error message increasing 'x' and 'y' values
> expected, but i think my data values are already increasing, what is wrong?
>
> best regards
> andreas
>



[R] persp plot increasing 'x' and 'y' values expected

2006-04-25 Thread voodooochild
Hello,

I do the following in order to get a persp plot:

x<-c(2,2,2,2,2,2,3,3,3,3)
y<-c(41,41,83,83,124,166,208,208,208,208)
z<-c(90366,90366,92240,92240,92240,96473,100995,100995,100995,100995)
x<-data$x
y<-data$y
z<-matrix(data$z,length(y),length(x))
persp(x,y,z, col="gray")

but I always get the error message "increasing 'x' and 'y' values expected", 
even though I think my data values are already increasing. What is wrong?

best regards
andreas



[R] R-2.3.0 crashes

2006-04-25 Thread Robin Hankin
Hi

[MacOSX 10.4.6, emacs 22.0.50.1, ess 5.3.0, R-2.3.0,
gcc version 4.0.1 (Apple Computer, Inc. build 5247]


Using R-2.3.0 with ESS, I get a repeatable crash when trying to open an X11
window. A cut-and-paste session follows:


 > X11("octopus:0.0")
Error: Couldn't find per display information

Process R exited abnormally with code 1 at Tue Apr 25 09:12:14 2006


What's going on here?
(I don't get this when running R from a terminal.)




--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743



Re: [R] distribution of the product of two correlated normal

2006-04-25 Thread Peter Ruckdeschel
Yu, Xuesong schrieb:

> Many thanks to Peter for your quick and detailed response to my question.
> I tried to run your code, but it seems that "u" is not defined for the
> functions fp and fm. What is u?
> I believe t = X1*X2.
> 
> nen0 <- m2+c0*u ## for all u's used in integrate: never positive

No, that is not the problem; u is the local integration variable in the
local functions f, fm, and fp, over which integrate() performs the
integration.

The problem is rather the eps = eps default value passed to the functions
f, fm, and fp, which causes a "recursive default argument reference" error;
change it as follows:

###
#code by P. Ruckdeschel, [EMAIL PROTECTED], rev. 04-25-06
###
#
#pdf of X1X2, X1~N(m1,s1^2), X2~N(m2,s2^2), corr(X1,X2)=rho, evaluated at t
#
#   eps is a very small number to catch errors in division by 0
###
#
dnnorm <- function(t, m1, m2, s1, s2, rho,  eps = .Machine$double.eps ^ 0.5){
a <- s1*sqrt(1-rho^2)
b <- s1*rho
c <- s2
### new:
f <- function(u, t = t, m1 = m1, m2 = m2, a0 = a, b0 = b, c0 = c,  eps0 = eps)
 # new (04-25-06): eps0 instead of eps as local variable to f
 {
  nen0 <- m2+c0*u
  #catch a division by 0
  nen <- ifelse(abs(nen0)>eps0, nen0, ifelse(nen0>0, nen0+eps0, nen0-eps0))
  dnorm(u)/a0/nen * dnorm( t/a0/nen -(m1+b0*u)/a0)
 }
-integrate(f, -Inf, -m2/c, t = t, m1 = m1, m2 = m2,
           a0 = a, b0 = b, c0 = c)$value +
 integrate(f, -m2/c,  Inf, t = t, m1 = m1, m2 = m2,
           a0 = a, b0 = b, c0 = c)$value
}

###
#
#cdf of X1X2, X1~N(m1,s1^2), X2~N(m2,s2^2), corr(X1,X2)=rho, evaluated at t
#
#   eps is a very small number to catch errors in division by 0
###
#
pnnorm <- function(t, m1, m2, s1, s2, rho,  eps = .Machine$double.eps ^ 0.5){
a <- s1*sqrt(1-rho^2)
b <- s1*rho
c <- s2
### new:
fp <- function(u, t = t, m1 = m1, m2 = m2, a0 = a, b0 = b, c0 = c,  eps0 = eps)
 # new (04-25-06): eps0 instead of eps as local variable to fp
 {nen0 <- m2+c0*u ## for all u's used in integrate: never negative
  #catch a division by 0
  nen  <- ifelse(nen0>eps0, nen0, nen0+eps0)
  dnorm(u) * pnorm( t/a0/nen- (m1+b0*u)/a0)
 }
### new:
fm <- function(u, t = t, m1 = m1, m2 = m2, a0 = a, b0 = b, c0 = c,  eps0 = eps)
 # new (04-25-06): eps0 instead of eps as local variable to fm
 {nen0 <- m2+c0*u ## for all u's used in integrate: never positive
  #catch a division by 0
  nen  <- ifelse(nen0< (-eps0), nen0, nen0-eps0)
  dnorm(u) * pnorm(-t/a0/nen+ (m1+b0*u)/a0)
 }
integrate(fm, -Inf, -m2/c, t = t, m1 = m1, m2 = m2,
          a0 = a, b0 = b, c0 = c)$value +
integrate(fp, -m2/c,  Inf, t = t, m1 = m1, m2 = m2,
          a0 = a, b0 = b, c0 = c)$value
}

##
For me this gives, e.g.:

> pnnorm(0.5,m1=2,m2=3,s1=2,s2=1.4,rho=0.8)
[1] 0.1891655
> dnnorm(0.5,m1=2,m2=3,s1=2,s2=1.4,rho=0.8)
[1] 0.07805282
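
A Monte Carlo sanity check for the pnnorm value is straightforward: the same
decomposition used in the code (a = s1*sqrt(1-rho^2), b = s1*rho, c = s2)
gives X1 = m1 + b*U + a*V and X2 = m2 + c*U for independent standard normals
U and V, with corr(X1, X2) = rho. A sketch (seed and sample size are
arbitrary choices, not from the original post):

## Monte Carlo check of pnnorm(0.5, m1=2, m2=3, s1=2, s2=1.4, rho=0.8);
## U, V independent N(0,1); this construction yields corr(X1, X2) = rho.
set.seed(1)                       # arbitrary seed, for reproducibility
n  <- 1e6
u  <- rnorm(n); v <- rnorm(n)
m1 <- 2; m2 <- 3; s1 <- 2; s2 <- 1.4; rho <- 0.8
x1 <- m1 + s1 * (rho * u + sqrt(1 - rho^2) * v)
x2 <- m2 + s2 * u
mean(x1 * x2 <= 0.5)              # should be close to 0.1892

The empirical probability should agree with the quadrature result above to
within Monte Carlo error.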


Hth, Peter
