Re: [R] How to permanently remove [Previously saved workspace restored]

2010-11-13 Thread Peter Langfelder
On Sat, Nov 13, 2010 at 10:33 PM, Stephen Liu  wrote:
> Win 7 64 bit
> R version 2.11.1 (2010-05-31)
>
>
> How to permanently remove;
> [Previously saved workspace restored]
>
>> rm (list = ls( ))
>
> On next start it still displays;
> .
> [Previously saved workspace restored]
>
>
> There is a file keeping the previous data on Linux
> .Rdata

To the best of my knowledge there's an .RData file on Windows as well.
Check your default directory (usually Documents, but it may be something
else: start R as usual and type getwd() before you do anything else).
Remove the .RData file as well as the .Rhistory file and you should be
good to go.
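In R itself, that check-and-delete step might look like this (a sketch; run it from a fresh session in the startup directory):

```r
getwd()                            # the directory R scans for .RData at startup
file.exists(c(".RData", ".Rhistory"))
unlink(c(".RData", ".Rhistory"))   # delete both; the startup message goes away
```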

Peter

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to permanently remove [Previously saved workspace restored]

2010-11-13 Thread Joshua Wiley
On Sat, Nov 13, 2010 at 10:33 PM, Stephen Liu  wrote:
>
> Win 7 64 bit
> R version 2.11.1 (2010-05-31)
>
>
> How to permanently remove;
> [Previously saved workspace restored]
>
> > rm (list = ls( ))
>
> On next start it still displays;
> .
> [Previously saved workspace restored]
>
>
> There is a file keeping the previous data on Linux
> .Rdata
>
> How about on Windows?

Generally the same.  Just delete the file if you do not want it.
Alternatively, clear your workspace, save, and shut down once; if you
then never save your workspace when you quit, R will load an empty
workspace at startup (and if you ever do want to save your workspace
for some reason, just save on quitting and it will be there when you
get back).
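That workflow might be sketched as (assuming an interactive session):

```r
rm(list = ls())   # clear the current workspace
q(save = "yes")   # save the now-empty workspace once
## In every later session, quit without saving so the
## empty .RData is never overwritten:
# q(save = "no")
```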

HTH,

Josh

>
> TIA
>
> B.R.
> Stephen L
>
>
>
>


--
Joshua Wiley
Ph.D. Student, Health Psychology
University of California, Los Angeles
http://www.joshuawiley.com/



[R] How to permanently remove [Previously saved workspace restored]

2010-11-13 Thread Stephen Liu
Win 7 64 bit
R version 2.11.1 (2010-05-31)


How do I permanently remove
[Previously saved workspace restored]?

> rm(list = ls())

On the next start it still displays:
[Previously saved workspace restored]

On Linux there is a file keeping the previous data, .RData.

How about on Windows?

TIA

B.R.
Stephen L






Re: [R] Replicate Excel's LOGEST worksheet function in R

2010-11-13 Thread Jeff Newmiller
Most software for curve fitting uses linear fits in conjunction with some 
combination of logarithms of your original data in order to obtain logarithmic, 
power, or exponential curve fits. The nls approach is arguably more correct, but 
it will yield results different from the "normal" ones, and may be finicky with 
some data sets.

Anyway, I recommend you learn from David before criticizing his assistance.

cran.30.miller_2...@spamgourmet.com wrote:

>On Fri, Nov 12, 2010 at 5:28 PM, David Winsemius -
>dwinsem...@comcast.net
><+cran+miller_2555+c0e7477398.dwinsemius#comcast@spamgourmet.com>
>wrote:
>
>>
>> On Nov 12, 2010, at 5:07 PM, David Winsemius wrote:
>>
>>
>>> On Nov 12, 2010, at 4:22 PM, cran.30.miller_2...@spamgourmet.com wrote:
>>>
>>>> Hi -
>>>>
>>>>   I have a dataframe of (x,y) values. I'd like to fit an exponential
>>>> curve to the data for further statistical analysis (pretty much the same
>>>> functionality provided by Excel's LOGEST worksheet array function). Can
>>>> someone point me to the (set of) functions/package that is best suited to
>>>> provide this functionality? Admittedly, I am a novice in the use of R
>>>> statistical functions, so a brief example of how to compute a correlation
>>>> coefficient off a fitted exponential curve would be greatly appreciated
>>>> (though I could probably work through it over time if I knew the proper R
>>>> tools).

>>> Probably (not seeing a clear description of the LOGEST function):
>>>
>>> ?exp
>>> ?log
>>> ?lm
>>> ?cor
>>>
>> I set up an OO.org Calc spreadsheet which has a lot of Excel work-alike
>> functions and does have a LOGEST. Giving an argument of x=1:26 and y=exp(x)
>> to the first two arguments of LOGEST, I get 1 and e. The OO.org help page
>> says
>> "FunctionType (optional). If Function_Type = 0, functions in the form y =
>> m^x will be calculated. Otherwise, y = b*m^x functions will be calculated."
>>
>> This might be the equivalent R operation:
>>
>> > x <- 1:26
>> > y <- exp(x)
>> > lm(log(y) ~ x)
>>
>> Call:
>> lm(formula = log(y) ~ x)
>>
>> Coefficients:
>> (Intercept)            x
>>           0            1
>>
>> > exp(coef(lm(log(y) ~ x)))
>> (Intercept)           x
>>    1.000000    2.718282
>>
>> Note this is not a correlation coefficient but rather an (exponentiated)
>> regression coefficient.
>>
>> --
>> David Winsemius, MD
>> West Hartford, CT
>>
>>
>>
>Thanks, but I'm looking to fit an exponential curve (not a linear
>model).
>However, I was able to identify the `nls()` function that works well
>(adapted from John Fox's contribution "Nonlinear Regression and Nonlinear
>Least Squares" [Jan 2002] ref:
>http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-nonlinear-regression.pdf).
>For those interested, the following short script highlights my simple test
>case (though a little sloppy):
>
>mydf <- as.data.frame(cbind(1:6,
>    rev(c(48.0000, 24.0000, 12.0000, 6.0000, 3.0000, 1.5000)),
>    rev(c(51.4943, 12.4048, 12.9587, 3.7707, 2.4253, 2.0400))))
>colnames(mydf) <- c("X","Y","Y2")
>
>my.mod <- nls(Y2 ~ a*exp(b*X), data=mydf, start=list(a=3.00,b=2.00),
>trace=T)
>
>plot(mydf[,"X"],residuals(my.mod))
>plot(mydf[,"X"],mydf[,"Y2"], lwd=1)
>lines(mydf[,"X"],fitted.values(my.mod), lwd=2)
>
>

---
Jeff Newmiller
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
---
Sent from my phone. Please excuse my brevity.



Re: [R] writing to a file

2010-11-13 Thread jim holtman
The HELP page for 'sink' is pretty clear about this:

sink() or sink(file=NULL) ends the last diversion (of the specified
type). There is a stack of diversions for normal output, so output
reverts to the previous diversion (if there was one). The stack is of
up to 21 connections (20 diversions).
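A minimal, self-contained sketch of the round trip (the filename and summary(cars) stand in for the poster's own objects):

```r
sink("results.txt")    # divert subsequent console output to a file
print(summary(cars))   # stands in for your print(results) call
sink()                 # end the diversion; output returns to the console
readLines("results.txt")  # confirm what was captured
```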



On Sat, Nov 13, 2010 at 11:12 PM, Gregory Ryslik  wrote:
> Hi,
>
> I have a fairly complex object that I have written a print function for.
>
> Thus when I do print(results), the R console shows me a whole bunch of stuff 
> already formatted. What I want to do is to take whatever print(results) shows 
> to console and then put that in a file. I am doing this using the sink 
> command.
>
> However, I am unsure as to how to "unsink". Eg, how do I restore output to 
> the normal console?
>
> Thanks,
> Greg
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?



[R] writing to a file

2010-11-13 Thread Gregory Ryslik
Hi,

I have a fairly complex object that I have written a print function for.

Thus when I do print(results), the R console shows me a whole bunch of stuff 
already formatted. What I want to do is to take whatever print(results) shows 
to console and then put that in a file. I am doing this using the sink command.

However, I am unsure as to how to "unsink". Eg, how do I restore output to the 
normal console?

Thanks,
Greg


Re: [R] Replicate Excel's LOGEST worksheet function in R

2010-11-13 Thread cran . 30 . miller_2555

On Nov 13, 2010, at 10:12 PM, cran.30.miller_2...@spamgourmet.com wrote:

> On Fri, Nov 12, 2010 at 5:28 PM, David Winsemius - 
> cran.30.miller_2...@spamgourmet.com 
>  <+cran 
> +miller_2555+c0e7477398.dwinsemius#comcast@spamgourmet.com> wrote:
>
> On Nov 12, 2010, at 5:07 PM, David Winsemius wrote:
>
>
> On Nov 12, 2010, at 4:22 PM, cran.30.miller_2...@spamgourmet.com  
> wrote:
>
> Hi -
>
>   I have a dataframe of (x,y) values. I'd like to fit an exponential
> curve to the data for further statistical analysis (pretty much the  
> same
> functionality provided by Excel's LOGEST worksheet array function).  
> Can
> someone point me to the (set of) functions/ package that is best  
> suited to
> provide this functionality? Admittedly, I am a novice in the use of R
> statistical functions, so a brief example of how to compute a  
> correlation
> coefficient off a fitted exponential curve would be greatly  
> appreciated
> (though I could probably work through it over time if I knew the  
> proper R
> tools).
>
>
> Probably (not seeing a clear description of the LOGEST function):
>
> ?exp
> ?log
> ?lm
> ?cor
>
>
> I set up an OO.org Calc spreadsheet which has a lot of Excel work-alike
> functions and does have a LOGEST. Giving an argument of x=1:26
> and y=exp(x) to the first two arguments of LOGEST, I get 1 and e.
> The OO.org help page says
> "FunctionType (optional). If Function_Type = 0, functions in the
> form y = m^x will be calculated. Otherwise, y = b*m^x functions will
> be calculated."
>
> This might be the equivalent R operation:
>
> > x<-1:26
> > y<-exp(x)
> > lm(log(y) ~ x)
>
> Call:
> lm(formula = log(y) ~ x)
>
> Coefficients:
> (Intercept)            x
>           0            1
>
> > exp(coef(lm(log(y) ~ x)))
> (Intercept)           x
>    1.000000    2.718282
>
> Note this is not a correlation coefficient but rather an  
> (exponentiated) regression coefficient.
>
> -- 
> David Winsemius, MD
> West Hartford, CT
>
>
>
> Thanks, but I'm looking to fit an exponential curve (not a linear  
> model).


Then WHY did you ask for a function that would duplicate a particular  
Excel function? ... especially an Excel function which DOES a  
linear fit on the log of the Y argument?

http://support.microsoft.com/kb/828528

> However, I was able to identify the `nls()` function that works well
> (adapted from John Fox's contribution "Nonlinear Regression and
> Nonlinear Least Squares" [Jan 2002] ref:
> http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-nonlinear-regression.pdf).
> For those interested, the following short script highlights my
> simple test case (though a little sloppy):
>
> mydf <- as.data.frame(cbind(1:6,
>     rev(c(48.0000, 24.0000, 12.0000, 6.0000, 3.0000, 1.5000)),
>     rev(c(51.4943, 12.4048, 12.9587, 3.7707, 2.4253, 2.0400))))
> colnames(mydf) <- c("X","Y","Y2")
>
> my.mod <- nls(Y2 ~ a*exp(b*X), data=mydf, start=list(a=3.00,b=2.00),  
> trace=T)
>
> plot(mydf[,"X"],residuals(my.mod))
> plot(mydf[,"X"],mydf[,"Y2"], lwd=1)
> lines(mydf[,"X"],fitted.values(my.mod), lwd=2)
>

David Winsemius, MD
West Hartford, CT





Re: [R] Sweave question

2010-11-13 Thread Ralf B
Thank you. The article you cited explains on the last page how this is
done and shows how Sweave is run from within R and it says that it
creates the .tex file.

My last remaining question is whether there is a way to process this
Sweave .tex output by running LaTeX from R. In other words, what is
the command to execute latex from within R? Or am I perhaps
overcomplicating this, and there is a single command that creates the
.tex and the pdf/ps in one step? In the end, I would like to create
everything between the Sweave document and the final pdf/ps output
from within R, without the need for external calls.

Ralf
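One possible way to do that from within R (a sketch; "report.Rnw" is a hypothetical filename, and tools::texi2dvi needs a working LaTeX installation on the PATH):

```r
Sweave("report.Rnw")                       # weave: produces report.tex
tools::texi2dvi("report.tex", pdf = TRUE)  # typeset: produces report.pdf
## Or shell out to LaTeX explicitly:
# system("pdflatex report.tex")
```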


On Sat, Nov 13, 2010 at 4:29 PM, Johannes Huesing  wrote:
> Ralf B  [Sat, Nov 13, 2010 at 10:03:49PM CET]:
>> It seems that Sweave is supposed to be used from Latex and R is called
>> during the LaTeX compilation process whenever R chunks appear.
>
> This is not how it works.
>
> The first page of
> http://www.statistik.lmu.de/~leisch/Sweave/Sweave-Rnews-2002-3.pdf
> explains that the file is first processed by R before it can be typeset by
> LaTeX.
>
>> What
>> about the other way round? I would like to run it triggered by R. Is
>> this possible?
>
> To my understanding this is how it's done.
>
>> I understand that this does not correspond to the idea
>> of literate programming since it means that there is R code running
>> outside the document,
>
> You lost me here.
>
>> but for my practical approach, I would like to
>> use Sweave more like a report extension at the end of my already
>> existing R scripts that combined a number of diagrams to a pdf file.
>>
>> My second question is, does Sweave create a potential performance
>> bottleneck when used with very big data analysis compared with when
>> using R directly?
>>
>
> Not really, because the only overhead is tangling the Sweave file.
> If it is very big, you may want to process only the parts you have
> changed last. The package weaver seems to come in handy then, see
> http://bioconductor.org/packages/2.6/bioc/vignettes/weaver/inst/doc/weaver_howTo.pdf
> --
> Johannes Hüsing               There is something fascinating about science.
>                              One gets such wholesale returns of conjecture
> mailto:johan...@huesing.name  from such a trifling investment of fact.
> http://derwisch.wikidot.com         (Mark Twain, "Life on the Mississippi")
>
>



Re: [R] Replicate Excel's LOGEST worksheet function in R

2010-11-13 Thread cran . 30 . miller_2555
On Fri, Nov 12, 2010 at 5:28 PM, David Winsemius - dwinsem...@comcast.net
<+cran+miller_2555+c0e7477398.dwinsemius#comcast@spamgourmet.com> wrote:

>
> On Nov 12, 2010, at 5:07 PM, David Winsemius wrote:
>
>
>> On Nov 12, 2010, at 4:22 PM, cran.30.miller_2...@spamgourmet.com wrote:
>>
>>  Hi -
>>>
>>>   I have a dataframe of (x,y) values. I'd like to fit an exponential
>>> curve to the data for further statistical analysis (pretty much the same
>>> functionality provided by Excel's LOGEST worksheet array function). Can
>>> someone point me to the (set of) functions/ package that is best suited
>>> to
>>> provide this functionality? Admittedly, I am a novice in the use of R
>>> statistical functions, so a brief example of how to compute a correlation
>>> coefficient off a fitted exponential curve would be greatly appreciated
>>> (though I could probably work through it over time if I knew the proper R
>>> tools).
>>>
>>>
>> Probably (not seeing a clear description of the LOGEST function):
>>
>> ?exp
>> ?log
>> ?lm
>> ?cor
>>
>>
> I set up an OO.org Calc spreadsheet which has a lot of Excel work-alike
> functions and does have a LOGEST. Giving an argument of x=1:26 and y=exp(x)
> to the first two arguments of LOGEST, I get 1 and e. The OO.org help page
> says
> "FunctionType (optional). If Function_Type = 0, functions in the form y =
> m^x will be calculated. Otherwise, y = b*m^x functions will be calculated."
>
> This might be the equivalent R operation:
>
> > x<-1:26
> > y<-exp(x)
> > lm(log(y) ~ x)
>
> Call:
> lm(formula = log(y) ~ x)
>
> Coefficients:
> (Intercept)            x
>           0            1
>
> > exp(coef(lm(log(y) ~ x)))
> (Intercept)           x
>    1.000000    2.718282
>
> Note this is not a correlation coefficient but rather an (exponentiated)
> regression coefficient.
>
> --
> David Winsemius, MD
> West Hartford, CT
>
>
>
Thanks, but I'm looking to fit an exponential curve (not a linear model).
However, I was able to identify the `nls()` function that works well
(adapted from John Fox's contribution "Nonlinear Regression and Nonlinear
Least Squares" [Jan 2002] ref:
http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-nonlinear-regression.pdf).
For those interested, the following short script highlights my simple test
case (though a little sloppy):

mydf <- as.data.frame(cbind(1:6,
    rev(c(48.0000, 24.0000, 12.0000, 6.0000, 3.0000, 1.5000)),
    rev(c(51.4943, 12.4048, 12.9587, 3.7707, 2.4253, 2.0400))))
colnames(mydf) <- c("X","Y","Y2")

my.mod <- nls(Y2 ~ a*exp(b*X), data=mydf, start=list(a=3.00,b=2.00),
trace=T)

plot(mydf[,"X"],residuals(my.mod))
plot(mydf[,"X"],mydf[,"Y2"], lwd=1)
lines(mydf[,"X"],fitted.values(my.mod), lwd=2)
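For comparison, the LOGEST-style answer for the same toy data comes from a linear fit on the log scale (a sketch assuming the mydf data frame above):

```r
## What LOGEST effectively does: a linear model on log(Y2).
lin.mod <- lm(log(Y2) ~ X, data = mydf)
a.hat <- exp(coef(lin.mod)[1])   # multiplicative constant
b.hat <- coef(lin.mod)[2]        # exponential rate
## These differ from the nls() estimates because lm() minimizes squared
## error on the log scale while nls() minimizes it on the original scale.
```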




Re: [R] barplot3d cutting off labels

2010-11-13 Thread David Winsemius


On Nov 13, 2010, at 5:11 PM, emorway wrote:



Hello,

Using the function that is available at:

http://addictedtor.free.fr/graphiques/sources/source_161.R

The following command leads to a plot with labels that are cut off:

barplot3d(c(4.01,4.71,2.03,1.28,1.42,1.76,0,
    6.58,3.25,3.11,2.10,2.05,1.19,1.28,
    6.44,5.50,3.69,4.53,3.33,1.70,2.57,
    6.01,5.36,5.90,6.61,4.67,2.62,3.83,
    10.69,5.90,5.62,6.64,5.71,5.35,3.18,
    8.98,7.30,7.73,7.23,14.10,8.65,4.00,
    13.39,10.91,5.57,7.34,6.03,5.44,3.24),
  rows=7, theta = 55, phi = 22, expand=0.9,
  col.lab=c("GWTD < 1","1 <= GWTD < 2","2 <= GWTD < 3","3 <= GWTD < 4",
    "4 <= GWTD < 5","5 <= GWTD < 6","GWTD > 6"),
  row.lab=c("GW ECe < 1","1 <= GW ECe < 2","2 <= GW ECe < 3","3 <= GW ECe < 4",
    "4 <= GW ECe < 5","5 <= GW ECe < 6","GW ECe > 6"),
  col.bar=c("#FF6633","#33","#99CCFF","#9933FF","#44ff58","#CC","#FF"),
  z.lab="Soil ECe")

I've also tried to use par(mar=c(5,5,5,5)) or par(oma=c(5,5,5,5)), but to no
avail.  Any ideas?


Your labels are being cut off because the plotting is being clipped to  
within the plot region.


First, disable clipping (saving the old settings so they can be restored):
opar <- par(mar=c(5, 4, 4, 2) + 0.1, xpd=NA)

# ... redraw the plot ...

# Then restore par() settings
par(opar)

--

David Winsemius, MD
West Hartford, CT



[R] barplot3d cutting off labels

2010-11-13 Thread emorway

Hello, 

Using the function that is available at:

http://addictedtor.free.fr/graphiques/sources/source_161.R

The following command leads to a plot with labels that are cut off:

barplot3d(c(4.01,4.71,2.03,1.28,1.42,1.76,0,
6.58,3.25,3.11,2.10,2.05,1.19,1.28,
6.44,5.50,3.69,4.53,3.33,1.70,2.57,
6.01,5.36,5.90,6.61,4.67,2.62,3.83,
10.69,5.90,5.62,6.64,5.71,5.35,3.18,
8.98,7.30,7.73,7.23,14.10,8.65,4.00,
13.39,10.91,5.57,7.34,6.03,5.44,3.24),
 rows=7, theta = 55, phi = 22, expand=0.9,
 col.lab=c("GWTD < 1","1 <= GWTD < 2","2 <= GWTD < 3","3 <= GWTD < 4","4
<= GWTD < 5","5 <= GWTD < 6","GWTD > 6"), 
 row.lab=c("GW ECe < 1","1 <= GW ECe < 2","2 <= GW ECe < 3","3 <= GW ECe
< 4","4 <= GW ECe < 5","5 <= GW ECe < 6","GW ECe > 6"),

col.bar=c("#FF6633","#33","#99CCFF","#9933FF","#44ff58","#CC","#FF"),
 z.lab="Soil ECe")

I've also tried to use par(mar=c(5,5,5,5)) or par(oma=c(5,5,5,5)), but to no
avail.  Any ideas?

Thanks, 
E




Re: [R] Sweave question

2010-11-13 Thread Johannes Huesing
Ralf B  [Sat, Nov 13, 2010 at 10:03:49PM CET]:
> It seems that Sweave is supposed to be used from Latex and R is called
> during the LaTeX compilation process whenever R chunks appear. 

This is not how it works.

The first page of
http://www.statistik.lmu.de/~leisch/Sweave/Sweave-Rnews-2002-3.pdf
explains that the file is first processed by R before it can be typeset by
LaTeX.

> What
> about the other way round? I would like to run it triggered by R. Is
> this possible? 

To my understanding this is how it's done.

> I understand that this does not correspond to the idea
> of literate programming since it means that there is R code running
> outside the document, 

You lost me here.

> but for my practical approach, I would like to
> use Sweave more like a report extension at the end of my already
> existing R scripts that combined a number of diagrams to a pdf file.
> 
> My second question is, does Sweave create a potential performance
> bottleneck when used with very big data analysis compared with when
> using R directly?
> 

Not really, because the only overhead is tangling the Sweave file.
If it is very big, you may want to process only the parts you have
changed last. The package weaver seems to come in handy then, see
http://bioconductor.org/packages/2.6/bioc/vignettes/weaver/inst/doc/weaver_howTo.pdf
-- 
Johannes Hüsing               There is something fascinating about science.
                              One gets such wholesale returns of conjecture
mailto:johan...@huesing.name  from such a trifling investment of fact.
http://derwisch.wikidot.com   (Mark Twain, "Life on the Mississippi")



[R] Sweave question

2010-11-13 Thread Ralf B
It seems that Sweave is supposed to be used from Latex and R is called
during the LaTeX compilation process whenever R chunks appear. What
about the other way round? I would like to run it triggered by R. Is
this possible? I understand that this does not correspond to the idea
of literate programming since it means that there is R code running
outside the document, but for my practical approach, I would like to
use Sweave more like a report extension at the end of my already
existing R scripts that combined a number of diagrams to a pdf file.

My second question is, does Sweave create a potential performance
bottleneck when used with very big data analysis compared with when
using R directly?

Ralf



Re: [R] Re : interpretation of coefficients in survreg AND obtaining the hazard function for an individual given a set of predictors

2010-11-13 Thread David Winsemius


On Nov 13, 2010, at 3:24 PM, Biau David wrote:


Thank you David for your answer,

- grade2 is a factor with 2 categories: "high" and "low"


So "high" would be 1 and "low" would be 2 by default (alphabetical
ordering of factor levels). as.logical(grade2=="high") reverses that
order. If you wanted a more R-ish solution, try:

stc1$grade2 <- factor(stc1$grade2, levels=c("low", "high"))


- yes as.factor is superfluous; it is just that it avoids warnings  
sometimes. This can be overlooked.
- I will look into Terry Therneau's answers; he gives a good
explanation of how to obtain the hazard for an individual given a
set of predictors for the Cox model; I will look to see if this
works for survreg, and look into survreg.distributions if it doesn't.

- I'll come back if I can't figure it out.

Thanks again.

Best,

David Biau.


De : David Winsemius 
À : Biau David 
Cc : r help list 
Envoyé le : Sam 13 novembre 2010, 19h 55min 10s
Objet : Re: [R] interpretation of coefficients in survreg AND  
obtaining the hazard function for an individual given a set of  
predictors



On Nov 13, 2010, at 12:51 PM, Biau David wrote:

> Dear R help list,
>
> I am modeling some survival data with coxph and survreg  
(dist='weibull') using

> package survival. I have 2 problems:
>
> 1) I do not understand how to interpret the regression  
coefficients in the
> survreg output and it is not clear, for me, from ?survreg.objects  
how to.


Have you read:

?survreg.distributions  # linked from survreg help

>
> Here is an example of the codes that points out my problem:
> - data is stc1
> - the factor is dichotomous with 'low' and 'high' categories

Not an unambiguous description for the purposes of answering your  
many questions. Please provide data or at the very least: str(stc1)


>
> slr <- Surv(stc1$ti_lr, stc1$ev_lr==1)
>
> mca <- coxph(slr~as.factor(grade2=='high'), data=stc1)

Not sure what that would be returning since we do not know the  
encoding of grade2. If you want an estimate on a subset wouldn't you  
do the subsetting outside of the formula? (You may be reversing the  
order by offering a logical test for grade2.)


> mcb <- coxph(slr~as.factor(grade2), data=stc1)

You have not provided the data or str(stc1), so it is entirely  
possible that as.factor is superfluous in this call.



> mwa <- survreg(slr~as.factor(grade2=='high'), data=stc1,  
dist='weibull',

> scale=0)
> mwb <- survreg(slr~as.factor(grade2), data=stc1, dist='weibull',  
scale=0)

>
>> summary(mca)$coef
>                                      coef exp(coef)  se(coef)         z  Pr(>|z|)
> as.factor(grade2 == "high")TRUE 0.2416562  1.273356 0.2456232 0.9838494 0.3251896
>
>> summary(mcb)$coef
>                            coef exp(coef)  se(coef)          z  Pr(>|z|)
> as.factor(grade2)low -0.2416562 0.7853261 0.2456232 -0.9838494 0.3251896
>
>> summary(mwa)$coef
> (Intercept) as.factor(grade2 == "high")TRUE
>   7.9068380                      -0.4035245
>
>> summary(mwb)$coef
> (Intercept) as.factor(grade2)low
>   7.5033135            0.4035245
>
>
> No problem with the interpretation of the coefs in the Cox model.
> However, I do not understand why
> a) the coefficients in the survreg model are the opposite (negative
> when the other is positive) of what I have in the Cox model? are
> these not the log(HR) given the categories of these variables?

Probably because the order of the factor got reversed when you
changed the covariate to logical and then back to factor.


> b) how come the intercept coefficient changes (the scale parameter  
does not

> change)?
>
> 2) My second question relates to the first.
> a) given a model from survreg, say mwa above, how should i do to  
extract the

> base hazard

Answered by Therneau earlier this week and the next question last  
month:


https://stat.ethz.ch/pipermail/r-help/2010-November/259570.html

https://stat.ethz.ch/pipermail/r-help/2010-October/257941.html


> and the hazard of each patient given a set of predictors? With the
> hazard function for the ith individual in the study given by   
h_i(t) =
> exp(\beta'x_i)*\lambda*\gamma*t^{\gamma-1}, it doesn't look like  
to me that

> predict(mwa, type='linear') is \beta'x_i.


> b) since I need the coefficient intercept from the model to obtain  
the scale

> parameter  to obtain the base hazard function as defined in Collett
> (h_0(t)=\lambda*\gamma*t^{\gamma-1}), I am concerned that this  
coefficient
> intercept changes depending on the reference level of the factor  
entered in the
> model. The change is very important when I have more than one  
predictor in the

> model.
>
> Any help would be greatly appreciated,
>
> David Biau.
>


David Winsemius, MD
West Hartford, CT





David Winsemius, MD
West Hartford, CT


[R] Re : interpretation of coefficients in survreg AND obtaining the hazard function for an individual given a set of predictors

2010-11-13 Thread Biau David
Thank you David for your answer,

- grade2 is a factor with 2 categories: "high" and "low" 
- yes as.factor is superfluous; it is just that it avoids warnings sometimes. 
This can be overlooked.
- I will look into Terry Therneau's answers; he gives a good explanation of how
to obtain the hazard for an individual given a set of predictors for the Cox
model; I will look to see if this works for survreg, and look into
survreg.distributions if it doesn't.
- I'll come back if I can't figure it out.

Thanks again.

Best,

 David Biau.





De : David Winsemius 

Cc : r help list 
Envoyé le : Sam 13 novembre 2010, 19h 55min 10s
Objet : Re: [R] interpretation of coefficients in survreg AND obtaining the
hazard function for an individual given a set of predictors


On Nov 13, 2010, at 12:51 PM, Biau David wrote:

> Dear R help list,
> 
> I am modeling some survival data with coxph and survreg (dist='weibull') using
> package survival. I have 2 problems:
> 
> 1) I do not understand how to interpret the regression coefficients in the
> survreg output and it is not clear, for me, from ?survreg.objects how to.

Have you read:

?survreg.distributions  # linked from survreg help
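(A side note not from the original thread: survreg fits an accelerated-failure-time model, so for the Weibull its coefficients relate to Cox-style log hazard ratios by a sign flip and rescaling, beta_PH = -beta_AFT / scale. A toy sketch with simulated data:)

```r
set.seed(42)
library(survival)
x <- rbinom(200, 1, 0.5)
## Event times with an AFT structure; all observations treated as events.
t <- rweibull(200, shape = 2, scale = exp(1 + 0.5 * x))
fit <- survreg(Surv(t) ~ x, dist = "weibull")
## Approximate log hazard ratio on the Cox scale:
beta.ph <- -coef(fit)["x"] / fit$scale
```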

> 
> Here is an example of the codes that points out my problem:
> - data is stc1
> - the factor is dichotomous with 'low' and 'high' categories

Not an unambiguous description for the purposes of answering your many 
questions. Please provide data or at the very least: str(stc1)

> 
> slr <- Surv(stc1$ti_lr, stc1$ev_lr==1)
> 
> mca <- coxph(slr~as.factor(grade2=='high'), data=stc1)

Not sure what that would be returning since we do not know the encoding of
grade2. If you want an estimate on a subset wouldn't you do the subsetting
outside of the formula? (You may be reversing the order by offering a logical 
test for grade2.)

> mcb <- coxph(slr~as.factor(grade2), data=stc1)

You have not provided the data or str(stc1), so it is entirely possible that 
as.factor is superfluous in this call.


> mwa <- survreg(slr~as.factor(grade2=='high'), data=stc1, dist='weibull',
> scale=0)
> mwb <- survreg(slr~as.factor(grade2), data=stc1, dist='weibull', scale=0)
> 
>> summary(mca)$coef
>                                      coef exp(coef)  se(coef)         z  Pr(>|z|)
> as.factor(grade2 == "high")TRUE 0.2416562  1.273356 0.2456232 0.9838494 0.3251896
> 
>> summary(mcb)$coef
>                            coef exp(coef)  se(coef)          z  Pr(>|z|)
> as.factor(grade2)low -0.2416562 0.7853261 0.2456232 -0.9838494 0.3251896
> 
>> summary(mwa)$coef
> (Intercept) as.factor(grade2 == "high")TRUE
>   7.9068380                      -0.4035245
> 
>> summary(mwb)$coef
> (Intercept) as.factor(grade2)low
>   7.5033135            0.4035245
> 
> 
> No problem with the interpretation of the coefs in the Cox model. However, I do
> not understand:
> a) why the coefficients in the survreg model are the opposite (negative when the
> other is positive) of what I have in the Cox model? Are these not the log(HR)
> given the categories of this variable?

Probably because the order of the factor levels got reversed when you
converted the covariate to a logical and then back to a factor.

> b) how come the intercept coefficient changes (the scale parameter does not
> change)?
> 
> 2) My second question relates to the first.
> a) given a model from survreg, say mwa above, how should I extract the
> base hazard

Answered by Therneau earlier this week and the next question last month:

https://stat.ethz.ch/pipermail/r-help/2010-November/259570.html

https://stat.ethz.ch/pipermail/r-help/2010-October/257941.html
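For the Weibull case, a hedged sketch of how to move from survreg output to Collett's h_0(t) = lambda * gamma * t^(gamma - 1) form, using the standard AFT-to-PH conversion (gamma = 1/scale, lambda = exp(-intercept/scale)); the data here is simulated, with 'fit' standing in for the poster's mwa:

```r
## Hedged sketch with simulated data; 'fit' stands in for the poster's mwa.
library(survival)
set.seed(2)
d   <- data.frame(time = rexp(100, 0.1), x = rbinom(100, 1, 0.5))
fit <- survreg(Surv(time) ~ x, data = d, dist = "weibull")
g   <- 1 / fit$scale                         # Weibull shape (gamma)
lam <- exp(-coef(fit)[["(Intercept)"]] * g)  # baseline rate (lambda)
h0  <- function(t) lam * g * t^(g - 1)       # baseline hazard, Collett's form
## Per-subject hazard: h0(t) * exp(-(beta' x_i) / fit$scale), since
## Cox-scale coefficients are -(survreg coefficients) / scale.
h0(c(1, 5, 10))
```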


> and the hazard of each patient given a set of predictors? With the
> hazard function for the ith individual in the study given by  h_i(t) =
> exp(\beta'x_i)*\lambda*\gamma*t^{\gamma-1}, it doesn't look like to me that
> predict(mwa, type='linear') is \beta'x_i.


> b) since I need the coefficient intercept from the model to obtain the scale
> parameter  to obtain the base hazard function as defined in Collett
> (h_0(t)=\lambda*\gamma*t^{\gamma-1}), I am concerned that this coefficient
> intercept changes depending on the reference level of the factor entered in 
the
> model. The change is very important when I have more than one predictor in the
> model.
> 
> Any help would be greatly appreciated,
> 
> David Biau.
> 


David Winsemius, MD
West Hartford, CT


  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to turn the colour off for lattice graph?

2010-11-13 Thread David Winsemius


On Nov 13, 2010, at 3:16 PM, Shige Song wrote:


With the following code:
---
ltheme <- canonical.theme(color = FALSE) ## in-built B&W theme
ltheme$strip.background$col <- "transparent" ## change strip bg
lattice.options(default.theme = ltheme)  ## set as default
---
I was able to get a nice-looking b&w figure from the X11 display but
still cannot get the same thing from tikz output. I guess it's
something about the connection between lattice and tikz that I was
unable to identify.


You do not seem to be reading for meaning (and are still not offering
a workable example). The plotting function "plot" is not a lattice
function. Why should the output of plot to the tikzDevice be modified
by a call to lattice?
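To make the point concrete, a hedged sketch (mtcars as stand-in data, since none was posted): the B&W theme set via lattice.options() affects lattice calls such as xyplot(), and it is such a call that should be sent to the device:

```r
## Hedged sketch: a lattice call (xyplot), not base plot(), is what the
## B&W theme from the thread affects; tikz("figure.tex") would be used
## in place of pdf() in the poster's workflow.
library(lattice)
ltheme <- canonical.theme(color = FALSE)       # built-in B&W theme
ltheme$strip.background$col <- "transparent"   # change strip background
lattice.options(default.theme = ltheme)
f <- tempfile(fileext = ".pdf")
pdf(f)
print(xyplot(mpg ~ wt | factor(cyl), data = mtcars))  # lattice, so theme applies
dev.off()
file.exists(f)
```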


--
David.


Shige

On Sat, Nov 13, 2010 at 11:31 AM, David Winsemius
 wrote:


On Nov 13, 2010, at 10:07 AM, Shige Song wrote:


Dear All,

I am trying to plot a lattice figure and include it in a LaTeX
document via the TikZDevice package. I think the journal I am
submitting to does not like colour figures, so I need to get rid of all
the colours in the figure. If I directly generate PDF or EPS, the
option "trellis.device(color=FALSE)" works, but when I do:

trellis.device(color=FALSE)
tikz("~/project/figure.tex")
plot(...)


I wouldn't expect to see the output of the plot function affected by
modifications to the trellis device. Have you tried using par()?

(There are plot methods that use lattice plotting functions, and I suppose
tikz might be one, but I do not see that function in my loaded packages,
which include lattice by default.)



dev.off()

The resulting graph has black and white dots and lines but still has
the default colour in the window frame. Any ideas about how to get rid of
that colour as well?

Many thanks.

Best,
Shige




David Winsemius, MD
West Hartford, CT




David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] truncate integers

2010-11-13 Thread Tyler Dean Rudolph
Thank you David; this seems to perform the required task with relative ease,
which is all I could ask for at the moment!

Tyler


On Sat, Nov 13, 2010 at 1:32 PM, David Winsemius wrote:

>
> On Nov 13, 2010, at 12:34 PM, T.D. Rudolph wrote:
>
>
>> Is there any really easy way to truncate integers with several consecutive
>> digits without rounding and without converting from numeric to character
>> (using strsplit, etc.)??  Something along these lines:
>>
>> e.g. = 456
>>
>> truncfun(e.g., location=1)
>> = 4
>>
>> truncfun(e.g., location=1:2)
>> = 45
>>
>> truncfun(e.g., location=2:3)
>> = 56
>>
>> truncfun(e.g., location=3)
>> = 6
>>
>> It's one thing using floor(x/100) to get 4 or floor(x/10) to get 45, but
>> I'd
>> like to return only 5 or only 6, for example, in cases where I don't know
>> what the numbers are going to be.
>>
>> I'm sure there is something very logical that I am missing, but my code is
>> getting too complicated and I can't seem to find a parsimonious solution.
>>
>
> Modulo arithmetic? (Not the same location arguments as you specified,
> but this should give you ideas to work with.)
>
>  trncint <- function(x, left=0,length=0)  (x %% 10^(left) ) %/%
> 10^(left-length)
>
> > trncint(456, 2,2)
> [1] 56
> > trncint(456, 3,2)
> [1] 45
> > trncint(456, 3,1)
> [1] 4
> > trncint(456, 2,1)
> [1] 5
> > trncint(456, 3,1)
> [1] 4
>
> > trncint(456, 1,1)
> [1] 6
>
> --
>
> David Winsemius, MD
> West Hartford, CT
>
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Clark method enhancement to ChainLadder package

2010-11-13 Thread Daniel Murphy
Greetings,

Version 0.1.4-0 of the ChainLadder package is available via CRAN and
http://code.google.com/p/chainladder/

The ChainLadder package, which is focused on claims reserving methods
typically carried out by property/casualty insurance actuaries, has recently
been enhanced to implement the methods in David Clark's 2003 CAS (Casualty
Actuarial Society) *Forum* paper "LDF Curve-Fitting and Stochastic
Reserving: A Maximum Likelihood Approach." Clark's methods are ready to be
put to use on a wide variety of "triangles".

To see a demonstration of the example in Clark's paper (ChainLadder's
"GenIns" is the same dataset), run

> library(ChainLadder)
> demo(clarkDemo)

This will run the "LDF Method" with the loglogistic growth function limited
to 20 years of development, per Clark's example. It will also run the
"CapeCod Method" with the weibull growth function. In both cases, per
Clark's recommendation, the option is selected to fit the growth function to
the average date of loss of each accident year (a.k.a., year of origin). The
demo output consists of two displays: the table of expected values, standard
errors and CV's as on page 65 of the paper, and four residual plots.

Type ?ClarkLDF and ?ClarkCapeCod for help on running Clark's two methods.

Please contact the author,  Daniel Murphy at chiefmurph _at_ yahoo.com, with
questions, suggestions, ideas or problems.

If anyone is interested in collaborating to develop additional ChainLadder
package functionality, do not hesitate to contact Dan or any of the other
ChainLadder authors. You don't have to be an R expert or an actuary to be
helpful!

More details:

Version 0.1.4-0

NEW FEATURES

  o New implementation of the methods in David Clark's "LDF Curve Fitting"
paper in the 2003 Forum by Daniel Murphy.
- Includes LDF and CapeCod methods (functions 'ClarkLDF' and
'ClarkCapeCod', respectively)
- Programmed to handle loglogistic and weibull growth functions
- Printing an object returned by the function results in a table
similar to that on p. 65 of the paper
- Plotting such an object results in four residual plots, including
a Q-Q plot with the results of the Shapiro-Wilk test

Cheers,
The ChainLadder Team


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to turn the colour off for lattice graph?

2010-11-13 Thread Shige Song
With the following code:
---
ltheme <- canonical.theme(color = FALSE) ## in-built B&W theme
ltheme$strip.background$col <- "transparent" ## change strip bg
lattice.options(default.theme = ltheme)  ## set as default
---
I was able to get a nice-looking b&w figure from the X11 display but
still cannot get the same thing from tikz output. I guess it's
something about the connection between lattice and tikz that I was
unable to identify.

Shige

On Sat, Nov 13, 2010 at 11:31 AM, David Winsemius
 wrote:
>
> On Nov 13, 2010, at 10:07 AM, Shige Song wrote:
>
>> Dear All,
>>
>> I am trying to plot a lattice figure and include it in a LaTeX
>> document via the TikZDevice package. I think the journal I am
>> submitting to does not like colour figures, so I need to get rid of all
>> the colours in the figure. If I directly generate PDF or EPS, the
>> option "trellis.device(color=FALSE)" works, but when I do:
>>
>> trellis.device(color=FALSE)
>> tikz("~/project/figure.tex")
>> plot(...)
>
> I wouldn't expect to see the output of the plot function affected by
> modifications to the trellis device. Have you tried using par()?
>
> (There are plot methods that use lattice plotting functions, and I suppose
> tikz might be one, but I do not see that function in my loaded packages,
> which include lattice by default.)
>
>
>> dev.off()
>>
>> The resulting graph has black and white dots and lines but still has
>> the default colour in the window frame. Any ideas about how to get rid of
>> that colour as well?
>>
>> Many thanks.
>>
>> Best,
>> Shige
>
>
>
> David Winsemius, MD
> West Hartford, CT
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] aggregate with missing values, data.frame vs formula

2010-11-13 Thread David Freedman

It seems that the formula and data.frame forms of aggregate handle missing
values differently.  For example, 

(d=data.frame(sex=rep(0:1,each=3),
wt=c(100,110,120,200,210,NA),ht=c(10,20,NA,30,40,50)))
x1=aggregate(d, by = list(d$sex), FUN = mean);
names(x1)[3:4]=c('list.wt','list.ht')
x2=aggregate(cbind(wt,ht)~sex,FUN=mean,data=d);
names(x2)[2:3]=c('form.wt','form.ht')
cbind(x1,x2)

 Group.1 sex list.wt list.ht sex form.wt form.ht
1   0   0 110  NA   0 105  15
2   1   1  NA  401 205  35

So, the data.frame form gives an NA if there are missing values in
the group for the variable with missing values. But the formula form
deletes the entire row (missing and non-missing values) if any of the values
are missing. Is this what was intended, or the best option?
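One way to reconcile the two forms (a sketch, not a statement of intent by the aggregate authors): pass na.rm = TRUE so the data.frame method averages the available values per column, or relax na.action in the formula method:

```r
## Reproducing the poster's data:
d <- data.frame(sex = rep(0:1, each = 3),
                wt  = c(100, 110, 120, 200, 210, NA),
                ht  = c(10, 20, NA, 30, 40, 50))
## data.frame method, per-column means over non-missing values:
bycol <- aggregate(d[c("wt", "ht")], by = list(sex = d$sex),
                   FUN = mean, na.rm = TRUE)
bycol  # sex 0: wt 110, ht 15; sex 1: wt 205, ht 40
## formula method keeps rows with NAs when na.action is relaxed;
## na.rm = TRUE is forwarded to mean() via the ... argument:
aggregate(cbind(wt, ht) ~ sex, data = d, FUN = mean,
          na.rm = TRUE, na.action = na.pass)
```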

thanks, david freedman
-- 
View this message in context: 
http://r.789695.n4.nabble.com/aggregate-with-missing-values-data-frame-vs-formula-tp3041198p3041198.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] LAPACK lib problem, lme4, lastest R, Linux

2010-11-13 Thread Ron Burns
I just dumped Vista off a laptop and installed Ubuntu 10.10 (latest
release) as the single operating system. I did all of the updates and
then installed emacs and ess. Next I installed R by following the
usual instructions on the CRAN site. At this point all is working. I am
now in the process of installing the packages that I normally have
installed. I am having a problem with the LAPACK libraries when
installing lme4, but the problem is not unique to lme4. It is either a
broken link to, or missing, LAPACK libs. I am at a loss as to what I am
doing wrong since I have started with a completely clean machine and am
not trying to do anything special.


---HERE is the R startup:
  (ron) R

R version 2.12.0 (2010-10-15)
Copyright (C) 2010 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: i686-pc-linux-gnu (32-bit)

---HERE is the final output from trying to install lme4:
* installing *source* package lme4 ...
** libs
gcc -I/usr/share/R/include   -I"/usr/lib/R/library/Matrix/include" 
-I"/usr/lib/R/library/stats/include"   -fpic  -std=gnu99 -O3 -pipe  -g 
-c init.c -o init.o
gcc -I/usr/share/R/include   -I"/usr/lib/R/library/Matrix/include" 
-I"/usr/lib/R/library/stats/include"   -fpic  -std=gnu99 -O3 -pipe  -g 
-c lmer.c -o lmer.o
gcc -I/usr/share/R/include   -I"/usr/lib/R/library/Matrix/include" 
-I"/usr/lib/R/library/stats/include"   -fpic  -std=gnu99 -O3 -pipe  -g 
-c local_stubs.c -o local_stubs.o
gcc -shared -o lme4.so init.o lmer.o local_stubs.o -llapack -lf77blas 
-latlas -lgfortran -lm -L/usr/lib/R/lib -lR

/usr/bin/ld: cannot find -lf77blas
/usr/bin/ld: cannot find -latlas
collect2: ld returned 1 exit status
make: *** [lme4.so] Error 1
ERROR: compilation failed for package lme4
* removing /usr/local/lib/R/site-library/lme4

-BUT
[ 48 ]  (ron) /usr/bin/R CMD config LAPACK_LIBS
-llapack

-AND
 (ron) dpkg -l | grep lapack
ii  liblapack-dev3.2.1-8 
   library of linear algebra routines 3 - static version
ii  liblapack3gf 3.2.1-8 
   library of linear algebra routines 3 - shared version

[ 44 ]  (ron)

I did see a message indicating "Also do 'ldd 
/usr/lib/R/bin/exec/R' and make sure you do _not_ have a depends on 
Rlapack.so.  ":


[ 47 ]  (ron) ldd /usr/lib/R/bin/exec/R
linux-gate.so.1 =>  (0x00533000)
libR.so => /usr/lib/libR.so (0x0073e000)
libc.so.6 => /lib/libc.so.6 (0x0011)
libf77blas.so.3gf => /usr/lib/libf77blas.so.3gf (0x00ed4000)
libatlas.so.3gf => /usr/lib/libatlas.so.3gf (0x0026e000)
libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0x00cca000)
libm.so.6 => /lib/libm.so.6 (0x0065f000)
libreadline.so.6 => /lib/libreadline.so.6 (0x00534000)
libpcre.so.3 => /lib/libpcre.so.3 (0x00f5b000)
liblzma.so.2 => /usr/lib/liblzma.so.2 (0x00568000)
libz.so.1 => /lib/libz.so.1 (0x0058b000)
libdl.so.2 => /lib/libdl.so.2 (0x005a)
/lib/ld-linux.so.2 (0x00e67000)
libcblas.so.3gf => /usr/lib/libcblas.so.3gf (0x005a4000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x005c4000)
libpthread.so.0 => /lib/libpthread.so.0 (0x005e)
libncurses.so.5 => /lib/libncurses.so.5 (0x005fa000)

[ 41 ]  (ron) ls /usr/lib/libatlas*
/usr/lib/libatlas.so.3gf@
[ 42 ]  (ron) ls /usr/lib/libf77blas*
/usr/lib/libf77blas.so.3gf@

So these are there and linked OK, and there is no Rlapack.so. The
links look OK.


Should I be linking to static libs? If so, they do not seem to exist
anywhere on my machine. (A 'find . -name "*f77blas*"' (or on atlas)
turned up .so files only.)


I am at a loss as to where to go from here.
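One hedged guess, not from the thread: on Ubuntu, "cannot find -lf77blas" at link time usually means the unversioned .so symlinks supplied by the ATLAS development package are missing, so something along these lines may help (to be verified against the actual system):

```shell
# Hypothetical fix: the -dev package provides the unversioned
# libf77blas.so / libatlas.so names the linker is looking for.
sudo apt-get install libatlas-base-dev
# then, inside R, retry:  install.packages("lme4")
```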

Thank you all for your consideration.
Ron Burns

--
R. R. Burns
Physicist (Retired)
Oceanside, CA

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



Re: [R] Summing functions of lists

2010-11-13 Thread Phil Spector

Does this version of y do what you want?

y=function(j)sum(sapply(1:3,function(i)x(i,j)))
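Run against the data from the question, the sapply version behaves as intended (sapply loops over i, avoiding the recursive indexing of r[[i]] with a vector):

```r
## Demonstrating the sapply version on the poster's data:
r <- list(1:3, 1:5, 1:2)
x <- function(i, j) sum(j * r[[i]])          # x(i, j) = j * sum(r[[i]])
y <- function(j) sum(sapply(seq_along(r), function(i) x(i, j)))
y(1)  # 6 + 15 + 3 = 24
y(2)  # 48
```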


- Phil Spector
 Statistical Computing Facility
 Department of Statistics
 UC Berkeley
 spec...@stat.berkeley.edu


On Sat, 13 Nov 2010, Hiba Baroud wrote:



Hi

I'm trying to sum functions of lists with different lengths. Here is a 
simplified example of the problem:

r=list(1:3,1:5,1:2)

r
[[1]]
[1] 1 2 3
[[2]]
[1] 1 2 3 4 5
[[3]]
[1] 1 2

x=function(i,j) sum(j*r[[i]])# x is a function of 
two parameters: i & j

y=function(j) # y is the 
sum of x over i
+ {
+ s=seq(from=1,to=3,by=1)
+ sum(x(s,j))
+ }

y(1)
Error in r[[i]] : recursive indexing failed at level 2

The error is clearly due to the lists; I actually tried this code with 
functions of vectors and scalars and it worked perfectly.
I tried to use means of summing lists by recursion but it does not work with 
functions of lists, for e.g.:

y=function(j) x(1,j)
for(i in 2:3)
+ {
+ y=function(j) y(j)+x(i,j)
+ }

y(1)
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?

Is there a way to perform this summation?

Thanks!

H.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] truncate integers

2010-11-13 Thread David Winsemius


On Nov 13, 2010, at 12:34 PM, T.D. Rudolph wrote:



Is there any really easy way to truncate integers with several  
consecutive
digits without rounding and without converting from numeric to  
character

(using strsplit, etc.)??  Something along these lines:

e.g. = 456

truncfun(e.g., location=1)
= 4

truncfun(e.g., location=1:2)
= 45

truncfun(e.g., location=2:3)
= 56

truncfun(e.g., location=3)
= 6

It's one thing using floor(x/100) to get 4 or floor(x/10) to get 45,  
but I'd
like to return only 5 or only 6, for example, in cases where I don't  
know

what the numbers are going to be.

I'm sure there is something very logical that I am missing, but my  
code is
getting too complicated and I can't seem to find a parsimonious  
solution.


Modulo arithmetic? (Not the same location arguments as you specified,
but this should give you ideas to work with.)


 trncint <- function(x, left=0,length=0)  (x %% 10^(left) ) %/%  
10^(left-length)


> trncint(456, 2,2)
[1] 56
> trncint(456, 3,2)
[1] 45
> trncint(456, 3,1)
[1] 4
> trncint(456, 2,1)
[1] 5
> trncint(456, 3,1)
[1] 4

> trncint(456, 1,1)
[1] 6
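A hedged variant (not part of the original reply) that matches the 'location' interface from the question while staying in integer arithmetic:

```r
## Hypothetical helper matching the question's interface; 'location'
## indexes digits from the left (1 = leading digit).
truncfun <- function(x, location) {
  ndig   <- floor(log10(x)) + 1                      # digit count of x
  digits <- (x %/% 10^(ndig - seq_len(ndig))) %% 10  # digits, left to right
  sum(digits[location] * 10^(rev(seq_along(location)) - 1))
}
truncfun(456, 1)    # 4
truncfun(456, 1:2)  # 45
truncfun(456, 2:3)  # 56
truncfun(456, 3)    # 6
```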

--

David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Summing functions of lists

2010-11-13 Thread Hiba Baroud

Hi
 
I'm trying to sum functions of lists with different lengths. Here is a 
simplified example of the problem:
 
r=list(1:3,1:5,1:2)
 
r
[[1]]
[1] 1 2 3
[[2]]
[1] 1 2 3 4 5
[[3]]
[1] 1 2
 
x=function(i,j) sum(j*r[[i]])# x is a function 
of two parameters: i & j
 
y=function(j) # y is the 
sum of x over i
+ {
+ s=seq(from=1,to=3,by=1)
+ sum(x(s,j))
+ }

y(1)
Error in r[[i]] : recursive indexing failed at level 2
 
The error is clearly due to the lists; I actually tried this code with 
functions of vectors and scalars and it worked perfectly.
I tried to use means of summing lists by recursion but it does not work with 
functions of lists, for e.g.:
 
y=function(j) x(1,j)
for(i in 2:3)
+ {
+ y=function(j) y(j)+x(i,j)
+ }

y(1)
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?

Is there a way to perform this summation?
 
Thanks!
 
H. 
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RMySQL on Windows 2008 64 Bit -Help!

2010-11-13 Thread Uwe Ligges

Well, two comments:

1. you bought a commercial version of R from Revolution Analytics, hence 
you probably want to rely on the Revolution service?



2. Nobody looked at the error message you got, let me cite the relevant 
two lines:


>>> checking for $MYSQL_HOME... C:/Program Files/MySQL/MySQL Server 5.0
>>> test: Files/MySQL/MySQL: unknown operand

This shows you have problems with the blanks in the path name.
Either you forgot to quote the path when setting the environment
variable, or the RMySQL package maintainer has not allowed for blanks
in the path.
I'd suggest quoting the path, or using the 8.3 file name convention,
as a workaround.
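A hedged sketch of the 8.3 workaround (Windows-only; shortPathName() in base R returns the short, space-free form of a path):

```r
## Windows-only sketch: derive the space-free 8.3 form instead of
## guessing it, then set the variable before installing RMySQL.
mysql_home <- shortPathName("C:/Program Files/MySQL/MySQL Server 5.0")
Sys.setenv(MYSQL_HOME = mysql_home)
Sys.getenv("MYSQL_HOME")   # should contain no spaces
```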


Best,
Uwe Ligges





On 13.11.2010 14:52, Spencer Graves wrote:

Hello:


I suggest try R-SIG-DB email list. They focus specifically on databases,
and you might get a better response there.


Sorry I can't help more.
Spencer


On 11/13/2010 1:12 AM, Santosh Srinivas wrote:

I could do that but will have to change all my code.

It would be great if I could get RMySQL on the 64 bit machine.

-Original Message-
From: Ajay Ohri [mailto:ohri2...@gmail.com]
Sent: 13 November 2010 14:13
To: Santosh Srinivas
Cc: r-help@r-project.org
Subject: Re: [R] RMySQL on Windows 2008 64 Bit -Help!

did you try the RODBC package as well.

Regards

Ajay

Websites-
http://decisionstats.com
http://dudeofdata.com


Linkedin- www.linkedin.com/in/ajayohri





On Sat, Nov 13, 2010 at 9:22 AM, Santosh Srinivas
 wrote:

Dear Group,

I'm having lots of problems getting RMySQL on a 64 bit machine. I
followed
all instructions available but couldn't get it working yet! Please help.
See the output below.

I did a install of RMySQL binary from the revolution cran source. It
seems
to have unpacked fine but gives this error when I call RMySQL
Error: package 'RMySQL' is not installed for 'arch=x64'


I set this environment variable on the windows path

Sys.getenv('MYSQL_HOME')

MYSQL_HOME
"C:/Program Files/MySQL/MySQL Server 5.0"


install.packages('RMySQL',type='source')

trying URL
'http://cran.revolutionanalytics.com/src/contrib/RMySQL_0.7-5.tar.gz'
Content type 'application/x-gzip' length 160769 bytes (157 Kb)
opened URL
downloaded 157 Kb

* installing *source* package 'RMySQL' ...
checking for $MYSQL_HOME... C:/Program Files/MySQL/MySQL Server 5.0
test: Files/MySQL/MySQL: unknown operand
ERROR: configuration failed for package 'RMySQL'
* removing 'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'
* restoring previous
'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'

The downloaded packages are in



'C:\Users\Administrator\AppData\Local\Temp\2\RtmpvGgrzb\downloaded_packages'


Warning message:
In install.packages("RMySQL", type = "source") :
installation of package 'RMySQL' had non-zero exit status

sessionInfo()

R version 2.11.1 (2010-05-31)
x86_64-pc-mingw32

locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United
States.1252 LC_MONETARY=English_United States.1252 LC_NUMERIC=C

[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] Revobase_4.0.0 RevoScaleR_1.0-0 lattice_0.18-8

loaded via a namespace (and not attached):
[1] grid_2.11.1 tools_2.11.1

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide

http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] interpretation of coefficients in survreg AND obtaining the hazard function for an individual given a set of predictors

2010-11-13 Thread Biau David
Dear R help list,

I am modeling some survival data with coxph and survreg (dist='weibull') using 
package survival. I have 2 problems:

1) I do not understand how to interpret the regression coefficients in the 
survreg output and it is not clear, for me, from ?survreg.objects how to.

Here is an example of the codes that points out my problem:
- data is stc1
- the factor is dichotomous with 'low' and 'high' categories

slr <- Surv(stc1$ti_lr, stc1$ev_lr==1)

mca <- coxph(slr~as.factor(grade2=='high'), data=stc1)
mcb <- coxph(slr~as.factor(grade2), data=stc1)
mwa <- survreg(slr~as.factor(grade2=='high'), data=stc1, dist='weibull', 
scale=0)
mwb <- survreg(slr~as.factor(grade2), data=stc1, dist='weibull', scale=0)

> summary(mca)$coef
 coef 
exp(coef)  se(coef) z  Pr(>|z|)
as.factor(grade2 == "high")TRUE 0.2416562  1.273356 0.2456232 
0.9838494  0.3251896

> summary(mcb)$coef
   coef exp(coef)  
se(coef) z Pr(>|z|)
as.factor(grade2)low -0.2416562 0.7853261 0.2456232 -0.9838494 
0.3251896

> summary(mwa)$coef
(Intercept) as.factor(grade2 == "high")TRUE 
7.9068380   -0.4035245 

> summary(mwb)$coef
(Intercept) as.factor(grade2)low 
7.5033135   0.4035245 


No problem with the interpretation of the coefs in the Cox model. However, I do
not understand:
a) why the coefficients in the survreg model are the opposite (negative when the
other is positive) of what I have in the Cox model? Are these not the log(HR)
given the categories of this variable?
b) how come the intercept coefficient changes (the scale parameter does not
change)?

2) My second question relates to the first.
a) given a model from survreg, say mwa above, how should I extract the
base hazard and the hazard of each patient given a set of predictors? With the 
hazard function for the ith individual in the study given by  h_i(t) = 
exp(\beta'x_i)*\lambda*\gamma*t^{\gamma-1}, it doesn't look like to me that 
predict(mwa, type='linear') is \beta'x_i.
b) since I need the coefficient intercept from the model to obtain the scale 
parameter  to obtain the base hazard function as defined in Collett 
(h_0(t)=\lambda*\gamma*t^{\gamma-1}), I am concerned that this coefficient 
intercept changes depending on the reference level of the factor entered in the 
model. The change is very important when I have more than one predictor in the 
model.

Any help would be greatly appreciated,

David Biau.



  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem retrieving data from R2InBUGS

2010-11-13 Thread Uwe Ligges

This is strange and does not happen for me.

Anyway, if it is a problem at all, then I doubt it is in R2WinBUGS: 
R2WinBUGS asks WinBUGS to write the coda files and afterwards calls the 
following function


read.bugs <- function(codafiles, ...){
if(!is.R() && !require("coda"))
stop("package 'coda' is required to use this function")
mcmc.list(lapply(codafiles, read.coda,
 index.file = file.path(dirname(codafiles[1]), 
"codaIndex.txt"),

 ...))
}

that keeps the order from the codaIndex file and just calls other 
functions such as read.coda and mcmc.list() from the coda package.
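Downstream, the text-sorted columns can be put back into numeric index order; a hedged sketch on a few sample names (the regular expressions are assumptions about the "name[index]" pattern in the coda files):

```r
## Reordering text-sorted parameter names such as those in the post:
nms  <- c("diff1[1]", "diff1[10]", "diff1[100]", "diff1[11]", "diff1[2]")
base <- sub("\\[.*$", "", nms)                            # parameter name part
idx  <- as.numeric(sub(".*\\[([0-9]+)\\]$", "\\1", nms))  # bracket index
nms[order(base, idx)]
# "diff1[1]"   "diff1[2]"   "diff1[10]"  "diff1[11]"  "diff1[100]"
```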


Anyway, if the problem persists for you, you may want to send me an
example that is reproducible, i.e. including data, model files, and
what your function calls in R were.


Best,
Uwe Ligges






On 12.11.2010 20:55, Barth B. Riley wrote:

Dear list

I am calling the function bugs() provided by R2WinBUGS to perform an IRT 
analysis. The function returns a set of estimated parameters over n 
replications/iterations. For each replication, two sets of person measures 
(theta1 and theta2) and two sets of item difficulty parameters (diff1 and 
diff2) are returned. The code used to obtain these estimates is as follows:

sim<- 
bugs(data,init,model.file="drift.bugs",parameters=c("theta1","theta2","diff1","diff2"),n.chains=n.chains,n.iter=n.reps,n.burnin=1000,bin=1,n.thin=1,digits=3,
 codaPkg=T)
sim.list<- read.bugs(sim)

However, when I read the coda files using read.bugs, the order of the columns in 
the parameter matrix is as follows:

   [1] "deviance"    "diff1[1]"    "diff1[10]"   "diff1[100]"  "diff1[11]"   "diff1[12]"   "diff1[13]"
   [8] "diff1[14]"   "diff1[15]"   "diff1[16]"   "diff1[17]"   "diff1[18]"   "diff1[19]"   "diff1[2]"
 ...
 ...               "diff2[10]"   "diff2[100]"  "diff2[11]"
 [106] "diff2[12]"   "diff2[13]"   "diff2[14]"   "diff2[15]"   "diff2[16]"   "diff2[17]"   "diff2[18]"
 [113] "diff2[19]"   "diff2[2]"    "diff2[20]"   "diff2[21]"   "diff2[22]"   "diff2[23]"   "diff2[24]"
 ...
 ...               "theta1[1]"   "theta1[10]"
 [204] "theta1[100]" "theta1[101]" "theta1[102]" "theta1[103]" "theta1[104]" "theta1[105]" "theta1[106]"
 [211] "theta1[107]" "theta1[108]" "theta1[109]" "theta1[11]"  "theta1[110]" "theta1[111]" "theta1[112]"

The columns, in other words, are not in numerical order according to the index 
of the item difficulty or person measure parameter. Rather they appear to be in 
text-sorted order (e.g., 1, 10, 100...). Any ideas as to what might be going on?
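If the text-sorted order is a nuisance downstream, the columns can be put back into numerical order by splitting each name into its base and bracketed index. A base-R sketch (assuming `sim.list` is the mcmc.list returned by read.bugs; the short `nms` vector here just stands in for the real column names):

```r
# Reorder BUGS-style column names ("diff1[10]" etc.) numerically rather than
# lexicographically; names without an index (e.g. "deviance") sort first.
nms  <- c("deviance", "diff1[1]", "diff1[10]", "diff1[100]", "diff1[11]", "diff1[2]")
base <- sub("\\[.*", "", nms)                                  # variable-name part
idx  <- suppressWarnings(as.numeric(sub(".*\\[([0-9]+)\\]$", "\\1", nms)))
idx[is.na(idx)] <- 0                                           # unindexed names first
ord  <- order(base, idx)
nms[ord]
# then apply the same permutation to each chain, e.g. sim.list[[1]][, ord]
```

A ready-made alternative, if you don't mind a dependency, would be something like gtools::mixedsort on the column names.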


Thanks

Barth

PRIVILEGED AND CONFIDENTIAL INFORMATION
This transmittal and any attachments may contain PRIVILEGED AND
CONFIDENTIAL information and is intended only for the use of the
addressee. If you are not the designated recipient, or an employee
or agent authorized to deliver such transmittals to the designated
recipient, you are hereby notified that any dissemination,
copying or publication of this transmittal is strictly prohibited. If
you have received this transmittal in error, please notify us
immediately by replying to the sender and delete this copy from your
system. You may also call us at (309) 827-6026 for assistance.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] truncate integers

2010-11-13 Thread T.D. Rudolph

Is there any really easy way to extract several consecutive digits from an
integer without rounding and without converting from numeric to character
(using strsplit, etc.)?  Something along these lines:

e.g. = 456

truncfun(e.g., location=1)
= 4

truncfun(e.g., location=1:2)
= 45

truncfun(e.g., location=2:3)
= 56

truncfun(e.g., location=3)
= 6

It's one thing using floor(x/100) to get 4 or floor(x/10) to get 45, but I'd
like to return only 5 or only 6, for example, in cases where I don't know
what the numbers are going to be.

I'm sure there is something very logical that I am missing, but my code is
getting too complicated and I can't seem to find a parsimonious solution.
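One parsimonious base-R possibility, staying entirely numeric: integer division drops the digits to the right of the wanted span, and modulo drops those to its left. A sketch (the helper name and the `location` semantics follow the examples above; assumes positive integers):

```r
# Extract the digits of x at positions 'location' (counted from the left),
# using only %/% and %% -- no character conversion.
truncfun <- function(x, location) {
  n  <- floor(log10(x)) + 1                  # number of digits in x
  lo <- min(location)
  hi <- max(location)
  (x %/% 10^(n - hi)) %% 10^(hi - lo + 1)    # strip right of hi, then left of lo
}
truncfun(456, 1)      # 4
truncfun(456, 1:2)    # 45
truncfun(456, 2:3)    # 56
truncfun(456, 3)      # 6
```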

Tyler
-- 
View this message in context: 
http://r.789695.n4.nabble.com/truncate-integers-tp3041086p3041086.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R2WinBUGS: Error in bugs(program="openbugs") but not openbugs()

2010-11-13 Thread Uwe Ligges



On 13.11.2010 18:05, Uwe Ligges wrote:

I found the problem which is a scoping issue in BRugs::bugsData() and
will fix it for the next release.

For now, you can work around it by giving your parameters (in this case 
particularly beta) names that differ from R function names 
(e.g., call it beta1).



Errr, nonsense: much better is to pass the data directly rather than 
just passing the names.


Just replace the line where you define data by:

data <- list(y=y, n=n, alpha=alpha, beta=beta)

Best,
Uwe Ligges






Best wishes,
Uwe






On 11.11.2010 17:48, Yuelin Li wrote:

I get an error when I call bugs(..., program="OpenBUGS",
bugs.directory="c:/Program Files/OpenBUGS/OpenBUGS312"),
expecting, as suggested in help(bugs), that it would fit the
model with openbugs() via BRugs.

> help(bugs)
... either winbugs/WinBUGS or openbugs/OpenBUGS, the latter
makes use of function openbugs and requires the CRAN package
BRugs. ...

However, it works fine when I directly call openbugs(). All
other things are exactly the same. It seems that (in my
settings) bugs(program="OpenBUGS") works differently than
openbugs(). Am I doing something wrong with bugs()? Or is
there something wrong with my OpenBUGS installation?

I am using R-2.12.0, R2WinBUGS 2.1-16 (2009-11-06), OpenBUGS
3.1.2 rev 668 (2010-09-28), and BRugs 0.5-3 (2009-11-06) on
a Windows XP machine.

Yuelin.

--- R file -
require(R2WinBUGS)
require(BRugs)
# Example in Albert (2007). Bayesian Computation with R. Springer.
# pp. 237-238. Prior = beta(0.5, 0.5), observe Binom(n, p)
# y=7 successes out of a sample of n=50. Estimate p.
y <- 7
n <- 50
alpha <- 1.0
beta <- 1.0
data <- list("y", "n", "alpha", "beta")
inits <- function() { list(p = runif(1)) }
param <- "p"
# this works
Albert.bugs <- openbugs(data=data, inits=inits,
    parameters.to.save=param, model.file="C:/tryR/WinBUGS/Albert11.txt",
    n.chains=3, n.iter=500)
print(Albert.bugs, digits.summary = 4)
# this fails
Albert.bugs <- bugs(data=data, inits=inits, parameters.to.save=param,
    model.file="C:/tryR/WinBUGS/Albert11.txt", n.chains=3, n.iter=500,
    program="OpenBUGS")

--- BUGS file: Albert11.txt --
model
{
y ~ dbin(p, n)
p ~ dbeta( alpha, beta )
}



=

Please note that this e-mail and any files transmitted with it may be
privileged, confidential, and protected from disclosure under
applicable law. If the reader of this message is not the intended
recipient, or an employee or agent responsible for delivering this
message to the intended recipient, you are hereby notified that any
reading, dissemination, distribution, copying, or other use of this
communication or any of its attachments is strictly prohibited. If
you have received this communication in error, please notify the
sender immediately by replying to this message and deleting this
message, any attachments, and all copies and backups from your
computer.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R2WinBUGS: Error in bugs(program="openbugs") but not openbugs()

2010-11-13 Thread Uwe Ligges
I found the problem which is a scoping issue in BRugs::bugsData() and 
will fix it for the next release.


For now, you can work around it by giving your parameters (in this case 
particularly beta) names that differ from R function names 
(e.g., call it beta1).


Best wishes,
Uwe






On 11.11.2010 17:48, Yuelin Li wrote:

I get an error when I call bugs(..., program="OpenBUGS",
bugs.directory="c:/Program Files/OpenBUGS/OpenBUGS312"),
expecting, as suggested in help(bugs), that it would fit the
model with openbugs() via BRugs.

  >  help(bugs)
  ... either winbugs/WinBUGS or openbugs/OpenBUGS, the latter
  makes use of function openbugs and requires the CRAN package
  BRugs. ...

However, it works fine when I directly call openbugs(). All
other things are exactly the same. It seems that (in my
settings) bugs(program="OpenBUGS") works differently than
openbugs(). Am I doing something wrong with bugs()? Or is
there something wrong with my OpenBUGS installation?

I am using R-2.12.0, R2WinBUGS 2.1-16 (2009-11-06), OpenBUGS
3.1.2 rev 668 (2010-09-28), and BRugs 0.5-3 (2009-11-06)  on
a Windows XP machine.

Yuelin.

--- R file -
require(R2WinBUGS)
require(BRugs)
# Example in Albert (2007).  Bayesian Computation with R.  Springer.
# pp. 237-238.  Prior = beta(0.5, 0.5), observe Binom(n, p)
# y=7 successes out of a sample of n=50.  Estimate p.
y <- 7
n <- 50
alpha <- 1.0
beta <- 1.0
data <- list("y", "n", "alpha", "beta")
inits <- function() { list(p = runif(1)) }
param <- "p"
# this works
Albert.bugs <- openbugs(data=data, inits=inits, parameters.to.save=param,
    model.file="C:/tryR/WinBUGS/Albert11.txt", n.chains=3, n.iter=500)
print(Albert.bugs, digits.summary = 4)
# this fails
Albert.bugs <- bugs(data=data, inits=inits, parameters.to.save=param,
    model.file="C:/tryR/WinBUGS/Albert11.txt", n.chains=3, n.iter=500,
    program="OpenBUGS")

--- BUGS file: Albert11.txt --
model
{
y ~ dbin(p, n)
p ~ dbeta( alpha, beta )
}



  =

  Please note that this e-mail and any files transmitted with it may be
  privileged, confidential, and protected from disclosure under
  applicable law. If the reader of this message is not the intended
  recipient, or an employee or agent responsible for delivering this
  message to the intended recipient, you are hereby notified that any
  reading, dissemination, distribution, copying, or other use of this
  communication or any of its attachments is strictly prohibited.  If
  you have received this communication in error, please notify the
  sender immediately by replying to this message and deleting this
  message, any attachments, and all copies and backups from your
  computer.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Question about the "effects" package

2010-11-13 Thread Shige Song
Dear John,

This is exactly what I need, thank you so much for the tip (and thank
you so much for this wonderful package too).

Best,
Shige

On Sat, Nov 13, 2010 at 10:50 AM, John Fox  wrote:
> Dear Shige,
>
> As documented in ?plot.eff, the default is to plot on the scale of the
> linear predictor (the logit scale, for a logit model), which preserves the
> linearity of the model (which, I would think, is generally desirable), but
> to label the axis on the scale of the response (the probability scale). As
> is also documented, setting rescale.axis=FALSE will plot on the scale of the
> response.
>
> I hope this helps,
>  John
>
> 
> John Fox
> Senator William McMaster
>  Professor of Social Statistics
> Department of Sociology
> McMaster University
> Hamilton, Ontario, Canada
> web: socserv.mcmaster.ca/jfox
>
>
>> -Original Message-
>> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On
>> Behalf Of Shige Song
>> Sent: November-13-10 10:14 AM
>> To: r-help Help
>> Subject: [R] Question about the "effects" package
>>
>> Dear All,
>>
>> I am using the "effects" package to produce predicted probability from
>> a logistic regression. The graph looks really good. I soon realized
>> that the y-axis is not spaced equally. For example, in my case, the
>> distance between 0.02 and 0.04 is much greater than that between 0.06
>> and 0.08. I can guess a reason for this, but unfortunately, this
>> distorts my story. Is there a way to change this and make the y-axis
>> equally spaced?
>>
>> Many thanks.
>>
>> Best,
>> Shige
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] (no subject)

2010-11-13 Thread Uwe Ligges


 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html

Uwe Ligges



On 13.11.2010 17:18, sundar wrote:



Good evening sir, I want to do the cluster analysis algorithm in R software;
can you guide me, sir?
My mail id is: sundars...@gmail.com
And I want a brief explanation of the for, do-while, if, etc. loops and
conditions.
Thank you sir.

--

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] clusters in evd package

2010-11-13 Thread dpender

Hi,

I am using the clusters function in the evd package and was wondering if
anyone knew how to set the row names as date/time objects. My data has 1
column of date and time and another of wave height (H).

My two options are:

1 - When I import the data using read.table, make sure that row.names is
set = 1, giving row names to the H data.

2 - Import H from the data file and set the row names using row.names(H) <-
x, where x is the date/time data.

The problem is when I try to use the cluster function on either I get the
following error:

> Clusters <- clusters(H, 3.0, r=144, plot=TRUE, cmax=TRUE, row.names=TRUE)

Error in xy.coords(x, y, xlabel, ylabel, log) : 
  (list) object cannot be coerced to type 'double'
In addition: Warning message:
In min(diff(xdata)) : no non-missing arguments to min; returning Inf 

Does anyone know how to set date/time row names in the cluster function?
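One likely cause of the error above: after read.table, H is still a one-column data frame (a list), which xy.coords cannot coerce to double. A base-R sketch (with made-up data standing in for the real file) that extracts a plain numeric vector and attaches the date/times as names:

```r
# Synthetic stand-in for the imported file: one date/time column, one H column.
dat <- data.frame(when = c("2010-11-13 00:00", "2010-11-13 01:00"),
                  H    = c(1.2, 3.4), stringsAsFactors = FALSE)

H <- dat$H                                                  # numeric vector, not a data frame
names(H) <- format(as.POSIXct(dat$when), "%Y-%m-%d %H:%M")  # date/time labels
H
# clusters(H, 3.0, r = 144, plot = TRUE, cmax = TRUE) should then coerce cleanly
# (untested here -- requires the evd package and the full series)
```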

Thanks,

Doug
-- 
View this message in context: 
http://r.789695.n4.nabble.com/clusters-in-evd-package-tp3041023p3041023.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] (no subject)

2010-11-13 Thread sundar



Good evening sir, I want to do the cluster analysis algorithm in R software;
can you guide me, sir?

My mail id is: sundars...@gmail.com
And I want a brief explanation of the for, do-while, if, etc. loops and
conditions.

Thank you sir.

--

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Efficient marginal sums?

2010-11-13 Thread David Winsemius


On Nov 13, 2010, at 10:49 AM, (Ted Harding) wrote:


Hi Folks,
[This is not unrelated to the current "vector of vectors" thread,
but arises quite independently]

Say I have a function f(x,y) which computes a value for scalar
x and y; or, if either x=X or y=Y is a vector, a corresponding
vector of values f(X,y) or f(x,Y) (with the usual built-in
vectorisation of operations).

Now I have X=(x.1,x.2,...,x.m) and Y=(y.1,y.2,...,y.n).
I'm seeking a fast and efficient method to compute (say)

 sum[over elements of Y](f(X,Y))


?sweep

--
David.




returning an m-vector in which, for each x.i in X, I have

 sum(f(x.i,Y))

I know I can do this by constructing matrices say M.X and M.Y

 M.X <- matrix(rep(X,length(Y)),nrow=length(Y),byrow=TRUE)
 M.Y <- matrix(rep(Y,length(X)),nrow=length(Y))

then F <- f(M.X,M.Y), then colSums(F). But that doesn't strike me
as particularly "fast and efficient", because of the preliminary
spadework to make M.X and M.Y.

Is there any much better way? Or some function somewhere that
does it? Something like "marg.sum(X,Y,function=f,margin=2)";
therefore with scope for generalisation to more than 2 variables,
e.g. marg.sum(X,Y,Z,function=f,margins=2)
or   marg.sum(X,Y,Z,function=f,margins=c(2,3))

Or, even more general, like:

 marg.apply(X,Y,Z,fun1=f,fun2=sum,margins=c(2,3))

(Such a question must have been asked before, but I haven't
located it).

With thanks,
Ted.


E-Mail: (Ted Harding) 
Fax-to-email: +44 (0)870 094 0861
Date: 13-Nov-10   Time: 15:49:22
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to turn the colour off for lattice graph?

2010-11-13 Thread David Winsemius


On Nov 13, 2010, at 10:07 AM, Shige Song wrote:


Dear All,

I am trying to plot a lattice figure and include it in a LaTeX
document via the tikzDevice package. I think the journal I am
submitting to does not like colour figures, so I need to get rid of all
the colours in the figure. If I directly generate PDF or EPS, the
option trellis.device(color=FALSE) works, but when I do:

trellis.device(color=FALSE)
tikz("~/project/figure.tex")
plot(...)


I wouldn't expect output of the plot function to be affected by  
modifications to the trellis device. Have you tried using par()?


(There are plot methods that use lattice plotting functions and I  
suppose tikz might be one but I do not see that function in my loaded  
packages which include lattice by default.)




dev.off()

The resulting graph has black and white dots and lines but still has
the default colour in the window frame. Any ideas about how to get rid
of that colour as well?

Many thanks.

Best,
Shige




David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] exploratory analysis of large categorical datasets

2010-11-13 Thread Kjetil Halvorsen
you can also look at correspondence analysis, which is implemented
in multiple CRAN packages, for instance MASS, ade4 and others.
See the multivariate analysis task view on CRAN.
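Before reaching for those packages, a minimal base-R starting point (an editorial sketch, not tied to any of the packages named above) is a matrix of pairwise Cramér's V values, which plays the role cor() plays for continuous data and can be passed to image() or symnum() in the same way:

```r
set.seed(1)
m <- matrix(sample(1:3, 50 * 5, replace = TRUE), nrow = 50)  # 50 cases x 5 categorical vars

# Cramér's V for a pair of categorical vectors, from the chi-squared statistic.
cramers_v <- function(a, b) {
  tab  <- table(a, b)
  chi2 <- suppressWarnings(chisq.test(tab, correct = FALSE))$statistic
  unname(sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1))))
}

p <- ncol(m)
V <- diag(1, p)                          # pairwise association matrix
for (i in seq_len(p - 1)) for (j in (i + 1):p)
  V[i, j] <- V[j, i] <- cramers_v(m[, i], m[, j])
round(V, 2)                              # symmetric, values in [0, 1]
```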

Kjetil

On Thu, Nov 11, 2010 at 10:39 PM, Dennis Murphy  wrote:
> Hi:
>
> A good place to start would be package vcd and its suite of demos and
> vignettes, as well as the vcdExtra package, which adds a few more goodies
> and a very nice introductory vignette by Michael Friendly. You can't fault
> the package for a lack of documentation :)
>
> You might also find the following link useful:  http://www.datavis.ca/R/
> Scroll down to 'vcd and vcdExtra', and further down to 'tableplot', which
> was recently released on CRAN.
>
> HTH,
> Dennis
>
> On Thu, Nov 11, 2010 at 2:09 PM, Lara Poplarski 
> wrote:
>
>> Dear List,
>>
>>
>> I am looking to perform exploratory analyses of two (relatively) large
>> datasets of categorical data. The first one is a binary 80x100 matrix, in
>> the form:
>>
>>
>> matrix(sample(c(0,1),25,replace=TRUE), nrow = 5, ncol=5, dimnames = list(c(
>> "group1", "group2","group3", "group4","group5"), c("V.1", "V.2", "V.3",
>> "V.4", "V.5")))
>>
>>
>> and the second one is a multistate 750x1500 matrix, with up to 15
>> *unordered* states per variable, in the form:
>>
>>
>> matrix(sample(c(1:15),25,replace=TRUE), nrow = 5, ncol=5, dimnames =
>> list(c(
>> "group1", "group2","group3", "group4","group5"), c("V.1", "V.2", "V.3",
>> "V.4", "V.5")))
>>
>>
>> Specifically, I am looking to see which pairs of variables are correlated.
>> For continuos data, I would use cor() and cov() to generate the correlation
>> matrix and the variance-covariance matrix, which I would then visualize
>> with
>> symnum() or image(). However, it is not clear to me whether this approach
>> is
>> suitable for categorical data of this kind.
>>
>>
>> Since I am new to R, I would greatly appreciate any input on how to
>> approach
>> this task and on efficient visualization of the results.
>>
>>
>> Many thanks in advance,
>>
>> Lara
>>
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Do loop

2010-11-13 Thread sundar
Good evening sir, I want to do the cluster analysis algorithm in R software;
can you guide me, sir?

My mail id is: sundars...@gmail.com

And I want a brief explanation of the for, do-while, if, etc. loops and
conditions.

Thank you sir.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Looking for R bloggers in non-English languages

2010-11-13 Thread Tal Galili
Hello everyone,

Today I started a non-English version of R-bloggers, at:
http://www.r-bloggers.com/lang/

R-bloggers.com is a blog aggregator (or a meta-blog) that offers content
about R from 133 bloggers, publishing about 1 to 5 new posts a day. I am
happy to see that over 2700 people have already subscribed to these blogs,
and I am hoping to now offer a similar service to a non-English-speaking
audience through this new sub-site.

I am now looking to *invite (non-English) bloggers who write about R* to
join the service. But since I don't know how to even start searching for R
bloggers in languages other than English, I thought the best thing to do
would be to ask whether some of you (dear members of the R-help mailing list)
might be willing to share with me some links (or even e-mails) of bloggers
that I could invite to join.

Bloggers interested in joining (writing in languages other than English) can
join here:
http://www.r-bloggers.com/lang/add-your-blog/
If you write in English, you can join at:
http://www.r-bloggers.com/add-your-blog/

p.s.: I know this e-mail advertises a site I have built, but since I believe
the service I created is of real value to the community, I allowed myself to
send this to all of you.

With much respect,
Tal

Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Vector

2010-11-13 Thread Uwe Ligges
Looks like we need to fix the spam checker, or maybe it is sufficient to 
train it with some more examples like this one ...

;-)


Thanks for all your work that helps to keep the list clean of spam, Mark!

Best wishes,
Uwe


On 12.11.2010 16:28, Mark Leeds wrote:

that was my bad. I let it in because I saw vector in the subject line, got
lazy and didn't check the contents.


mark




On Fri, Nov 12, 2010 at 5:45 AM, Michael Bedward
wrote:


Fancy that... vector spam :)

Michael

On 12 November 2010 20:30, Jeff Musgrave  wrote:

Now you and your Vector Team can make more money.
Offer your current client base a chance to buy and sell
a product in high demand.  An item that increases in value every day.
Are you ready for this no-inventory, unique product?

Call us at 800.945.3360 to find out more. Don’t let Vector control your

income.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide

http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.







__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] wind rose (oz.windrose) scale

2010-11-13 Thread Uwe Ligges
Looking at the code shows that the author forgot to scale the plot 
appropriately, since xlim/ylim are hardcoded:


plot(0, xlim = c(-20, 20), ylim = c(-20, 20), type = "n",
axes = FALSE, xlab = "", ylab = "", ...)

Hence you may want to fix it and provide a patch to the package 
maintainer, or contact the maintainer (CCed) of the package and ask him 
to fix it.


Best,
Uwe Ligges


On 12.11.2010 11:55, Alejo C.S. wrote:

Dear list,
I am trying to make a wind rose plot with the command oz.windrose from the
plotrix package. My data, a matrix of percentages with the rows representing
speed ranges and the columns indicating wind directions, was generated
using the bin.wind.records command from the same package:

 [,1]  [,2]  [,3]  [,4]  [,5] [,6] [,7]
[1,]  0.4405286 0.000 0.1468429 0.4405286 0.4405286 0.00 0.00
[2,] 30.5433186 3.2305433 3.2305433 4.2584435 3.2305433 1.321586 5.873715
[3,] 23.9353891 2.0558003 0.4405286 1.0279001 0.4405286 0.00 1.321586
[4,]  2.0558003 0.2936858 0.000 0.000 0.000 0.00 0.00
[5,]  0.000 0.000 0.000 0.000 0.000 0.00 0.00
[,8]
[1,]  0.1468429
[2,] 12.9221733
[3,]  2.2026432
[4,]  0.000
[5,]  0.000

The problem is that when plotting the wind rose, it is always scaled to 30%. In
my case, the north limb is out of scale in the graph, since it represents
59% of winds.
The command doesn't have a scale argument, so I can't change the default
scale settings. Any tips?

Thanks in advance

A.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Define a glm object with user-defined coefficients (logistic regression, family="binomial")

2010-11-13 Thread David Winsemius


On Nov 13, 2010, at 7:43 AM, Jürgen Biedermann wrote:


Hi there,

I just don't find the solution on the following problem. :(

Suppose I have a dataframe with two predictor variables (x1, x2) and  
one dependent binary variable (y). How is it possible to define a glm  
object (family="binomial") with a user-defined logistic function  
like p(y) = 1/(1 + exp(-(a + c1*x1 + c2*x2))), where c1, c2 are the  
coefficients which I define? So I would like to do no fitting of the  
coefficients. Still, I would like to define a GLM object because I  
could then easily use other functions which need a glm object as  
argument (e.g. I could use the anova,


The anova results would not have much interpretability in this  
setting. You would be testing for the intercept being zero under very  
artificial conditions. You have eliminated much statistical meaning by  
forcing the form of the results.



summary functions).


# Assume dataframe name is dfrm with variables event, no_event, x1,  
x2, and further assume c1 and c2 are also defined:


dfrm$off <- with(dfrm, c1*x1 + c2*x2)  # fixed part of the linear predictor (logit scale)
forcedfit <- glm(cbind(event, no_event) ~ 1 + offset(off),
                 family = binomial, data = dfrm)

(Obviously untested.)
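A small self-contained check of the offset idea on synthetic data (the coefficients a, c1, c2 and the sample size are made up for the illustration): an offset-only binomial glm holds the user-chosen slopes fixed on the linear-predictor scale and estimates just the intercept.

```r
set.seed(42)
x1 <- rnorm(200); x2 <- rnorm(200)
a <- -0.5; c1 <- 0.8; c2 <- -1.2                 # "user-defined" coefficients
y  <- rbinom(200, 1, plogis(a + c1 * x1 + c2 * x2))

off <- c1 * x1 + c2 * x2                         # fixed part of the linear predictor
fit <- glm(y ~ 1 + offset(off), family = binomial)
coef(fit)                                        # intercept estimate (should be close to a)
head(predict(fit, type = "response"))            # fitted p(y) with c1, c2 held fixed
```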



Thank you very much! Greetings
Jürgen

--
---
Jürgen Biedermann



David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Question about the "effects" package

2010-11-13 Thread John Fox
Dear Shige,

As documented in ?plot.eff, the default is to plot on the scale of the
linear predictor (the logit scale, for a logit model), which preserves the
linearity of the model (which, I would think, is generally desirable), but
to label the axis on the scale of the response (the probability scale). As
is also documented, setting rescale.axis=FALSE will plot on the scale of the
response.
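The unequal spacing can be seen directly from the logit transform; a quick base-R check using the tick values mentioned in the question:

```r
# Axis labels at equal probability steps sit at unequal logit positions,
# which is why the 0.02-0.04 gap looks wider than the 0.06-0.08 gap.
p <- c(0.02, 0.04, 0.06, 0.08)
qlogis(p)        # positions on the linear-predictor (logit) scale
diff(qlogis(p))  # gaps shrink as p grows: the first gap is the largest
```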

I hope this helps,
 John


John Fox
Senator William McMaster 
  Professor of Social Statistics
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On
> Behalf Of Shige Song
> Sent: November-13-10 10:14 AM
> To: r-help Help
> Subject: [R] Question about the "effects" package
> 
> Dear All,
> 
> I am using the "effects" package to produce predicted probability from
> a logistic regression. The graph looks really good. I soon realized
> that the y-axis is not spaced equally. For example, in my case, the
> distance between 0.02 and 0.04 is much greater than that between 0.06
> and 0.08. I can guess a reason for this, but unfortunately, this
> distorts my story. Is there a way to change this and make the y-axis
> equally spaced?
> 
> Many thanks.
> 
> Best,
> Shige
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Efficient marginal sums?

2010-11-13 Thread Ted Harding
Hi Folks,
[This is not unrelated to the current "vector of vectors" thread,
 but arises quite independently]

Say I have a function f(x,y) which computes a value for scalar
x and y; or, if either x=X or y=Y is a vector, a corresponding
vector of values f(X,y) or f(x,Y) (with the usual built-in
vectorisation of operations).

Now I have X=(x.1,x.2,...,x.m) and Y=(y.1,y.2,...,y.n).
I'm seeking a fast and efficient method to compute (say)

  sum[over elements of Y](f(X,Y))

returning an m-vector in which, for each x.i in X, I have

  sum(f(x.i,Y))

I know I can do this by constructing matrices say M.X and M.Y

  M.X <- matrix(rep(X,length(Y)),nrow=length(Y),byrow=TRUE)
  M.Y <- matrix(rep(Y,length(X)),nrow=length(Y))

then F <- f(M.X,M.Y), then colSums(F). But that doesn't strike me
as particularly "fast and efficient", because of the preliminary
spadework to make M.X and M.Y.

Is there any much better way? Or some function somewhere that
does it? Something like "marg.sum(X,Y,function=f,margin=2)";
therefore with scope for generalisation to more than 2 variables,
e.g. marg.sum(X,Y,Z,function=f,margins=2)
or   marg.sum(X,Y,Z,function=f,margins=c(2,3))

Or, even more general, like:

  marg.apply(X,Y,Z,fun1=f,fun2=sum,margins=c(2,3))

(Such a question must have been asked before, but I haven't
located it).
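One route (a sketch; `outer()` does the grid expansion internally, so no explicit M.X/M.Y construction is needed — f here is an arbitrary vectorised example):

```r
f <- function(x, y) exp(-(x - y)^2)      # any vectorised f(x, y)
X <- c(0, 1, 2); Y <- c(0.5, 1.5)

marg <- rowSums(outer(X, Y, f))          # m-vector: sum over Y for each x.i
# equivalent, without building the full grid at once:
# sapply(X, function(x) sum(f(x, Y)))
```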

With thanks,
Ted.


E-Mail: (Ted Harding) 
Fax-to-email: +44 (0)870 094 0861
Date: 13-Nov-10   Time: 15:49:22
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Consistency of Logistic Regression

2010-11-13 Thread Uwe Ligges



On 12.11.2010 20:11, Marc Schwartz wrote:

You are not creating your data set properly.

Your 'mat' is:


mat

   column1 column2
1        1       0
2        1       0
3        0       1
4        0       0
5        1       1
6        1       0
7        1       0
8        0       1
9        0       0
10       1       1


What you really want is:

DF<- data.frame(y = c(1,0,1,0,0,1,0,0,1,1), x = c(5,4,1,6,3,6,5,3,7,9))



Actually it is in general safer to have a factor y rather than numeric y 
for classification tasks.


Best,
Uwe
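As a minimal sketch of Uwe's suggestion applied to Marc's DF (the coefficient estimates are unchanged; only the response type differs):

```r
DF <- data.frame(y = c(1,0,1,0,0,1,0,0,1,1),
                 x = c(5,4,1,6,3,6,5,3,7,9))
DF$y <- factor(DF$y)                       # factor response for classification
MOD <- glm(y ~ x, data = DF, family = binomial)
coef(MOD)                                  # same fit as with numeric 0/1 y
```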



DF

y x
1  1 5
2  0 4
3  1 1
4  0 6
5  0 3
6  1 6
7  0 5
8  0 3
9  1 7
10 1 9



MOD<- glm(y ~ x, data = DF, family = binomial)



summary(MOD)


Call:
glm(formula = y ~ x, family = binomial, data = DF)

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-1.3353 -1.0229 -0.1239  0.9956  1.7477

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -1.6118     1.7833  -0.904    0.366
x             0.3293     0.3383   0.973    0.330

(Dispersion parameter for binomial family taken to be 1)

 Null deviance: 13.863  on 9  degrees of freedom
Residual deviance: 12.767  on 8  degrees of freedom
AIC: 16.767

Number of Fisher Scoring iterations: 4


HTH,

Marc Schwartz


On Nov 12, 2010, at 12:56 PM, Benjamin Godlove wrote:


I think it is likely I am missing something.  Here is a very simple example:

R code:

mat<- matrix(nrow = 10, ncol = 2, c(1,0,1,0,0,1,0,0,1,1),
c(5,4,1,6,3,6,5,3,7,9), dimnames = list(c(1,2,3,4,5,6,7,8,9,10),
c("column1","column2")))

g<- glm(mat[1:10] ~ mat[11:20], family = binomial (link = logit))

g$converged


SAS code:

data mat;
input col1 col2;
datalines;
1 5
0 4
1 1
0 6
0 3
1 6
0 5
0 3
1 7
1 9
;

proc logistic data=mat descending;
model col1 = col2 / link=logit;
run;

SAS output (in case you don't have access to SAS):
Convergence criterion satisfied

  Estimate   SE
Intercept-1.6118  1.7833
col20.3293  0.3383


Of course, with an example this small, it is not so surprising that the two
methods differ; and they hardly differ by a single SE.  But as the datasets
get larger, the difference is more pronounced.  Let me know if you would
like me to send you a large dataset.  I get the feeling I am doing something
wrong in R, so please let me know what you think.

Thank you!

Ben Godlove

On Thu, Nov 11, 2010 at 1:59 PM, Albyn Jones  wrote:


do you have factors (categorical variables) in the model?  it could be
just a parameterization difference.

albyn

On Thu, Nov 11, 2010 at 12:41:03PM -0500, Benjamin Godlove wrote:

Dear R developers,

I have noticed a discrepancy between the coefficients returned by R's

glm()

for logistic regression and SAS's PROC LOGISTIC.  I am using dist =

binomial

and link = logit for both R and SAS.  I believe R uses IRLS whereas SAS

uses

Fisher's scoring, but the difference is something like 100 SE on the
intercept.  What accounts for such a huge difference?

Thank you for your time.

Ben Godlove

  [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide

http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.



--
Albyn Jones
Reed College
jo...@reed.edu




[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] what does SEXP in R internals stand for

2010-11-13 Thread John Fang
Thank you!

Best wishes,
John


2010/11/13 Alexx Hardt 

> Am 13.11.2010 14:50, schrieb John Fang:
>
> Hi all,
>>
>> Is there any one that would give an explanation on the abbreviation SEXP
>> used in R internals to represent a pointer to a data structure?
>>
>> Thanks!
>>
>>
> S-Expression, I believe:
> http://en.wikipedia.org/wiki/S-expression
>
> Best wishes,
>  Alex
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Factor analysis

2010-11-13 Thread Liviu Andronic
On Sat, Nov 13, 2010 at 2:07 PM, Brima  wrote:
>
> Thanks very much. However, I got an error message when I tried.
> What I did is that I created a correlation matrix named dat which is the
> only data I have and tried using the below
>
>> fa<- factanal(covmat = dat, factors=2, rotation="none", scores="none")
> Error in factanal(covmat = dat, factors = 2, rotation = "none", scores =
> "none") :
>  'covmat' is not a valid covariance list
>
Please post the result of
str(dat)

Also, does the following work?
as.matrix(dat)

Liviu
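For reference, a call shape that factanal() does accept (a sketch with made-up data): a plain symmetric matrix plus n.obs, or a cov.wt()-style list. Passing a data.frame as 'covmat' produces exactly the "not a valid covariance list" error.

```r
set.seed(1)
X  <- matrix(rnorm(600), ncol = 6)
R  <- cor(X)                     # a matrix, not a data.frame
fa <- factanal(covmat = R, factors = 2, n.obs = 100, rotation = "none")
```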


> Please help.
>
> Best regards
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Factor-analysis-tp3040618p3040813.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Do you know how to read?
http://www.alienetworks.com/srtest.cfm
http://goodies.xfce.org/projects/applications/xfce4-dict#speed-reader
Do you know how to write?
http://garbl.home.comcast.net/~garbl/stylemanual/e.htm#e-mail

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Question about the "effects" package

2010-11-13 Thread Shige Song
Dear All,

I am using the "effects" package to produce predicted probability from
a logistic regression. The graph looks really good. I soon realized
that the y-axis is not spaced equally. For example, in my case, the
distance between 0.02 and 0.04 is much greater than that between 0.06
and 0.08. I can guess a reason for this, but unfortunately, this
distorts my story. Is there a way to change this and make the y-axis
equally spaced?

Many thanks.

Best,
Shige
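One possibility (a sketch on built-in data; the rescale.axis argument name is from older versions of the effects package's plot method, so treat it as an assumption and check ?plot.eff):

```r
library(effects)
m  <- glm(am ~ hp, data = mtcars, family = binomial)
ef <- effect("hp", m)
plot(ef, rescale.axis = FALSE)   # probability axis with equal spacing
```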

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Updating R packages

2010-11-13 Thread Uwe Ligges



On 13.11.2010 06:15, Jim Silverton wrote:

I have been trying to update some R packages but I get the following error.
Can you advise how to get around this? I am using R for 64-bit
Windows.

--- Please select a CRAN mirror for use in this session ---
Warning in install.packages(update[instlib == l, "Package"], l, contriburl =
contriburl,  :
   'lib = "C:/PROGRA~1/R/R-211~1.1-X/library"' is not writable


If you want to install packages into the library mentioned above, you 
need the correct permissions to do so. For example, start R with Admin 
privileges or change the permissions of the library.
Or, maybe even better: use a library other than the base one to install 
contributed packages into.
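A sketch of the separate-library route (the path and package name are illustrative placeholders, not specific recommendations):

```r
lib <- "C:/Users/yourname/R/library"                # illustrative path
dir.create(lib, recursive = TRUE, showWarnings = FALSE)
.libPaths(c(lib, .libPaths()))                      # put it first on the search path
install.packages("somePackage", lib = lib)          # installs without Admin rights
```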


Note that you are under R-2.11.1 and if it is the 64-bit version of it: 
CRAN does not support 64-bit R-2.11.x any more: please upgrade to 
R-2.12.0 which comes with bi-arch binaries for both 32- and 64-bit 
operation.


Best,
Uwe Ligges



Error in install.packages(update[instlib == l, "Package"], l, contriburl =
contriburl,  :
   unable to install packages




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to turn the colour off for lattice graph?

2010-11-13 Thread Shige Song
Dear All,

I am trying to plot a lattice figure and include it in a LaTeX
document via the TikZDevice package. I think the journal I am
submitting to does not like colour figures, so I need to get rid of all
the colours in the figure. If I directly generate PDF or EPS, the
option "trellis.device(color=FALSE)" works, but when I do:

trellis.device(color=FALSE)
tikz("~/project/figure.tex")
plot(...)
dev.off()

The resulting graph has black-and-white dots and lines but still has
the default colour in the window frame. Any ideas about how to get rid
that colour as well?

Many thanks.

Best,
Shige
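One possible workaround (a sketch, assuming tikzDevice is installed; untested here): open the tikz device first, then switch the whole trellis theme — including strip and frame colours — to black and white via lattice's canonical.theme().

```r
library(lattice)
library(tikzDevice)
tikz("figure.tex")
trellis.par.set(canonical.theme(name = "pdf", color = FALSE))
print(xyplot(1:10 ~ 1:10))   # all theme elements, frame included, in B/W
dev.off()
```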

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to store a vector of vectors

2010-11-13 Thread Alexx Hardt

Am 13.11.2010 15:48, schrieb Sarah Goslee:

You are assuming that R is using row-major order
for recycling elements, when in fact it is using column-
major order.

It doesn't show up in the first case, because all the
elements of x are identical
   


Oops. Mental note made.


norm<- function(x, y) {
   
+ x<- matrix(x, nrow=nrow(y), ncol=ncol(y), byrow=TRUE)

+ sqrt( rowSums( (x-y)^2 ) )
+ }


Ha, thank you!
I am now using this line:
sqrt( rowSums( t( (x-t(y)) )^2 ))
It seems to work, do you see anything wrong with it?

I will have to read up and play around with those "object" thingys you 
mentioned -- I am still new to R (and OOP, if that matters).


Thanks again,
 Alex

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to store a vector of vectors

2010-11-13 Thread Sarah Goslee
You are assuming that R is using row-major order
for recycling elements, when in fact it is using column-
major order.

It doesn't show up in the first case, because all the
elements of x are identical

> x <- c(1,1,1)
> matrix(x, nrow=2, ncol=3)
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    1    1    1
> x <- c(1,1,2)
> matrix(x, nrow=2, ncol=3)
     [,1] [,2] [,3]
[1,]    1    2    1
[2,]    1    1    2

Here's one possible fix. But note that the guts of dist() are
implemented in C, so it will be faster than your solution for
large matrices, regardless of the extra calculations.
(And just to be picky, dist() returns an object of class dist.)

> norm <- function(x, y) {
+ x <- matrix(x, nrow=nrow(y), ncol=ncol(y), byrow=TRUE)
+ sqrt( rowSums( (x-y)^2 ) )
+ }
>
>
> y <- matrix(
+  c(  1,1,1,
+   2,3,4), nrow=2, byrow=TRUE)
> x <- c(1,1,1)
> norm(x, y)
[1] 0.000000 3.741657
>
> y <- matrix( c(1,1,2,1,2,2), nrow=2, byrow=TRUE )
> x <- c(1,1,2)
> norm(x,y)
[1] 0 1
>
>
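An equivalent alternative to the byrow-matrix trick, using sweep() to subtract x from each row of y directly (a sketch, reusing the second example above):

```r
norm2 <- function(x, y) sqrt(rowSums(sweep(y, 2, x)^2))

y <- matrix(c(1, 1, 2,
              1, 2, 2), nrow = 2, byrow = TRUE)
norm2(c(1, 1, 2), y)   # 0 1, matching the corrected recycling
```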

Sarah


On Sat, Nov 13, 2010 at 9:17 AM, Alexx Hardt  wrote:
> Am 13.11.2010 14:39, schrieb Sarah Goslee:
>>
>> I at least would need to see an actual example of your code to
>> be able to answer your question.
>>
>
> My function:
>
> norm <- function(x,y){
>  sqrt( rowSums( (x-y)^2 ) )
> }
>
> y <- matrix(
>      c(  1,1,1,
>           2,3,4), nrow=2, byrow=TRUE)
> x <- c(1,1,1)
>
> Here, norm(x,y) does work. But here:
>
> y <- matrix( c(1,1,2,1,2,2), nrow=2, byrow=TRUE )
> x <- c(1,1,2)
> norm(x,y)
>
> it produces weird numbers:
>
> [1] 1.414214 1.000000
>
> which is sqrt(2) and 1. I have no idea what gets mixed up here :-(
>
>> But why not just use dist() and take the appropriate column of the
>> resultant matrix?
>>
>> mydist<- function(x, amat) {
>> # x is the single variable as a vector
>> # amat is the remaining variables as rows
>> alldist<- dist(rbind(x, amat))
>> as.matrix(alldist)[-1,1]
>> }
>>
>
> dist returned a vector for me, and I didn't know how to extract the proper
> elements.
> Also, I'm kind of OCD about wasted computations, which would be the
> distances between elements of y :-)
>
> Thanks,
>  Alex
>

-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] issue with ... in write.fwf in gdata

2010-11-13 Thread Jan Wijffels

Hi Greg,

That's indeed the solution. Thanks for updating the package. I'm looking 
forward to see it on CRAN.

kind regards,
Jan

From: g...@warnes.net
Date: Fri, 12 Nov 2010 14:39:46 -0500
Subject: Re: [R] issue with ... in write.fwf in gdata
To: janwijff...@hotmail.com
CC: r-help@r-project.org

Hi Jan,

The issue isn't that the ... arguments aren't passed on.  Rather, the problem 
is that in the current implementation the ... arguments are passed to format(), 
which doesn't understand the "eol" argument.



The solution is to modify write.fwf() to explicitly accept all of the 
appropriate the arguments for write.table() and to only pass the ... arguments 
to format() and format.info().



I've just modified gdata to make this change, and have submitted the new 
version to CRAN as gdata version 2.8.1.

-Greg
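A sketch of the pattern Greg describes (the function and its internals are illustrative, not gdata's actual code): write.table()'s arguments are accepted explicitly, so only format()-relevant arguments travel through the dots.

```r
# Illustrative sketch only: explicit 'eol'/'sep' stay out of format()'s '...'
write_fwf_sketch <- function(x, file, eol = "\n", sep = " ", ...) {
  tmp <- lapply(x, format, ...)              # '...' reaches format() only
  out <- do.call(paste, c(tmp, sep = sep))   # one fixed-width line per row
  writeLines(out, con = file, sep = eol)     # 'eol' used here, never passed on
}
```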

On Fri, Nov 12, 2010 at 7:08 AM, Jan Wijffels  wrote:




Dear R-list



This is just a message to inform you that there is an issue with write.fwf in the 
gdata library (from version 2.5.0 on). It does not seem to accept further 
arguments to write.table, such as "eol", as the help file indicates it should: it stops when 
executing tmp <- lapply(x, format.info, ...).



Great package though - I use it a lot except for this function :)

See example below.



> require(gdata)

> saveto <- tempfile(pattern = "test.txt", tmpdir = tempdir())

> write.fwf(x = data.frame(a=1:length(LETTERS), b=LETTERS), file=saveto, 
> eol="\r\n")

Error in FUN(X[[1L]], ...) : unused argument(s) (eol = "\r\n")

> sessionInfo()

R version 2.12.0 (2010-10-15)

Platform: x86_64-pc-linux-gnu (64-bit)



locale:

 [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C

 [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8

 [5] LC_MONETARY=C  LC_MESSAGES=en_US.UTF-8

 [7] LC_PAPER=en_US.UTF-8   LC_NAME=C

 [9] LC_ADDRESS=C   LC_TELEPHONE=C

[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C



attached base packages:

[1] stats graphics  grDevices utils datasets  methods   base



other attached packages:

[1] gdata_2.8.0



loaded via a namespace (and not attached):

[1] gtools_2.6.2





kind regards,

Jan





[[alternative HTML version deleted]]



__

R-help@r-project.org mailing list

https://stat.ethz.ch/mailman/listinfo/r-help

PLEASE do read the posting guide http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] using if statment and loops to create data layout of recurrent events

2010-11-13 Thread avsha38

Hi ,
I have a data set with recurrence time (up to four) of myocardial infarction
(MI).
Part of the file is showing below:
Num1  Trt  Sex  Time  T1  T2  T3  T4
1011  1    1    9
1211  0    1    59
3020  1    2    14    3
1245  0    1    18    12  16
3069  1    2    26    6   12  13
2051  0    1    53    3   15  46  51

The data consist of the following eight variables: 
Num1 , patient number
Trt, treatment group (1=placebo and 2=drug) 
Sex, Respondent Sex
Time, follow-up time 
T1, T2, T3, and T4, times of the  four potential recurrences of MI. A
patient with only two recurrences has missing values in T3 and T4. 
In the data set, four observations should be created for each patient, one
for each of the four potential MI recurrences. In addition to values of Trt,
and Sex for the patient, each observation contains the following variables: 
ID, patient’s identification (which is the sequence number of the subject) 
Visit, visit number (with value k for the kth potential MI recurrence) 
TStart, time of the (k–1)th recurrence for Visit=k, or the entry time 0 if
VISIT=1, or the follow-up time if the (k–1)th recurrence does not occur 
TStop, time of the  kth recurrence if Visit=k or follow-up time if the  kth
recurrence does not occur 
Status, event status of TStop (1=recurrence and 0=censored) 
For instance, patient # 3 with only one recurrence time at month 3 who was
followed until month 14 will have values for Visit, TStart, TStop, and
Status of (1,0,3,1), (2,3,14,0), (3,14,14,0), and (4,14,14,0), respectively.
If the follow-up time is beyond the time of the fourth MI recurrence, you
must ignore it. Which means that even patient with 4 recurrence times has
only four rows. 
How can I do it with R ?
Any suggestions will be more than welcome.
Avi
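A base-R sketch of the requested layout (the three data rows are made up to mirror the post, with patient #3 matching the worked example of Visit/TStart/TStop/Status = (1,0,3,1), (2,3,14,0), (3,14,14,0), (4,14,14,0)):

```r
dat <- data.frame(Num1 = c(1011, 1211, 3020),
                  Trt  = c(1, 1, 2), Sex = c(1, 0, 1),
                  Time = c(9, 59, 14),
                  T1 = c(NA, NA, 3), T2 = NA, T3 = NA, T4 = NA)

long <- do.call(rbind, lapply(seq_len(nrow(dat)), function(i) {
  r  <- dat[i, ]
  tt <- unlist(r[c("T1", "T2", "T3", "T4")])
  tstart <- tstop <- numeric(4); status <- integer(4)
  prev <- 0
  for (k in 1:4) {
    tstart[k] <- prev
    if (!is.na(tt[k]) && tt[k] <= r$Time) {  # kth recurrence observed
      tstop[k] <- tt[k]; status[k] <- 1; prev <- tt[k]
    } else {                                 # censored at follow-up time
      tstop[k] <- r$Time; status[k] <- 0; prev <- r$Time
    }
  }
  data.frame(ID = i, Trt = r$Trt, Sex = r$Sex,
             Visit = 1:4, TStart = tstart, TStop = tstop, Status = status)
}))
long
```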

-- 
View this message in context: 
http://r.789695.n4.nabble.com/using-if-statment-and-loops-to-create-data-layout-of-recurrent-events-tp3040784p3040784.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Define a glm object with user-defined coefficients (logistic regression, family="binomial")

2010-11-13 Thread Jürgen Biedermann

Hi there,

I just don't find the solution on the following problem. :(

Suppose I have a dataframe with two predictor variables (x1,x2) and one 
depend binary variable (y). How is it possible to define a glm object 
(family="binomial") with a user defined logistic function like p(y) = 
exp(a + c1*x1 + c2*x2) where c1,c2 are the coefficents which I define. 
So I would like to do no fitting of the coefficients. Still, I would 
like to define a GLM object because I could then easily use other 
functions which need a glm object as argument (e.g. I could use the 
anova, summary functions).


Thank you very much! Greetings
Jürgen
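Not a full answer, but one common trick (a sketch with made-up data; whether the resulting object satisfies every downstream function such as anova() is untested): fix the linear predictor inside offset() so glm() estimates no coefficients at all, while still returning a genuine glm object.

```r
set.seed(1)
d <- data.frame(x1 = rnorm(20), x2 = rnorm(20),
                y  = rbinom(20, 1, 0.5))
a <- 0.5; c1 <- 1; c2 <- -2                  # user-defined coefficients

# zero-parameter model: the whole predictor lives in the offset
fit <- glm(y ~ 0 + offset(a + c1 * x1 + c2 * x2),
           family = binomial, data = d)
head(fitted(fit))                            # plogis(a + c1*x1 + c2*x2)
```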

--
---
Jürgen Biedermann
Blücherstraße 56
10961 Berlin-Kreuzberg
Mobil: +49 176 247 54 354
Home: +49 30 250 11 713
e-mail: juergen.biederm...@gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Factor analysis

2010-11-13 Thread Brima

Thanks very much. However, I got an error message when I tried.
What I did is that I created a correlation matrix named dat which is the
only data I have and tried using the below

> fa<- factanal(covmat = dat, factors=2, rotation="none", scores="none")
Error in factanal(covmat = dat, factors = 2, rotation = "none", scores =
"none") : 
  'covmat' is not a valid covariance list 

Please help.

Best regards
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Factor-analysis-tp3040618p3040813.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] what does SEXP in R internals stand for

2010-11-13 Thread Alexx Hardt

Am 13.11.2010 14:50, schrieb John Fang:

Hi all,

Is there any one that would give an explanation on the abbreviation SEXP
used in R internals to represent a pointer to a data structure?

Thanks!
   

S-Expression, I believe:
http://en.wikipedia.org/wiki/S-expression

Best wishes,
 Alex

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to store a vector of vectors

2010-11-13 Thread Alexx Hardt

Am 13.11.2010 14:39, schrieb Sarah Goslee:

I at least would need to see an actual example of your code to
be able to answer your question.
   

My function:

norm <- function(x,y){
 sqrt( rowSums( (x-y)^2 ) )
}

y <- matrix(
  c(  1,1,1,
   2,3,4), nrow=2, byrow=TRUE)
x <- c(1,1,1)

Here, norm(x,y) does work. But here:

y <- matrix( c(1,1,2,1,2,2), nrow=2, byrow=TRUE )
x <- c(1,1,2)
norm(x,y)

it produces weird numbers:

[1] 1.414214 1.000000

which is sqrt(2) and 1. I have no idea what gets mixed up here :-(


But why not just use dist() and take the appropriate column of the
resultant matrix?

mydist<- function(x, amat) {
# x is the single variable as a vector
# amat is the remaining variables as rows
alldist<- dist(rbind(x, amat))
as.matrix(alldist)[-1,1]
}
   
dist returned a vector for me, and I didn't know how to extract the 
proper elements.
Also, I'm kind of OCD about wasted computations, which would be the 
distances between elements of y :-)


Thanks,
 Alex

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RMySQL on Windows 2008 64 Bit -Help!

2010-11-13 Thread Spencer Graves

Hello:


  I suggest try R-SIG-DB email list.  They focus specifically on 
databases, and you might get a better response there.



  Sorry I can't help more.
  Spencer


On 11/13/2010 1:12 AM, Santosh Srinivas wrote:

I could do that but will have to change all my code.

It would be great if I could get RMySQL on the 64 bit machine.

-Original Message-
From: Ajay Ohri [mailto:ohri2...@gmail.com]
Sent: 13 November 2010 14:13
To: Santosh Srinivas
Cc: r-help@r-project.org
Subject: Re: [R] RMySQL on Windows 2008 64 Bit -Help!

did you try the RODBC package as well.

Regards

Ajay

Websites-
http://decisionstats.com
http://dudeofdata.com


Linkedin- www.linkedin.com/in/ajayohri





On Sat, Nov 13, 2010 at 9:22 AM, Santosh Srinivas
  wrote:

Dear Group,

I'm having lots of problems getting RMySQL on a 64 bit machine. I followed
all instructions available but couldn't get it working yet! Please help.
See the output below.

I did a install of RMySQL binary from the revolution cran source. It seems
to have unpacked fine but gives this error when I call RMySQL
Error: package 'RMySQL' is not installed for 'arch=x64'


I set this environment variable on the windows path

Sys.getenv('MYSQL_HOME')

   MYSQL_HOME
"C:/Program Files/MySQL/MySQL Server 5.0"


  install.packages('RMySQL',type='source')

trying URL
'http://cran.revolutionanalytics.com/src/contrib/RMySQL_0.7-5.tar.gz'
Content type 'application/x-gzip' length 160769 bytes (157 Kb)
opened URL
downloaded 157 Kb

* installing *source* package 'RMySQL' ...
checking for $MYSQL_HOME... C:/Program Files/MySQL/MySQL Server 5.0
test: Files/MySQL/MySQL: unknown operand
ERROR: configuration failed for package 'RMySQL'
* removing 'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'
* restoring previous
'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'

The downloaded packages are in



'C:\Users\Administrator\AppData\Local\Temp\2\RtmpvGgrzb\downloaded_packages'

Warning message:
In install.packages("RMySQL", type = "source") :
  installation of package 'RMySQL' had non-zero exit status

sessionInfo()

R version 2.11.1 (2010-05-31)
x86_64-pc-mingw32

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
States.1252LC_MONETARY=English_United States.1252 LC_NUMERIC=C

[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] Revobase_4.0.0   RevoScaleR_1.0-0 lattice_0.18-8

loaded via a namespace (and not attached):
[1] grid_2.11.1  tools_2.11.1

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide

http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





--
Spencer Graves, PE, PhD
President and Chief Operating Officer
Structure Inspection and Monitoring, Inc.
751 Emerson Ct.
San José, CA 95126
ph:  408-655-4567

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] what does SEXP in R internals stand for

2010-11-13 Thread John Fang
Hi all,

Is there any one that would give an explanation on the abbreviation SEXP
used in R internals to represent a pointer to a data structure?

Thanks!

Best wishes,
John

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R on an iPad

2010-11-13 Thread Tal Galili
Hi Erin,

I wrote about this half a year ago, I imagine most of the information there
still holds true:
http://www.r-statistics.com/2010/06/could-we-run-a-statistical-analysis-on-iphoneipad-using-r/

Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--




On Sat, Nov 13, 2010 at 4:54 AM, Erin Hodgess wrote:

> Dear R People:
>
> Is it possible to run R on an iPad, please?
>
> For some reason, I'm thinking that you can't have Fortran, C, etc., so
> you can't do it.
>
> But I thought I would check anyway.
>
> Thanks,
> Erin
>
>
> --
> Erin Hodgess
> Associate Professor
> Department of Computer and Mathematical Sciences
> University of Houston - Downtown
> mailto: erinm.hodg...@gmail.com
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to set an argument such that a function treats it as missing?

2010-11-13 Thread Marius Hofert
Ahh, thank you very much, precisely what I was looking for :-)))

Cheers,

Marius

On 2010-11-13, at 12:41 , Duncan Murdoch wrote:

> Marius Hofert wrote:
>> Dear expeRts,
>> I would like to call a function f from a function g with or without an 
>> argument. I use missing() to check if the argument is given. If it is not 
>> given, can I set it to anything such that the following function call (to f) 
>> behaves as if the argument
>> isn't given? It's probably best described by a minimal example (see below).
>> The reason why I want to do this is, that I do not have to distinguish 
>> between the
>> cases when the argument is given or not. By setting it to something (what?) 
>> in the
>> latter case, I can use the same code in the subsequent part of the function.
>> Cheers,
>> Marius
>> f <- function(x) if(missing(x)) print("f: missing x") else print(x)
>> g <- function(x){
>>  if(missing(x)){
>>  print("g: missing x")
>>  x <- NULL # I try to set it to something here such that...
> 
> Just leave out the line above, and you'll get both messages printed:
> 
> > g()
> [1] "g: missing x"
> [1] "f: missing x"
> 
> Duncan Murdoch
> 
>>  }
>>  f(x) # ... this call to f behaves like f()
>> }
>> g() # should print "f: missing x" (is this possible?)
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to store a vector of vectors

2010-11-13 Thread Duncan Murdoch

Alexx Hardt wrote:

Hi,
I'm trying to write a function to determine the euclidean distance 
between x (one point) and y (a set of n points). How should I pass y to 
the function? Until now, I used a matrix like that:


     [,1] [,2] [,3]
[1,]    0    2    1
[2,]    1    1    1

Which would pass the points (0,2,1) and (1,1,1) to that function.

However, when I pass x as a normal (column) vector, the two variables 
don't match in the function. I either have to transpose x or y, or save 
a vector of vectors an other way.


My question: What is the standard way to save more than one vector in R? 
(my matrix y)

Is it just my y transposed or maybe a list or something I don't yet know?


If all vectors are of the same type and length, a matrix is probably 
best.  There are some obscure situations where it is more efficient to 
store them as columns rather than rows, but it rarely makes a detectable 
difference, so you should choose the orientation to match the way you 
plan to use the data.


I don't know how you are constructing x as a column vector.  Normally 
vectors in R are neither columns nor rows: they just have a length.


So if your matrix y is as shown, y[1,] will give a plain vector that 
should be the same shape as x <- c(1,2,3).
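Checking Duncan's point with the matrix from the question (a small sketch): indexing one row drops to a plain vector, so it matches an ordinary vector x with no transposing needed.

```r
y <- matrix(c(0, 2, 1,
              1, 1, 1), nrow = 2, byrow = TRUE)
x <- c(1, 2, 3)
sqrt(sum((x - y[1, ])^2))   # distance from x to the first stored point
```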


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to store a vector of vectors

2010-11-13 Thread Sarah Goslee
I at least would need to see an actual example of your code to
be able to answer your question.

But why not just use dist() and take the appropriate column of the
resultant matrix?

mydist <- function(x, amat) {
# x is the single variable as a vector
# amat is the remaining variables as rows
alldist <- dist(rbind(x, amat))
as.matrix(alldist)[-1,1]
}

Sarah

On Sat, Nov 13, 2010 at 8:22 AM, Alexx Hardt  wrote:
> Hi,
> I'm trying to write a function to determine the euclidean distance
> between x (one point) and y (a set of n points). How should I pass y to
> the function? Until now, I used a matrix like that:
>
>      [,1] [,2] [,3]
> [1,]    0    2    1
> [2,]    1    1    1
>
> Which would pass the points (0,2,1) and (1,1,1) to that function.
>
> However, when I pass x as a normal (column) vector, the two variables
> don't match in the function. I either have to transpose x or y, or save
> a vector of vectors another way.
>
> My question: What is the standard way to save more than one vector in R?
> (my matrix y)
> Is it just my y transposed or maybe a list or something I don't yet know?
>
> Thanks in advance,
>  Alex
>
>
>        [[alternative HTML version deleted]]
>

-- 
Sarah Goslee
http://www.functionaldiversity.org



Re: [R] Data Cube in R from CSV

2010-11-13 Thread clangkamp

Indeed I am looking for a View() type thing. Unfortunately I am no developer,
so I can only work with what is already there.
The way forward, then, is to try everything with a very small cube that is
still manageable, and then do the same with the big one. But View() is
really not good for larger data cubes.



[R] how to store a vector of vectors

2010-11-13 Thread Alexx Hardt
Hi,
I'm trying to write a function to determine the euclidean distance 
between x (one point) and y (a set of n points). How should I pass y to 
the function? Until now, I used a matrix like that:

     [,1] [,2] [,3]
[1,]    0    2    1
[2,]    1    1    1

Which would pass the points (0,2,1) and (1,1,1) to that function.

However, when I pass x as a normal (column) vector, the two variables 
don't match in the function. I either have to transpose x or y, or save 
a vector of vectors another way.

My question: What is the standard way to save more than one vector in R? 
(my matrix y)
Is it just my y transposed or maybe a list or something I don't yet know?

Thanks in advance,
  Alex


[[alternative HTML version deleted]]



[R] Reading location info from kml files

2010-11-13 Thread fbielejec
Dear,
I have a bunch of kml files in the general form: 


<Document>
  <name>discrete rates with bayes factor larger than 3.0</name>
  <Placemark>
    <name>rate1_part1</name>
    <styleUrl>#rate1_style</styleUrl>
    <LineString>
      <altitudeMode>relativeToGround</altitudeMode>
      <tessellate>1</tessellate>
      <coordinates>
        -17.,14.75,0.0
        -17.3257204,14.737026,1024.9356539689436
      </coordinates>
    </LineString>
  </Placemark>
  ...
</Document>

I would like to parse the coordinates and names from them, to eventually
have something of the form:

-17.,   14.75,  0.0,
rate1_part1, -17.3257204,   14.737026,
1024.9356539689436, rate1_part1,

So far I have only found function getKMLcoordinates (maptools), which
does parse the latitude and longitude but not the location names. What
else can I use here?
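One possibility, sketched under the assumption that the XML package is
installed and the files use the standard KML 2.2 namespace (the file name
"rates.kml" is hypothetical):

```r
library(XML)  # assumption: the XML package is available

doc <- xmlParse("rates.kml")  # hypothetical file name
ns  <- c(k = "http://www.opengis.net/kml/2.2")

# Placemark names and their coordinate strings, in document order
nms    <- xpathSApply(doc, "//k:Placemark/k:name", xmlValue, namespaces = ns)
coords <- xpathSApply(doc, "//k:Placemark//k:coordinates", xmlValue,
                      namespaces = ns)

# Each coordinates string holds whitespace-separated "lon,lat,alt" tuples
data.frame(name = nms, coordinates = coords)
```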



Re: [R] predict.coxph

2010-11-13 Thread Mike Marchywka
> Date: Fri, 12 Nov 2010 16:08:57 -0600
> From: thern...@mayo.edu
> To: james.whan...@gmail.com
> CC: r-help@r-project.org; haenl...@escpeurope.eu
> Subject: Re: [R] predict.coxph
>
> Jim,
> I respectfully disagree, and there is 5 decades of literature to back
> me up. Berkson and Gage (1950) is in response to medical papers that
> summarized surgical outcomes using only the observed deaths, and shows
> important failings of the method. Ignoring the censored cases usually
> gives biased answers, often so badly so that they are misleading and
> worse than no answer at all. The PH model is surprisingly accurate in

(Yes, I read all the way through and noted your caveats below,
but I am curious about the reality of what you encounter and what would
make sense to consider in the future, as a better understanding of
causality can remove random events.)

If you are looking at radioactive decay, maybe, but how often do you
actually see exponential KM curves in real life? Certainly, depending on
the MOA of the drug or the disease/enrollment criteria, you could expect
qualitative changes in disease trajectory and consequently in survival
curves. A trial design could in fact try to get the whole control sample
to "event" at the same time, if enough was known about prognostic factors
and natural trajectory, as this should make drug effects quite clear; a
step function, of course, is not a constant hazard. (Writing a label based
on this trial may annoy the FDA ["indicated for patients with exactly 6
months of life expectancy based on XYZ paper", LOL], but from a
statistical standpoint it would seem like a good idea to consider in order
to get power with few patients.) At minimum, there could be some initial
plateau, as almost-dead patients may be excluded, etc.




> acute disease (I work in areas like multiple myeloma and liver

On the R-related topic, do you know anything about results
with VLA-4 inhibitors in MM?

> transplant so see a lot of this) and is also used in economics (duration
> of unemployment for instance), the accelerated failure time models have
> proven very reliable predictors in industry work. Censored linear
> regression (e.g. "Tobit" model) is not uncommon. I am not aware of any
> cases where ignoring the censored cases gives a competitive answer.

Are you talking about right-censored data? These points would seem to be
informative, as they have survived this long, and simply ignoring them
would create bias. Certainly, loss to follow-up should be unbiased if just
ignored, no? Personally, I think I finally decided that comparing integral
measures may be more helpful (patient-months of excess survival, for
example) rather than asking about things like means or medians.

So basically your conversation is about calculating things like average 
survival time with many data points yet to event? 

> Blindly using a coxph model without checking into or at least thinking
> about the proportional hazards assumption is dangerous, but so is blind
> use of any other model.

As noted above, I wasn't trying to take your earlier statement out of context...

>
> Terry T.
>
> --- Begin included message -
> Terry,
>
> My point was that if you are asking the question: What is the average
> time to death based on a set of variables? The only logical approach for
> calculating actual time to death is to use uncensored cases, because we
> do not know the time to death for the censored cases and can only
> estimate them. While actual time to death for uncensored cases may not
> be a very useful piece of information, it can indeed be calculated.
> However, as you point out predicted values for time to death can be
> estimated using the survival function which incorporates both censored
> and uncensored data. However, the assumption of proportional hazards is
> rarely defensible.
>
> Best,
>
> Jim
>


Re: [R] How to set an argument such that a function treats it as missing?

2010-11-13 Thread Duncan Murdoch

Marius Hofert wrote:

Dear expeRts,

I would like to call a function f from a function g with or without an argument. 
I use missing() to check if the argument is given. If it is not given, can I set 
it to anything such that the following function call (to f) behaves as if the argument

isn't given? It's probably best described by a minimal example (see below).

The reason why I want to do this is, that I do not have to distinguish between 
the
cases when the argument is given or not. By setting it to something (what?) in 
the
latter case, I can use the same code in the subsequent part of the function.

Cheers,

Marius



f <- function(x) if(missing(x)) print("f: missing x") else print(x)

g <- function(x){
if(missing(x)){
print("g: missing x")
x <- NULL # I try to set it to something here such that...


Just leave out the line above, and you'll get both messages printed:

> g()
[1] "g: missing x"
[1] "f: missing x"

Duncan Murdoch


}
	f(x) # ... this call to f behaves like f() 
} 


g() # should print "f: missing x" (is this possible?)
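Putting Duncan's suggestion together, i.e. simply deleting the `x <- NULL`
line, missingness propagates from g to f (a minimal sketch):

```r
f <- function(x) if (missing(x)) print("f: missing x") else print(x)

g <- function(x) {
  if (missing(x)) print("g: missing x")
  f(x)  # x is still a missing argument here, so f() sees it as missing
}

g()  # prints "g: missing x" then "f: missing x"
```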


Re: [R] How to set an argument such that a function treats it as missing?

2010-11-13 Thread Gabor Grothendieck
On Sat, Nov 13, 2010 at 3:14 AM, Marius Hofert  wrote:
> Dear expeRts,
>
> I would like to call a function f from a function g with or without an 
> argument.
> I use missing() to check if the argument is given. If it is not given, can I 
> set
> it to anything such that the following function call (to f) behaves as if the 
> argument
> isn't given? It's probably best described by a minimal example (see below).
>
> The reason why I want to do this is, that I do not have to distinguish 
> between the
> cases when the argument is given or not. By setting it to something (what?) 
> in the
> latter case, I can use the same code in the subsequent part of the function.
>

You can pass missing values:

f <- function(x) g(x)
g <- function(x) missing(x)
f(3) # FALSE
f() # TRUE


-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com



Re: [R] How to set an argument such that a function treats it as missing?

2010-11-13 Thread Michael Bedward
Or yet another way which is (I think) a bit closer to your requirement...

f <- function(x) {
  if (missing(x)) cat("x is missing \n")
  else cat("x was provided \n")
}

g <- function(x) {
  if (missing(x))
    fcall <- call("f")
  else
    fcall <- call("f", x)

  eval(fcall)
}


On 13 November 2010 20:24, Michael Bedward  wrote:
> Hello Marius,
>
> NULL is not the same as missing. You could do something like this in
> various ways. Here are a couple...
>
> g <- function(x) {
>  if (missing(x)) {
>    f()
>  } else {
>    f(x)
>  }
> }
>
> or change f to detect null args
>
> g <- function(x) {
>  if (missing(x)) {
>    x <- NULL
>  }
>
>  f(x)
> }
>
> f <- function(x) {
>  if (missing(x) || is.null(x)) {
>    # do something
>  }
> }
>
>
> Michael
>
>
> On 13 November 2010 19:14, Marius Hofert  wrote:
>> Dear expeRts,
>>
>> I would like to call a function f from a function g with or without an 
>> argument.
>> I use missing() to check if the argument is given. If it is not given, can I 
>> set
>> it to anything such that the following function call (to f) behaves as if 
>> the argument
>> isn't given? It's probably best described by a minimal example (see below).
>>
>> The reason why I want to do this is, that I do not have to distinguish 
>> between the
>> cases when the argument is given or not. By setting it to something (what?) 
>> in the
>> latter case, I can use the same code in the subsequent part of the function.
>>
>> Cheers,
>>
>> Marius
>>
>>
>>
>> f <- function(x) if(missing(x)) print("f: missing x") else print(x)
>>
>> g <- function(x){
>>        if(missing(x)){
>>                print("g: missing x")
>>                x <- NULL # I try to set it to something here such that...
>>        }
>>        f(x) # ... this call to f behaves like f()
>> }
>>
>> g() # should print "f: missing x" (is this possible?)


Re: [R] How to set an argument such that a function treats it as missing?

2010-11-13 Thread Michael Bedward
Hello Marius,

NULL is not the same as missing. You could do something like this in
various ways. Here are a couple...

g <- function(x) {
  if (missing(x)) {
f()
  } else {
f(x)
  }
}

or change f to detect null args

g <- function(x) {
  if (missing(x)) {
x <- NULL
  }

  f(x)
}

f <- function(x) {
  if (missing(x) || is.null(x)) {
    # do something
  }
}


Michael


On 13 November 2010 19:14, Marius Hofert  wrote:
> Dear expeRts,
>
> I would like to call a function f from a function g with or without an 
> argument.
> I use missing() to check if the argument is given. If it is not given, can I 
> set
> it to anything such that the following function call (to f) behaves as if the 
> argument
> isn't given? It's probably best described by a minimal example (see below).
>
> The reason why I want to do this is, that I do not have to distinguish 
> between the
> cases when the argument is given or not. By setting it to something (what?) 
> in the
> latter case, I can use the same code in the subsequent part of the function.
>
> Cheers,
>
> Marius
>
>
>
> f <- function(x) if(missing(x)) print("f: missing x") else print(x)
>
> g <- function(x){
>        if(missing(x)){
>                print("g: missing x")
>                x <- NULL # I try to set it to something here such that...
>        }
>        f(x) # ... this call to f behaves like f()
> }
>
> g() # should print "f: missing x" (is this possible?)


Re: [R] RMySQL on Windows 2008 64 Bit -Help!

2010-11-13 Thread Santosh Srinivas
I could do that but will have to change all my code.

It would be great if I could get RMySQL on the 64 bit machine.

-Original Message-
From: Ajay Ohri [mailto:ohri2...@gmail.com] 
Sent: 13 November 2010 14:13
To: Santosh Srinivas
Cc: r-help@r-project.org
Subject: Re: [R] RMySQL on Windows 2008 64 Bit -Help!

did you try the RODBC package as well.

Regards

Ajay

Websites-
http://decisionstats.com
http://dudeofdata.com


Linkedin- www.linkedin.com/in/ajayohri





On Sat, Nov 13, 2010 at 9:22 AM, Santosh Srinivas
 wrote:
> Dear Group,
>
> I'm having lots of problems getting RMySQL on a 64 bit machine. I followed
> all instructions available but couldn't get it working yet! Please help.
> See the output below.
>
> I did an install of the RMySQL binary from the Revolution CRAN source. It seems
> to have unpacked fine but gives this error when I call RMySQL
> Error: package 'RMySQL' is not installed for 'arch=x64'
>
>
> I set this environment variable on the windows path
>> Sys.getenv('MYSQL_HOME')
>                               MYSQL_HOME
> "C:/Program Files/MySQL/MySQL Server 5.0"
>
>>  install.packages('RMySQL',type='source')
> trying URL
> 'http://cran.revolutionanalytics.com/src/contrib/RMySQL_0.7-5.tar.gz'
> Content type 'application/x-gzip' length 160769 bytes (157 Kb)
> opened URL
> downloaded 157 Kb
>
> * installing *source* package 'RMySQL' ...
> checking for $MYSQL_HOME... C:/Program Files/MySQL/MySQL Server 5.0
> test: Files/MySQL/MySQL: unknown operand
> ERROR: configuration failed for package 'RMySQL'
> * removing 'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'
> * restoring previous
> 'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'
>
> The downloaded packages are in
>
>
'C:\Users\Administrator\AppData\Local\Temp\2\RtmpvGgrzb\downloaded_packages'
> Warning message:
> In install.packages("RMySQL", type = "source") :
>  installation of package 'RMySQL' had non-zero exit status
>> sessionInfo()
> R version 2.11.1 (2010-05-31)
> x86_64-pc-mingw32
>
> locale:
> [1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
> States.1252    LC_MONETARY=English_United States.1252 LC_NUMERIC=C
>
> [5] LC_TIME=English_United States.1252
>
> attached base packages:
> [1] stats     graphics  grDevices utils     datasets  methods   base
>
> other attached packages:
> [1] Revobase_4.0.0   RevoScaleR_1.0-0 lattice_0.18-8
>
> loaded via a namespace (and not attached):
> [1] grid_2.11.1  tools_2.11.1
>


Re: [R] RMySQL on Windows 2008 64 Bit -Help!

2010-11-13 Thread Ajay Ohri
did you try the RODBC package as well.

Regards

Ajay

Websites-
http://decisionstats.com
http://dudeofdata.com


Linkedin- www.linkedin.com/in/ajayohri





On Sat, Nov 13, 2010 at 9:22 AM, Santosh Srinivas
 wrote:
> Dear Group,
>
> I'm having lots of problems getting RMySQL on a 64 bit machine. I followed
> all instructions available but couldn't get it working yet! Please help.
> See the output below.
>
> I did an install of the RMySQL binary from the Revolution CRAN source. It seems
> to have unpacked fine but gives this error when I call RMySQL
> Error: package 'RMySQL' is not installed for 'arch=x64'
>
>
> I set this environment variable on the windows path
>> Sys.getenv('MYSQL_HOME')
>                               MYSQL_HOME
> "C:/Program Files/MySQL/MySQL Server 5.0"
>
>>  install.packages('RMySQL',type='source')
> trying URL
> 'http://cran.revolutionanalytics.com/src/contrib/RMySQL_0.7-5.tar.gz'
> Content type 'application/x-gzip' length 160769 bytes (157 Kb)
> opened URL
> downloaded 157 Kb
>
> * installing *source* package 'RMySQL' ...
> checking for $MYSQL_HOME... C:/Program Files/MySQL/MySQL Server 5.0
> test: Files/MySQL/MySQL: unknown operand
> ERROR: configuration failed for package 'RMySQL'
> * removing 'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'
> * restoring previous
> 'C:/Revolution/Revo-4.0/RevoEnt64/R-2.11.1/library/RMySQL'
>
> The downloaded packages are in
>
> 'C:\Users\Administrator\AppData\Local\Temp\2\RtmpvGgrzb\downloaded_packages'
> Warning message:
> In install.packages("RMySQL", type = "source") :
>  installation of package 'RMySQL' had non-zero exit status
>> sessionInfo()
> R version 2.11.1 (2010-05-31)
> x86_64-pc-mingw32
>
> locale:
> [1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
> States.1252    LC_MONETARY=English_United States.1252 LC_NUMERIC=C
>
> [5] LC_TIME=English_United States.1252
>
> attached base packages:
> [1] stats     graphics  grDevices utils     datasets  methods   base
>
> other attached packages:
> [1] Revobase_4.0.0   RevoScaleR_1.0-0 lattice_0.18-8
>
> loaded via a namespace (and not attached):
> [1] grid_2.11.1  tools_2.11.1
>


[R] How to set an argument such that a function treats it as missing?

2010-11-13 Thread Marius Hofert
Dear expeRts,

I would like to call a function f from a function g, with or without an
argument.
I use missing() to check if the argument is given. If it is not given, can
I set it to anything such that the following function call (to f) behaves
as if the argument isn't given? It's probably best described by a minimal
example (see below).

The reason I want to do this is that I then do not have to distinguish
between the cases where the argument is given or not. By setting it to
something (what?) in the latter case, I can use the same code in the
subsequent part of the function.

Cheers,

Marius



f <- function(x) if(missing(x)) print("f: missing x") else print(x)

g <- function(x){
    if(missing(x)){
        print("g: missing x")
        x <- NULL # I try to set it to something here such that...
    }
    f(x) # ... this call to f behaves like f()
}

g() # should print "f: missing x" (is this possible?)