Re: [R] Constrained Optimisation

2011-12-19 Thread Enrico Schumann


This looks much like the kinds of problems that function 'solve.QP' in 
package 'quadprog' can handle. But if you want people to help you, 
follow the posting guide and 'provide commented, minimal, 
self-contained, reproducible code'. You need to show people what you 
tried and how it failed (so provide actual error messages, and do not 
just say 'it creates an error').
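
For what it is worth, a minimal sketch of the usual minimum-variance variant
with solve.QP might look like the following (made-up numbers; note that
solve.QP minimises a quadratic form, it does not maximise one):

library(quadprog)
S    <- matrix(c(1, .2, .1,  .2, 1, .3,  .1, .3, 1), 3, 3)  # hypothetical covariance matrix
lb   <- rep(0, 3); ub <- rep(0.8, 3)                        # hypothetical box constraints
Amat <- cbind(rep(1, 3), diag(3), -diag(3))                 # sum(w) = 1, w >= lb, -w >= -ub
bvec <- c(1, lb, -ub)
sol  <- solve.QP(Dmat = 2 * S, dvec = rep(0, 3), Amat = Amat, bvec = bvec, meq = 1)
sol$solution                                                # meq = 1: the first constraint is an equality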


Regards,
Enrico

On 19.12.2011 20:32, Russell2 wrote:

Dear All

I have a constrained optimisation problem, I want to maximise the following
function

t(weights) %*% CovarianceMatrix %*% weights

for the weights,

subject to constraints on each element within the weights & the weights
vector summing to 1.

i.e.

weights = (x1, x2, x3), where x1 is within some given range (a +b, a - b).

I have tried to do this using the optim function in R, setting the minimum &
maximum tolerances as lower and upper bounds but once I set the constraint
of the weights summing to 1 in the function, it creates an error.

I have also looked at the constrOptim & solveQP packages and have similar
problems with constraint settings.

Thanks in advance.

--
View this message in context: 
http://r.789695.n4.nabble.com/Constrained-Optimisation-tp4215341p4215341.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Enrico Schumann
Lucerne, Switzerland
http://nmof.net/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] column permutation of sparse matrix

2011-12-19 Thread khai
Hi,

I'm very new to working with sparse matrices and would like to know how I
can column permute a sparse matrix. Here is a small example: 

> M1 <-
> spMatrix(nrow=5,ncol=6,i=sample(5,15,replace=TRUE),j=sample(6,15,replace=TRUE),x=round_any(rnorm(15,2),0.001))
> M1
5 x 6 sparse Matrix of class "dgTMatrix"

[1,] 2.983 .     1.656 5.003 . .
[2,] .     .     2.990 .     . .
[3,] .     0.592 5.349 1.115 . .
[4,] 1.836 .     2.804 .     . .
[5,] .     6.961 .     .     . 1.077

I know I can permute entire columns this way

> M1[,sample(6,6)]
5 x 6 sparse Matrix of class "dgTMatrix"

[1,] 5.003 .     .     . 1.656 2.983
[2,] .     .     .     . 2.990 .
[3,] 1.115 0.592 .     . 5.349 .
[4,] .     .     .     . 2.804 1.836
[5,] .     6.961 1.077 . .     .

But I would like the new sparse matrix to look like this...where only the
nonzero elements are permuted.

[1,] 1.656 .     5.003 2.983 . .
[2,] .     .     2.990 .     . .
[3,] .     5.349 1.115 0.592 . .
[4,] 2.804 .     1.836 .     . .
[5,] .     1.077 .     .     . 6.961

Thanks in advance for any advice!
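
If I read the desired output correctly (the nonzero values are shuffled within
each row while the sparsity pattern stays fixed), a hedged sketch using the
slots of a dgTMatrix would be:

library(Matrix)
M2 <- M1
## permute the nonzero values within each row (slot i holds the 0-based row indices)
M2@x <- ave(M1@x, M1@i, FUN = function(v) v[sample(length(v))])
M2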

--
View this message in context: 
http://r.789695.n4.nabble.com/column-permutation-of-sparse-matrix-tp4216726p4216726.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] boot.ci: [Error: cannot allocate vector of size 1.5 Gb]

2011-12-19 Thread Vikram Bahure
Dear R users,

I am getting the following error while using boot.ci. I have an int.inc function
with 2 values. I am generating a CI for the sample estimate.

>   med <- function(x,i) median(x[i])
>   b1 <- boot(int.inc,med,2)
>   ci.out <- boot.ci(b1,conf = c(0.95),type = c("bca"))
Error: cannot allocate vector of size 1.5 Gb
It would be very helpful to get some insight on this.

Regards
Vikram

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] replacement has 13665 rows, data has 13664

2011-12-19 Thread Nicole Marie Ford
Hello, all.

I have come across a problem.  Previously, when I recoded my DV, added the 
variable to my dataset and ran the multinom, I was just fine.  But I am doing 
it again and I am getting this error.

R> Poland$trust <- trust
Error in `$<-.data.frame`(`*tmp*`, "trust", value = c(NA, NA, NA, NA,  : 
  replacement has 13665 rows, data has 13664

Do I need to delete a row?  And if so, how would I do this?
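
As a first (sketchy) check of where the extra element comes from:

length(trust)   # presumably 13665
nrow(Poland)    # 13664

If trust was built from a different or unfiltered object, recomputing it from
Poland itself should make the two lengths agree.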

Thanks so much.

~Nicole

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] predict risk (% of death) at a certain time point

2011-12-19 Thread David Winsemius


On Dec 19, 2011, at 10:05 PM, Bill Hyman wrote:


Dear all,

Does anyone know how to calculate the risk (death rate) at some  
time point (say 5 yrs) based on the Kaplan-Meier curve in a coxph model in  
R? I know how to fit the survival data with coxph. But I need to  
calculate the risk at some time point. Many thanks.


Have you looked at either:

?survfit.coxph
?predict.coxph   # with type ="expected"
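
For example, a sketch along those lines (assuming a fitted coxph object 'fit'
and a data frame 'newdat' of covariate values; both names are placeholders):

library(survival)
sf <- survfit(fit, newdata = newdat)
s5 <- summary(sf, times = 5)$surv   # estimated survival probability at t = 5
1 - s5                              # estimated risk of death by t = 5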

This seems to be a particularly common question this week. Is there  
some sort of class project that is due before the holidays?



--
David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] predict risk (% of death) at a certain time point

2011-12-19 Thread Bill Hyman
Dear all,

Does anyone know how to calculate the risk (death rate) at some time point 
(say 5 yrs) based on the Kaplan-Meier curve in a coxph model in R? I know how to fit 
the survival data with coxph. But I need to calculate the risk at some time point. 
Many thanks.

Bill

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to schedule R to run automatically

2011-12-19 Thread Agus MS
I don't know if this code matches what you want. You should use the tcltk library.
code:
library(tcltk)
z <- function () { cat("Hello you!\n"); .id <<- tcl("after", 1000, z)}
z()

If you want to cancel you can use:
.id <<- tcl("after", 1000, z)
tcl("after", "info", .id)   # To get info about this scheduled task
tcl("after", "cancel", .id) # To cancel the currently scheduled task

Regards,
Agus MS

On Mon, Dec 19, 2011 at 11:10 PM, Kenneth Zhang wrote:

> I'd like to run an R code automatically on Windows 7, say once every 2
> hours. Could anyone tell me how to manage it?
>
> Thanks in advance!
>
> Best regards,
> Ken
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Summing x1 to x6

2011-12-19 Thread Florent D.
>> I tried to be clever by trying get(paste(paste("x", 1:6, sep=""), 
>> collapse="+")) but it didn't work.
Maybe you meant something like this:
sapply(paste("x", 1:6, sep=""), function(x)mean(get(x)))



On Mon, Dec 19, 2011 at 2:30 PM, David Winsemius  wrote:
>
> On Dec 19, 2011, at 11:00 AM, Pablo wrote:
>
>> Suppose I have the following:
>>
>> x1<-as.vector(rnorm(10))
>> x2<-as.vector(rnorm(10))
>> x3<-as.vector(rnorm(10))
>> x4<-as.vector(rnorm(10))
>> x5<-as.vector(rnorm(10))
>> x6<-as.vector(rnorm(10))
>> x7<-as.vector(rnorm(10))
>> x8<-as.vector(rnorm(10))
>> x9<-as.vector(rnorm(10))
>> x10<-as.vector(rnorm(10))
>>
>> I would like the mean of x1 to x6 for each vector position.
>
>
>> apply( data.frame(x1,x2,x3,x4,x5,x6), 1, mean)
>  [1] -0.39321854 -0.92940746 -0.04843466 -0.27764111  0.44058599  0.73512759
>  [7] -0.22232332 -0.89535287 -0.33430655  0.66217526
>> apply( matrix(c(x1,x2,x3,x4,x5,x6),ncol=6), 1, mean)
>  [1] -0.39321854 -0.92940746 -0.04843466 -0.27764111  0.44058599  0.73512759
>  [7] -0.22232332 -0.89535287 -0.33430655  0.66217526
>
> After seeing Jim Holtman's suggestion, it should be clear that rowMeans
> would work even better than these `apply` solutions when used with this
> "vertical" orientation of the vectors.
>
>
>
>> I would do
>> something else with x7-x10.  These vectors are not currently in a
>> dataframe.
>> I tried to be clever by trying get(paste(paste("x", 1:6, sep=""),
>> collapse="+")) but it didn't work.  Any thoughts are greatly appreciated,
>>
>
> --
>
> David Winsemius, MD
> West Hartford, CT
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Constrained Optimisation

2011-12-19 Thread Florent D.
try the lsei function from the limSolve package.
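
A hedged sketch of the minimum-variance variant (made-up numbers); lsei
minimises ||A x - B||^2 subject to E x = F and G x >= H, and
||chol(S) w||^2 = w' S w, so:

library(limSolve)
S <- matrix(c(1, .2, .1,  .2, 1, .3,  .1, .3, 1), 3, 3)   # hypothetical covariance matrix
lsei(A = chol(S), B = rep(0, 3),
     E = matrix(1, 1, 3), F = 1,                          # sum(w) = 1
     G = rbind(diag(3), -diag(3)),
     H = c(rep(0, 3), rep(-0.8, 3)))$X                    # 0 <= w <= 0.8 (made-up bounds)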

On Mon, Dec 19, 2011 at 2:32 PM, Russell2  wrote:
> Dear All
>
> I have a constrained optimisation problem, I want to maximise the following
> function
>
> t(weights) %*% CovarianceMatrix %*% weights
>
> for the weights,
>
> subject to  constraints on each element within the weights & the weights
> vector  summing to 1.
>
> i.e.
>
> weights = (x1, x2, x3), where x1 is within some given range (a +b, a - b).
>
> I have tried to do this using the optim function in R, setting the minimum &
> maximum tolerances as lower and upper bounds but once I set the constraint
> of the weights summing to 1 in the function, it creates an error.
>
> I have also looked at the constrOptim & solveQP packages and have similar
> problems with constraint settings.
>
> Thanks in advance.
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Constrained-Optimisation-tp4215341p4215341.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Color2D.matplot uniform color range

2011-12-19 Thread Jim Lemon

On 12/13/2011 02:15 AM, jalfaro wrote:

Hi Jim,
Thanks so much for your help. I have read several of your responses on this
mailing list and they have helped me out quite a bit as I have gotten more
and more used to R.

I am still a little confused here by your response.
I think you understood my requirements correctly.
In your words:
  I want to anchor the extremes of the scales regardless of the values in the
matrix.
  I want 1 color gradient for values between 0 and 1 with a lowerbound cutoff
at 0.5.
  I want a second color gradient for values between 1 and infinity with an
upper bound cutoff at 3.

The function now reads as follows:

make_Q_figure<-function(filename,alias){
  h0<- read.csv(file=filename,head=TRUE,sep=",",row.names=1)
  d =data.matrix(h0)
  cellcolors<-matrix(NA,nrow=20,ncol=20)
  d<- d[ind,ind]
  cellcolors[d>= 1&  d<  3]<- color.scale(c(1,3,d[d>= 1&  d<  3]),
cs1=1,cs2=c(1,0),cs3=c(1,0))[-1:2]
  cellcolors[d<1]<-
color.scale(c(0,1,d[d<1]),cs1=c(0,0,1),cs2=c(0,0,1),cs3=1)[-1:2]
  cellcolors[d>= 2]<-"red"

color2D.matplot(d,cellcolors=cellcolors,show.values=F,na.color="white",axes=FALSE,main=alias,
xlab="",ylab="")
  axis(1,at=0.5:19.5,labels=colnames(d))
  axis(2,at=0.5:19.5,labels=rev(rownames(d)))
}

However when I execute this function I get the following error:
Error in color.scale(c(0, 1, d[d<  1]), cs1 = c(0, 0, 1), cs2 = c(0, 0,  :
   only 0's may be mixed with negative subscripts

Hi Jav,
My fault, the line should read:

color.scale(c(0,1,d[d<1]),cs1=c(0,0,1),cs2=c(0,0,1),cs3=1)[-(1:2)]
 cellcolors[d>= 2]<-"red"

I always forget that the unary minus has precedence over the sequence 
operator.
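
A quick illustration of the precedence point:

-1:2      # -1  0  1  2   (minus binds to the 1 first, then ':' runs from -1 to 2)
-(1:2)    # -1 -2         (what negative indexing needs -- hence the subscript error above)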


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] On Corrections for Chi-Sq Goodness of Fit Test

2011-12-19 Thread Michael Fuller
TOPIC
My question regards the philosophy behind how R implements corrections to 
chi-square statistical tests. At least in recent versions (I'm using 2.13.1 
(2011-07-08) on OSX 10.6.8.), the chisq.test function applies the Yates 
continuity correction for 2 by 2 contingency tables. But when used as a 
goodness of fit test (GoF, aka likelihood ratio test), chisq.test does not 
appear to implement any corrections for widely recognized problems, such as 
small sample size, non-uniform expected frequencies, and one D.F. 

From the help page:
"In the goodness-of-fit case simulation is done by random sampling from the 
discrete distribution specified by p, each sample being of size n = sum(x)."

Is the thinking that random sampling completely obviates the need for 
corrections? Wouldn't the same statistical issues still apply (e.g. poor 
continuity approximation with one D.F., problems with non-uniform expected 
frequencies, etc) with random sampling?
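
For what it is worth, a small sketch of the simulation route the help page
describes (made-up counts and proportions):

obs <- c(12, 3, 5)
p0  <- c(0.5, 0.25, 0.25)
chisq.test(obs, p = p0)                                      # asymptotic p-value (may warn)
chisq.test(obs, p = p0, simulate.p.value = TRUE, B = 10000)  # Monte-Carlo p-value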

Regards,
Mike 

Michael M. Fuller, Ph.D.
Department of Biology
University of New Mexico
Albuquerque, NM

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Polygon

2011-12-19 Thread Jim Lemon

Komine wrote:


Hi everybody,
I have a matrix with 3 columns (Date, MeanArea and SdArea). I want to
draw a figure showing the variable MeanArea as a function of Date. But
instead of using the variable SdArea for error bars, I want to use a
"polygon error" band. I use this code but the output does not look right.
Poly

Hi Komine,
I think you want a dispersion band around y (Mean Area). Without the 
data I really can't tell, but the dispersion function in the plotrix 
package using type="l" and fill= may do what you want.
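
A hedged sketch of that, assuming a data frame 'd' with columns Date, MeanArea
and SdArea (the names taken from your description):

library(plotrix)
plot(d$Date, d$MeanArea, type = "l")
dispersion(d$Date, d$MeanArea, d$SdArea, type = "l", fill = "lightgray")
lines(d$Date, d$MeanArea)   # redraw the mean line on top of the filled band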


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rendering or raytracing?

2011-12-19 Thread baptiste auguie
Hi,

package planar is concerned with the full electromagnetic problem at
planar interfaces, which is not very useful for raytracing. The cda
package includes a small demo interfacing either rgl or povray (via
system call) to visualize 3D clusters of metallic particles. A more
general interface to povray or pbrt or luxrender etc. would be very
nice to have.

baptiste



On 20 December 2011 05:01, skan  wrote:
> Hi
>
> Then OK, I mean raytracing directly with some R package.
> Or maybe some optics package
> I've seen there exist an optics package called planar but it's only for
> reflection and transmission at planar interfaces.
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/rendering-or-raytracing-tp4213944p4214608.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] pls help to print out first row of terms(model) output in example program

2011-12-19 Thread William Dunlap
Does applying formula() to the output of lm() (or terms())
do what you want?
  > d <- data.frame(y=1:10, x1=log(1:10), x2=sqrt(1:10), x3=1/(1:10))
  > formula(lm(formula("y~."), data=d)) # dot expanded
  y ~ x1 + x2 + x3
  > lm(formula("y~."), data=d)$call[[2]] # your original
  formula("y~.")
  > lm(formula("y~."), data=d)$call$formula # a better version of original
  formula("y~.")

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com 

> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
> Behalf Of Paul Johnson
> Sent: Monday, December 19, 2011 1:15 PM
> To: R-help
> Subject: [R] pls help to print out first row of terms(model) output in 
> example program
> 
> Greetings.
> 
> I've written a convenience function for multicollinearity diagnosis.
> I'd like to report to the user the formula that is used in a
> regression.  I get output like this:
> 
> > mcDiagnose(m1)
> [1] "The following auxiliary models are being estimated and returned in a 
> list:"
> [1] "`x1` ~ ."
> formula(fmla)()
> [1] "`x2` ~ ."
> 
> I'd like to fill in the period with the variable names that are in there.
> 
> I know the information is in there, I just need to get it.  The terms
> output for a fitted model has the output at the very top, in the first
> line, above the attributes.  I just want to print out that first line,
> here:
> 
> > terms(m2)
> y ~ log(10 + x1) + poly(x2, 2)
> attr(,"variables")
> list(y, log(10 + x1), poly(x2, 2))
> attr(,"factors")
>              log(10 + x1) poly(x2, 2)
> y                       0           0
> log(10 + x1)            1           0
> poly(x2, 2)             0           1
> attr(,"term.labels")
> [1] "log(10 + x1)" "poly(x2, 2)"
> [snip]
> 
> In my working example code below , I need the help where I have "##fix
> me fix me##
> 
> 
> ##Paul Johnson
> ## 2011-12-19
> ## mcDiagnose.R
> 
> lmAuxiliary <- function(model){
>   dat <- as.data.frame(model.matrix(model))
>   ## ivnames <- attr(delete.response(terms(model)), "term.labels")
>   ## previous does not work with transforms like poly
>   hasIntercept <- attr(terms(model), "intercept")
> 
>   if (hasIntercept) dat <- dat[ , -1] # removes intercept. assumes
> intercept in column 1
>   ivnames <- colnames(dat)
>   print("The following auxiliary models are being estimated and
> returned in a list:")
> 
>   results <- list()
> 
>   for (i in ivnames) {
> fmla <- paste( "`", i, "`", " ~ ." , sep="");
> print(fmla)
> maux <- lm( formula(fmla), data=dat)
> results[[ i ]] <- maux
> print(maux$call[2])
> ###fix me fix me ##
>   }
>   results
> }
> 
> 
> mcDiagnose <- function(model){
>   auxRegs <- lmAuxiliary(model)
>   auxRsq <- numeric(length=length(auxRegs))
>   j <- 0
>   for ( i in auxRegs ){
> j <- j + 1
> sumry <- summary(i)
> auxRsq[j] <- sumry$r.squared
>   }
>   print("Drum roll please!")
>   print("And your R_j Squareds are (auxiliary Rsq)")
>   print(names(auxRegs))
>   print(auxRsq)
> }
> 
> x1 <- rnorm(100)
> x2 <- rnorm(100)
> y <- rnorm(100)
> 
> m1 <- lm(y~x1+x2)
> 
> mcDiagnose(m1)
> 
> 
> m2 <- lm(y~log(10+x1) + poly(x2,2))
> 
> mcDiagnose(m2)
> 
> --
> Paul E. Johnson
> Professor, Political Science
> 1541 Lilac Lane, Room 504
> University of Kansas
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] pls help to print out first row of terms(model) output in example program

2011-12-19 Thread Paul Johnson
Greetings.

I've written a convenience function for multicollinearity diagnosis.
I'd like to report to the user the formula that is used in a
regression.  I get output like this:

> mcDiagnose(m1)
[1] "The following auxiliary models are being estimated and returned in a list:"
[1] "`x1` ~ ."
formula(fmla)()
[1] "`x2` ~ ."

I'd like to fill in the period with the variable names that are in there.

I know the information is in there, I just need to get it.  The terms
output for a fitted model has the output at the very top, in the first
line, above the attributes.  I just want to print out that first line,
here:

> terms(m2)
y ~ log(10 + x1) + poly(x2, 2)
attr(,"variables")
list(y, log(10 + x1), poly(x2, 2))
attr(,"factors")
             log(10 + x1) poly(x2, 2)
y                       0           0
log(10 + x1)            1           0
poly(x2, 2)             0           1
attr(,"term.labels")
[1] "log(10 + x1)" "poly(x2, 2)"
[snip]

In my working example code below , I need the help where I have "##fix
me fix me##


##Paul Johnson
## 2011-12-19
## mcDiagnose.R

lmAuxiliary <- function(model){
  dat <- as.data.frame(model.matrix(model))
  ## ivnames <- attr(delete.response(terms(model)), "term.labels")
  ## previous does not work with transforms like poly
  hasIntercept <- attr(terms(model), "intercept")

  if (hasIntercept) dat <- dat[ , -1] # removes intercept. assumes
intercept in column 1
  ivnames <- colnames(dat)
  print("The following auxiliary models are being estimated and
returned in a list:")

  results <- list()

  for (i in ivnames) {
fmla <- paste( "`", i, "`", " ~ ." , sep="");
print(fmla)
maux <- lm( formula(fmla), data=dat)
results[[ i ]] <- maux
print(maux$call[2])
###fix me fix me ##
  }
  results
}


mcDiagnose <- function(model){
  auxRegs <- lmAuxiliary(model)
  auxRsq <- numeric(length=length(auxRegs))
  j <- 0
  for ( i in auxRegs ){
j <- j + 1
sumry <- summary(i)
auxRsq[j] <- sumry$r.squared
  }
  print("Drum roll please!")
  print("And your R_j Squareds are (auxiliary Rsq)")
  print(names(auxRegs))
  print(auxRsq)
}

x1 <- rnorm(100)
x2 <- rnorm(100)
y <- rnorm(100)

m1 <- lm(y~x1+x2)

mcDiagnose(m1)


m2 <- lm(y~log(10+x1) + poly(x2,2))

mcDiagnose(m2)

-- 
Paul E. Johnson
Professor, Political Science
1541 Lilac Lane, Room 504
University of Kansas

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fractal image analysis

2011-12-19 Thread Andrés Aragón
Dear Petr,

Well, it is not R but it works well:
there is a fractal dimension and lacunarity plugin for ImageJ.

Best regards,

Andrés AM


2011/12/19 Sarah Goslee 

> Hi,
>
> I've always used FRAGSTATS, but it looks like the SDMTools package will
> do it within R.
>
> Sarah
>
> On Mon, Dec 19, 2011 at 4:56 AM, Petr PIKAL 
> wrote:
> > Dear all
> >
> > I tried to find some packages (or programs) for image analysis and
> > especially fractal dimension image analysis but so far I had not success.
> > It shall be used for particle surface layer analysis from TEM images.
> >
> > Any suggestions?
> >
> > Best regards
> >
> > Petr
>
>
> --
> Sarah Goslee
> http://www.functionaldiversity.org
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] nlrob problem

2011-12-19 Thread Pascal A. Niklaus

Dear all,

I am not sure if this mail is for R-help or should be sent to R-devel 
instead, and therefore post to both.


While using nlrob from package 'robustbase', I ran into the following 
problem:


For psi-functions that can become zero (e.g. psi.bisquare), weights in 
the internal call to nls can become zero. Example:


  d <- data.frame(x=1:5,y=c(2,3,5,10,9))
  d.nlrob <- nlrob(y ~ k*x,start=list(k=1),data=d,psi=psi.bisquare)

I think the problem is as follows: After the call to nls, a weighted 
residual vector is calculated by dividing by sqrt(w). This generates 
NaN's for weights of zero, which leads to problems in the subsequent 
call to nls during the next iteration:


for (iiter in 1:maxit) {
  ...
w <- psi(resid/Scale, ...)
  ...
data$w <- sqrt(w)
  ...
out <- nls( ., data=data ... )
  ...
resid <- -residuals(out)/sqrt(w)   # NaN for w=0
  ...
}

I wonder whether this problem shouldn't be dealt with by setting 'w' to 
0 for the NaN cases.


I can get a fit by calling nlrob with na.action=na.exclude, but I'd have 
intuitively assumed that na.action applies to the NAs in the data set 
passed to nlrob, not to the one internally generated by nlrob and passed 
to nls.


The second 'issue' is that the weights are passed as 'w', whereas the 
documentation of 'nls' says weights should be given as 'weights' :


   data: an optional data frame in which to evaluate the variables in
  ‘formula’ and ‘weights’.  Can also be a list or an
  environment, but not a matrix.

I think it would be good to mention in the documentation of 'nls' that 
weights can be passed as 'w' as well.


Pascal Niklaus

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Constrained Optimisation

2011-12-19 Thread Russell2
Dear All

I have a constrained optimisation problem, I want to maximise the following
function

t(weights) %*% CovarianceMatrix %*% weights

for the weights, 

subject to  constraints on each element within the weights & the weights
vector  summing to 1.

i.e.

weights = (x1, x2, x3), where x1 is within some given range (a +b, a - b).

I have tried to do this using the optim function in R, setting the minimum &
maximum tolerances as lower and upper bounds but once I set the constraint
of the weights summing to 1 in the function, it creates an error.

I have also looked at the constrOptim & solveQP packages and have similar
problems with constraint settings. 

Thanks in advance.

--
View this message in context: 
http://r.789695.n4.nabble.com/Constrained-Optimisation-tp4215341p4215341.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Summing x1 to x6

2011-12-19 Thread David Winsemius


On Dec 19, 2011, at 11:00 AM, Pablo wrote:


Suppose I have the following:

x1<-as.vector(rnorm(10))
x2<-as.vector(rnorm(10))
x3<-as.vector(rnorm(10))
x4<-as.vector(rnorm(10))
x5<-as.vector(rnorm(10))
x6<-as.vector(rnorm(10))
x7<-as.vector(rnorm(10))
x8<-as.vector(rnorm(10))
x9<-as.vector(rnorm(10))
x10<-as.vector(rnorm(10))

I would like the mean of x1 to x6 for each vector position.


> apply( data.frame(x1,x2,x3,x4,x5,x6), 1, mean)
 [1] -0.39321854 -0.92940746 -0.04843466 -0.27764111  0.44058599  0.73512759
 [7] -0.22232332 -0.89535287 -0.33430655  0.66217526
> apply( matrix(c(x1,x2,x3,x4,x5,x6),ncol=6), 1, mean)
 [1] -0.39321854 -0.92940746 -0.04843466 -0.27764111  0.44058599  0.73512759
 [7] -0.22232332 -0.89535287 -0.33430655  0.66217526

After seeing Jim Holtman's suggestion, it should be clear that  
rowMeans would work even better than these `apply` solutions when  
used with this "vertical" orientation of the vectors.




I would do
something else with x7-x10.  These vectors are not currently in a  
dataframe.

I tried to be clever by trying get(paste(paste("x", 1:6, sep=""),
collapse="+")) but it didn't work.  Any thoughts are greatly  
appreciated,




--

David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to schedule R to run automatically

2011-12-19 Thread jim holtman
That is what the task scheduler is for.  Just create a ".bat" file
that invokes R and schedule it every two hours.  On Win XP it is
Control Panel/Scheduled Tasks.

On Mon, Dec 19, 2011 at 11:10 AM, Kenneth Zhang  wrote:
> I'd like to run an R code automatically on Windows 7, say once every 2
> hours. Could anyone tell me how to manage it?
>
> Thanks in advance!
>
> Best regards,
> Ken
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



-- 
Jim Holtman
Data Munger Guru

What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Summing x1 to x6

2011-12-19 Thread jim holtman
Not sure what you mean by mean of each position, so here are two
possibilities: rowMeans and colMeans:

> x1<-as.vector(rnorm(10))
> x2<-as.vector(rnorm(10))
> x3<-as.vector(rnorm(10))
> x4<-as.vector(rnorm(10))
> x5<-as.vector(rnorm(10))
> x6<-as.vector(rnorm(10))
> x7<-as.vector(rnorm(10))
> x8<-as.vector(rnorm(10))
> x9<-as.vector(rnorm(10))
> x10<-as.vector(rnorm(10))
>
> # create matrix
> x1.6 <- rbind(x1, x2, x3, x4, x5, x6)
> rowMeans(x1.6)
        x1         x2         x3         x4         x5         x6 
-0.7845288  0.3254140 -0.4430038 -0.1533713  0.6074253 -0.1560222 
> colMeans(x1.6)
 [1]  0.23480212 -0.08486182 -0.28657007 -0.59151369  0.08995106  0.55138690 -0.22388657
 [8] -0.36354109 -0.2051 -0.13146655
>


On Mon, Dec 19, 2011 at 11:00 AM, Pablo  wrote:
> Suppose I have the following:
>
> x1<-as.vector(rnorm(10))
> x2<-as.vector(rnorm(10))
> x3<-as.vector(rnorm(10))
> x4<-as.vector(rnorm(10))
> x5<-as.vector(rnorm(10))
> x6<-as.vector(rnorm(10))
> x7<-as.vector(rnorm(10))
> x8<-as.vector(rnorm(10))
> x9<-as.vector(rnorm(10))
> x10<-as.vector(rnorm(10))
>
> I would like the mean of x1 to x6 for each vector position.  I would do
> something else with x7-x10.  These vectors are not currently in a dataframe.
> I tried to be clever by trying get(paste(paste("x", 1:6, sep=""),
> collapse="+")) but it didn't work.  Any thoughts are greatly appreciated,
>
> Pablo
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Summing-x1-to-x6-tp4214604p4214604.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



-- 
Jim Holtman
Data Munger Guru

What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to schedule R to run automatically

2011-12-19 Thread Kenneth Zhang
I'd like to run an R code automatically on Windows 7, say once every 2
hours. Could anyone tell me how to manage it?

Thanks in advance!

Best regards,
Ken

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rendering or raytracing?

2011-12-19 Thread skan
Hi

Then OK, I mean raytracing directly with some R package.
Or maybe some optics package
I've seen there exist an optics package called planar but it's only for
reflection and transmission at planar interfaces.

--
View this message in context: 
http://r.789695.n4.nabble.com/rendering-or-raytracing-tp4213944p4214608.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Summing x1 to x6

2011-12-19 Thread Pablo
Suppose I have the following:

x1<-as.vector(rnorm(10))
x2<-as.vector(rnorm(10))
x3<-as.vector(rnorm(10))
x4<-as.vector(rnorm(10))
x5<-as.vector(rnorm(10))
x6<-as.vector(rnorm(10))
x7<-as.vector(rnorm(10))
x8<-as.vector(rnorm(10))
x9<-as.vector(rnorm(10))
x10<-as.vector(rnorm(10))

I would like the mean of x1 to x6 for each vector position.  I would do
something else with x7-x10.  These vectors are not currently in a dataframe. 
I tried to be clever by trying get(paste(paste("x", 1:6, sep=""),
collapse="+")) but it didn't work.  Any thoughts are greatly appreciated,

Pablo

--
View this message in context: 
http://r.789695.n4.nabble.com/Summing-x1-to-x6-tp4214604p4214604.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Training parameters for a HMM

2011-12-19 Thread Ingmar Visser
Claus,

Given that you have observed the hidden states they are not really hidden
anymore ... so maybe an HMM is not what you are looking for.

Assuming that you have such super observation powers, something like:

by(obs,hid,table)

will get you the state-dependent counts of the categories of y, and
something like:

table(hid[-100],hid[-1])

gives you the transition counts (assuming hid has length 100)
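
A small self-contained sketch (made-up data, a single sequence of length 100):

set.seed(1)
hid <- factor(sample(paste("X", 1:3, sep = ""), 100, replace = TRUE))
obs <- factor(sample(paste("y", 1:4, sep = ""), 100, replace = TRUE))
emis  <- prop.table(table(hid, obs), 1)                    # state-dependent emission estimates
trans <- prop.table(table(hid[-length(hid)], hid[-1]), 1)  # transition probability estimates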

hth, Ingmar

On Mon, Dec 19, 2011 at 7:25 PM, Claus O'Rourke wrote:

> Hi,
> I'm a newbie to the world of HMMs and HMMs in R. I've had a look at
> the hmm package and the RHmm package but I couldn't see anything
> straightforward on how a labelled sequential dataset with observed
> values and underlying states might be used to construct and train a
> HMM based on that data and no pre-computed values for the transition,
> emission or initial state distributions. Does anyone have any excerpts
> of code that could get me moving in the right direction?
>
> To put it another way, let's say that I have the simple HMM topology
> that is shown here:
> http://en.wikipedia.org/wiki/File:HiddenMarkovModel.png
> And I have somehow collected datasets with observations and labelled
> hidden states:
>
> Sequence 1:
> Obs Hid
>  y1   X1
>  y2   X2
>  y2   X2
>  y4   X1
>  ...  ...
>  y3   X3
>
> ...
>
> Sequence N:
> Obs Hid
>  y2   X1
>  y2   X2
>  y2   X1
>  y4   X1
>  ...  ...
>  y4   X1
>
> I'm assuming categorical variables for y and x.
>
> I know I really am starting from from scratch here, so I'd be very
> grateful if anyone could point out to me how I could go about
> automatically constructing and parameterizing a HMM for data like this.
>
> Thanks for your patience.
>
> Claus.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Training parameters for a HMM

2011-12-19 Thread Claus O'Rourke
Hi,
I'm a newbie to the world of HMMs and HMMs in R. I've had a look at
the hmm package and the RHmm package but I couldn't see anything
straightforward on how a labelled sequential dataset with observed
values and underlying states might be used to construct and train a
HMM based on that data and no pre-computed values for the transition,
emission or initial state distributions. Does anyone have any excerpts
of code that could get me moving in the right direction?

To put it another way, let's say that I have the simple HMM topology
that is shown here:
http://en.wikipedia.org/wiki/File:HiddenMarkovModel.png
And I have somehow collected datasets with observations and labelled
hidden states:

Sequence 1:
Obs Hid
  y1   X1
  y2   X2
  y2   X2
  y4   X1
  ...  ...
  y3   X3

...

Sequence N:
Obs Hid
  y2   X1
  y2   X2
  y2   X1
  y4   X1
  ...  ...
  y4   X1

I'm assuming categorical variables for y and x.

I know I really am starting from from scratch here, so I'd be very
grateful if anyone could point out to me how I could go about
automatically constructing and parameterizing a HMM for data like this.

Thanks for your patience.

Claus.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] problem with tick graph

2011-12-19 Thread Jean V Adams
n.via...@libero.itg wrote on 12/16/2011 05:02:24 AM:

> Dear all,
> I'm having problems with the ticks on my graph. I'm combining lines and a
> barplot. I'm using the function axis combined with the function pretty to
> get nicer ticks, but my extreme ticks (for example, 300 as my max tick and
> -100 as my min tick) are not printed on my graph.
> So I would like the left axis to run from 0 to 100 (with 0 and 100 printed
> on my graph) and the right axis to run from -100 to 300 (with -100 and 300
> printed on my graph).
> Does someone know how to get this?
> 
> 
> The code and data are:
>
> grafico <- {
>   pdf(file=file, paper="special", width=30, height=20)
>   par(bg="white", font=3, font.axis=3, las=1, cex.axis=2.2, mar=c(8,6,8,8)+8)
>   barplot(Imp$TassoV, width=10, space=c(0,0.1), legend.text=FALSE, beside=TRUE,
>           border="grey40", main="", col="midnightblue", cex.main=2.4,
>           axes=FALSE, axisnames=FALSE)
>   lab <- as.character(pretty(Imp$TassoV))
>   axis(side=4, at=pretty(Imp$TassoV), labels=lab, cex.axis=2.4)
>   par(new=T)
>   chart.TimeSeries(Imp$ValueA, type="l", lwd=8, main="", ylab="", xlab="",
>                    date.format="%y", col="red3", major.ticks="years",
>                    minor.ticks=TRUE, grid.color="grey50", grid.lty="dotted",
>                    cex.axis=2.4, yaxis=FALSE)
>   axis(2, at=c(pretty(Imp$ValueA)))
>   legend("topleft", c("Livelli mln $ (sc. sx)", "Tasso di var. (sc.dx)"),
>          col=c("red3", "midnightblue"), bty="n", lwd=4, cex=2.4)
> }
> dev.off()
> 
>               ValueA   ValueA_L      TassoV
> 1995-12-16  88.06557         NA          NA
> 1996-12-16  88.34237   88.06557   0.3143044
> 1997-12-16  57.86447   88.34237 -34.4997529
> 1998-12-16  50.19389   57.86447 -13.2561039
> 1999-12-16  23.06846   50.19389 -54.0412943
> 2000-12-16  45.79965   23.06846  98.5379304
> 2001-12-16  22.35262   45.79965 -51.1947722
> 2002-12-16  66.89223   22.35262 199.2589371
> 2003-12-16  89.24867   66.89223  33.4215852
> 2004-12-16  77.16459   89.24867 -13.5397854
> 2005-12-16  51.23656   77.16459 -33.6009462
> 2006-12-16  49.51073   51.23656  -3.3683450
> 2007-12-16  90.39337   49.51073  82.5732837
> 2008-12-16  38.84831   90.39337 -57.0230554
> 2009-12-16  14.70859   38.84831 -62.1384086
> 2010-12-16  55.23819   14.70859 275.5505995
> 
> 
> Thanks for your attention


You did well to include example data and code in your query.  You would 
likely have received a more rapid reply if you had made your example 
simpler, leaving out details that are not relevant to your question. Also, 
it is very helpful to the readers of this list if you share your data 
using the output of dput(), so that it is easy to read in, and each column 
is of the appropriate class.

There is no need for you to specify tick marks using the pretty() 
function, as this is what most R graphing functions use by default.  In 
order to extend the limits on the yaxis, use the ylim= argument in the 
graphing functions.  A much simplified version of your graph is created by 
the code below.

Jean


Imp <- structure(list(ValueA = structure(c(88.06557, 88.34237, 57.86447, 
50.19389, 23.06846, 45.79965, 22.35262, 66.89223, 89.24867, 77.16459, 
51.23656, 49.51073, 90.39337, 38.84831, 14.70859, 55.23819), .Dim = c(16L, 

1L), index = structure(c(819093600, 850716000, 882252000, 913788000, 
945324000, 976946400, 1008482400, 1040018400, 1071554400, 1103176800, 
1134712800, 1166248800, 1197784800, 1229407200, 1260943200, 1292479200
), tzone = "", tclass = "Date"), class = c("xts", "zoo"), .indexCLASS = 
"Date", .indexTZ = ""), 
ValueA_L = c(NA, 88.06557, 88.34237, 57.86447, 50.19389, 
23.06846, 45.79965, 22.35262, 66.89223, 89.24867, 77.16459, 
51.23656, 49.51073, 90.39337, 38.84831, 14.70859), TassoV = c(NA, 
0.3143044, -34.4997529, -13.2561039, -54.0412943, 98.5379304, 
-51.1947722, 199.2589371, 33.4215852, -13.5397854, -33.6009462, 
-3.368345, 82.5732837, -57.0230554, -62.1384086, 275.5505995
)), .Names = c("ValueA", "ValueA_L", "TassoV"), row.names = 
c("1995-12-16", 
"1996-12-16", "1997-12-16", "1998-12-16", "1999-12-16", "2000-12-16", 
"2001-12-16", "2002-12-16", "2003-12-16", "2004-12-16", "2005-12-16", 
"2006-12-16", "2007-12-16", "2008-12-16", "2009-12-16", "2010-12-16"
), class = "data.frame")

par(las=1, mar=c(5, 4, 4, 4)+0.1)
barplot(Imp$TassoV, ylim=c(-100, 300), axes=FALSE)
axis(4)
par(new=T, yaxs="i") 
chart.TimeSeries(Imp$ValueA, date.format="%y", ylim=c(0, 100), 
major.ticks="years", xlab="")

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] write.xls dont find the object in function

2011-12-19 Thread Uwe Ligges



On 19.12.2011 13:30, Ronaldo Reis Júnior wrote:

Em 18-12-2011 18:55, Rolf Turner escreveu:

On 19/12/11 04:29, Uwe Ligges wrote:



On 18.12.2011 12:58, Ronaldo Reis Júnior wrote:




Why does write.xls not find the object a inside a function?


Because at least that part of the function is poorly written.


Surely this should be a fortune!

cheers,

Rolf


Hi all,

the problem in write.xls is in this little function:

df.tobewritten = as.data.frame(get(s[i]))

the get function is looking for objects only in the user workspace; when an
object is inside a function, get doesn't find it. In my example:

test <- function(a){
a <- data.frame(A=c(1,2),B=c(10,11))
write.xls(a,file="a.xls")
}

how can I make the object a visible in the user workspace environment so that
write.xls finds it?


You could assign() it to that environment, but working around the 
write.xls() shortcoming by another hack is maybe not the best idea.
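
For completeness, a sketch of that assign() workaround:

test <- function(a){
  a <- data.frame(A=c(1,2), B=c(10,11))
  assign("a", a, envir = .GlobalEnv)   # make 'a' visible where write.xls's get() looks
  write.xls(a, file="a.xls")
}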


Best wishes,
Uwe




Thanks
Ronaldo



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] summary vs anova

2011-12-19 Thread peter dalgaard

On Dec 19, 2011, at 15:09 , Brent Pedersen wrote:

> Hi, I'm sure this is simple, but I haven't been able to find this in TFM,

It's not _that_ simple. You likely need TFtextbook rather than TFM. Most (but 
not all) will go into at least some detail of coding categorical variables 
using dummy variables. 

> [snip]
> 
> I understand (hopefully correctly) that anova() tests by adding each covariate
> to the model in order it is specified in the formula.
> 

Yes. Note, however, that categorical variables cause more than one dummy 
covariate to be added.

> More specific questions are:
> 
> 1) How do the p-values for smokes* in summary(model) relate to the
>   Pr(>F) for smokes in anova

If the last Pr(>F) corresponds to a single-df term, then F=t^2 for that term 
(only), and the p value is the same. If the last Pr(>F)  is for a k-df term, it 
corresponds to simultaneously testing that the corresponding k regression 
coefficients are _all_ zero;  the joint p value can not in general be 
calculated from tests on individual coefficients. However, they at least test 
related hypotheses.  

p values higher up the list in anova() test for hypotheses in models obtained 
after removal of subsequent factors, so are not in general comparable to the t 
tests in summary().

If you use drop1(, test="F") instead of anova(), then you avoid the 
sequential aspect and all 1-df tests correspond to t-tests in the summary table.
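
For example (sketch, with the model from the quoted post):

drop1(model, test = "F")   # each term tested by dropping it from the full model
anova(model)               # sequential tests, order-dependent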

> 2) what do the p-values for each of those smokes* mean exactly?

In the default parametrization, they correspond to comparisons between the 
stated level and the reference (first) level of the factor. In different 
contrast parametrizations, the interpretation will differ; the only complete 
advice is that you need to understand the relation between the factor levels 
and the rows of the design matrix.

> 3) the summary above shows the values for diseasestate1 and diseasestate2
>   how can I get the p-value for diseasecontrol? (or, e.g. genderfemale)

You can't. It would correspond to a comparison of that level with itself.

> 
> thanks.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Peter Dalgaard, Professor
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Calculating the probability of an event at time "t" from a Cox model fit

2011-12-19 Thread David Winsemius


On Dec 19, 2011, at 3:34 AM, aajit75 wrote:


Dear R-users,

I would like to determine the probability of an event at a specific time
using a cox model fit. On the development sample data I am able to get the
probability of an event at time point (t).
I need the probability score of an event at a specific time, using a scoring
dataset which will have only covariates and not the response variables.


Here is the sample code:

n = 1000
beta1 = 2; beta2 = -1;
lambdaT = .02 # baseline hazard
lambdaC = .4  # hazard of censoring
x1 = rnorm(n,0)
x2 = rnorm(n,0)

# true event time
T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2))
C = rweibull(n, shape=1, scale=lambdaC)   #censoring time
time = pmin(T,C)  #observed time is min of censored and true
event = time==T   # set to 1 if event is observed
dataphr=data.frame(time,event,x1,x2)

library(survival)
fit_coxph <- coxph(Surv(time, event)~ x1 + x2 , method="breslow")

library(peperr)
predictProb.coxph(fit_coxph, Surv(dataphr$time, dataphr$event),  
dataphr,

0.003)

# Using predictProb.coxph function, probability of event at time (t)  
is

estimated for cox fit models,


No, it should not be the probability at time T (which would be  
infinitesimally small at any time point), but rather the probability  
prior to time T. I am puzzled why one would not use predict.coxph with  
type="expected".


?predict.coxph
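
A hedged sketch of that route, reusing the simulated fit from the quoted code
(the newdata values are made up; type = "expected" needs the follow-up time,
i.e. the response, in newdata):

H <- predict(fit_coxph, type = "expected",
             newdata = data.frame(time = 5, event = 1, x1 = 0.5, x2 = -0.2))
1 - exp(-H)   # approximate probability of an event by the given follow-up time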


I want to estimate this probability on scoring
dataset score_data as below with covariate x1 and x2.

Is it possible/ is there any way to get these probabilities? since in
predictProb.coxph function it requires response, which is not  
preseent on

scoring sample.


Raising what would seem to be the obvious question: Why didn't you  
make one? Or use the derivation Surv object?




n = 1000
set.seed(1)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
score_data <- data.frame(x1,x2)


> n = 100
> set.seed(1)
> x1 = rnorm(n,0)
> x2 = rnorm(n,0)
> score_data <- data.frame(x1,x2)
> s.scr <- survival::Surv(rep(2, n),rep(1, n) )
> predictProb.coxph(fit_coxph, s.scr, score_data, 0.003)
Error in model.frame.default(formula = Surv(time, event) ~ x1 + x2) :
  variable lengths differ (found for 'x1')

After getting that model.frame error with the 1 element test set:  
"variable lengths differ (found for 'x1')",  I changed it to the same  
length as in the derivation set and ran:


predictProb.coxph(fit_coxph, Surv(time, event), score_data, 0.003)
[,1]
   [1,] 9.977413e-01
   [2,] 9.879081e-01
   [3,] 9.893564e-01
   [4,] 5.857123e-01
   [5,] 9.550606e-01
   [6,] 9.761398e-01
   [7,] 9.699297e-01
   snipped

So apparently that function will accept a dataframe even though its  
help page says it should be a matrix, and it will run if the same  
number of records are present as in the derivation set. This next  
version also worked and gave identical results, so it appears the Surv  
object is really just a place holder.


 s.scr <- survival::Surv(rep(2, n),rep(0, n) )
> predictProb.coxph(fit_coxph, s.scr, score_data,  0.003)
[,1]
   [1,] 9.977413e-01
   [2,] 9.879081e-01
   [3,] 9.893564e-01
   [4,] 5.857123e-01
   [5,] 9.550606e-01
   snipped

Since I am not a regular user I cannot really answer the question  
about whether a Surv() object and a dataframe of different dimensions  
than the fit object should be valid input to that function.


(Copied the peperr author.)




David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] summary vs anova

2011-12-19 Thread David Winsemius


On Dec 19, 2011, at 9:09 AM, Brent Pedersen wrote:

Hi, I'm sure this is simple, but I haven't been able to find this in  
TFM,

say I have some data in R like this (pasted here:
http://pastebin.com/raw.php?i=sjS9Zkup):


One of the reasons this is not in TFM is that these are questions that  
should be covered in any first course on regression.





head(df)

   gender age smokes disease    Y
 1 female  65   ever control 0.18
 2 female  77  never control 0.12
 3   male  40         state1 0.11
 4 female  67   ever control 0.20
 5   male  63   ever  state1 0.16
 6 female  26  never  state1 0.13

where unique(disease) == c("control", "state1", "state2")
and unique(smokes) == c("ever", "never", "", "current")

I then fit a linear model like:


model = lm(Y ~ smokes + disease + age + gender, data=df)


And I want to understand the difference between:


print(summary(model))

   Call:
   lm(formula = Y ~ smokes + disease + age + gender, data = df)

   Residuals:
Min   1Q   Median   3Q  Max
   -0.22311 -0.08108 -0.03483  0.05604  0.46507

   Coefficients:
   Estimate Std. Error t value Pr(>|t|)
   (Intercept)0.1206825  0.0521368   2.315   0.0211 *
   smokescurrent  0.0150641  0.066   0.339   0.7348
   smokesever 0.0498764  0.0326254   1.529   0.1271
   smokesnever0.0394109  0.0349142   1.129   0.2597
   diseasestate1  0.0018739  0.0176817   0.106   0.9157
   diseasestate2 -0.0009858  0.0178651  -0.055   0.9560
   age0.0002841  0.0006290   0.452   0.6518
   gendermale 0.1164889  0.0128748   9.048   <2e-16 ***
   ---
   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

   Residual standard error: 0.1257 on 397 degrees of freedom
   Multiple R-squared: 0.1933, Adjusted R-squared: 0.1791
   F-statistic: 13.59 on 7 and 397 DF,  p-value: 8.975e-16

and:


anova(model)

 Analysis of Variance Table

 Response: Y
Df Sum Sq Mean Sq F value  Pr(>F)
 smokes  3 0.1536 0.05120  3.2397 0.02215 *
 disease 2 0.0129 0.00647  0.4096 0.66420
 age 1 0.0431 0.04310  2.7270 0.09946 .
 gender  1 1.2937 1.29373 81.8634 < 2e-16 ***
 Residuals 397 6.2740 0.01580
 ---
 Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

I understand (hopefully correctly) that anova() tests by adding each  
covariate

to the model in order it is specified in the formula.




More specific questions are:


All of which are general statistics questions, which you are asked to  
post in forums or lists that expect such questions ... and not to r-help.




1) How do the p-values for smokes* in summary(model) relate to the
  Pr(>F) for smokes in anova
2) what do the p-values for each of those smokes* mean exactly?
3) the summary above shows the values for diseasestate1 and  
diseasestate2
  how can I get the p-value for diseasecontrol? (or, e.g.  
genderfemale)



^^^
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html

---

David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] block averaging data frames

2011-12-19 Thread Mathew Brown

Almost, except that
tapply(x[4:8], x$interval, colMeans)
works, but with a larger data frame I have problems; even
tapply(x[4:7], x$interval, colMeans)
gets the "arguments must have the same length" error. But they do have the
same length! Any ideas?
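
A hedged guess at the cause: length() of a data frame is its number of columns,
and tapply() requires length(X) == length(INDEX), so the 5-row / 5-column
example only matched by coincidence. Splitting the rows instead avoids this:

newData <- do.call(rbind, lapply(split(x[4:8], x$interval), colMeans))
## or: aggregate(x[4:8], by = list(interval = x$interval), FUN = mean)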


Thanks for your help Jim


On 12/19/2011 2:00 PM, jim holtman wrote:

Does this work for you:


x<- read.table(text = " date time  Voltage LwTempDownelling LwDownwelling 
LwDownwelling_min LwDownwelling_max LwTempUpwelling

+ 1 2011-11-01 00:00:00 2.732447 17.30 30.0 14.0 39.5 17.83
+ 2 2011-11-01 00:10:00 2.731534 17.46 15.3 11.1 24.6 17.95
+ 3 2011-11-01 00:20:00 2.731368 17.43 28.7 24.6 30.7 17.93
+ 4 2011-11-01 00:30:00 2.730703 17.36 40.4 29.8 43.5 17.86
+ 5 2011-11-01 00:40:00 2.729567 17.26 41.6 40.5 42.6 17.76"
+ , header = TRUE
+ )

# convert the time
x$timestamp<- as.POSIXct(paste(x$date, x$time))
# calculate the start of time ranges
start<- trunc(min(x$timestamp), units = 'hour')
# create breakpoints at 30 minutes
breaks<- seq(from = start

+ , to = max(x$timestamp) + 3600  # make sure you have the
last range
+ , by = '30 min'
+ )

# slice up the data by adding index
x$interval<- findInterval(x$timestamp, breaks)

# determine colMeans
newData<- do.call(rbind, tapply(x[4:8], x$interval, colMeans))
newData<- as.data.frame(newData)

# add the time back
newData$timestamp<- breaks[as.integer(rownames(newData))]
newData

  LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling           timestamp
1         17.39667          24.7          16.56667             31.60        17.90333 2011-11-01 00:00:00
2         17.31000          41.0          35.15000             43.05        17.81000 2011-11-01 00:30:00




On Mon, Dec 19, 2011 at 4:28 AM, Mathew Brown
  wrote:


Hi there,

This seems like it should be simple. I have a data frame of climate data
sampled every 10 min. I want to average the entire data frame into 30
min values (i.e., one value for each half hour).  Functions like
running.mean give me a moving average but I want to reduce the size of
the entire frame.. Any ideas how? Cheers!

Example of my data

  timestamp           Voltage LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling
1 2011-11-01 00:00:00 2.732447 17.30 30.0 14.0 39.5 17.83
2 2011-11-01 00:10:00 2.731534 17.46 15.3 11.1 24.6 17.95
3 2011-11-01 00:20:00 2.731368 17.43 28.7 24.6 30.7 17.93
4 2011-11-01 00:30:00 2.730703 17.36 40.4 29.8 43.5 17.86
5 2011-11-01 00:40:00 2.729567 17.26 41.6 40.5 42.6 17.76
6 2011-11-01 00:50:00 2.728976 17.16 39.7


-M.B


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] block averaging data frames

2011-12-19 Thread Mathew Brown
Hmm, almost but my real data frame is much larger (128 columns).
The problem is with this line
  newData <- do.call(rbind, tapply(x[4:8], x$interval, colMeans))

when I do

tapply(x[4:8], x$interval, colMeans)

all works but when I do


tapply(x[4:7], x$interval, colMeans)


I get

Error in tapply(x[4:7], x$interval, colMeans) :
   arguments must have same length


and x[2:126] gets the same error...

Not sure why it only works with the 4 columns? Any thoughts?
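
A possible explanation and workaround (a sketch only, untested on the real
128-column data): tapply() expects an atomic vector whose length matches the
grouping vector, and for a data frame length() is the number of columns, so
tapply(x[4:8], x$interval, colMeans) only "worked" because the toy example
happened to have five rows and five selected columns. Splitting the data
frame by rows avoids that check:

# split() divides the data frame row-wise by interval, then colMeans()
# is applied to each piece; columns 4:8 are just an example selection
newData <- do.call(rbind, lapply(split(x[4:8], x$interval), colMeans))
newData <- as.data.frame(newData)

# an equivalent route with aggregate()
newData2 <- aggregate(x[4:8], by = list(interval = x$interval), FUN = mean)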

Thanks for your help Jim.

M


On 12/19/2011 2:00 PM, jim holtman wrote:
> Does this work for you:
>
>> x<- read.table(text = " date time  Voltage LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling
> + 1 2011-11-01 00:00:00 2.732447 17.30 30.0 14.0 39.5 17.83
> + 2 2011-11-01 00:10:00 2.731534 17.46 15.3 11.1 24.6 17.95
> + 3 2011-11-01 00:20:00 2.731368 17.43 28.7 24.6 30.7 17.93
> + 4 2011-11-01 00:30:00 2.730703 17.36 40.4 29.8 43.5 17.86
> + 5 2011-11-01 00:40:00 2.729567 17.26 41.6 40.5 42.6 17.76"
> + , header = TRUE
> + )
>> # convert the time
>> x$timestamp<- as.POSIXct(paste(x$date, x$time))
>> # calculate the start of time ranges
>> start<- trunc(min(x$timestamp), units = 'hour')
>> # create breakpoints at 30 minutes
>> breaks<- seq(from = start
> + , to = max(x$timestamp) + 3600  # make sure you have the last range
> + , by = '30 min'
> + )
>> # slice up the data by adding index
>> x$interval<- findInterval(x$timestamp, breaks)
>>
>> # determine colMeans
>> newData<- do.call(rbind, tapply(x[4:8], x$interval, colMeans))
>> newData<- as.data.frame(newData)
>>
>> # add the time back
>> newData$timestamp<- breaks[as.integer(rownames(newData))]
>> newData
>   LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling           timestamp
> 1         17.39667          24.7          16.56667             31.60        17.90333 2011-11-01 00:00:00
> 2         17.31000          41.0          35.15000             43.05        17.81000 2011-11-01 00:30:00
>>
>
> On Mon, Dec 19, 2011 at 4:28 AM, Mathew Brown
>   wrote:
>>
>> Hi there,
>>
>> This seems like it should be simple. I have a data frame of climate data
>> sampled every 10 min. I want to average the entire data frame into 30
>> min values (i.e., one value for each half hour).  Functions like
>> running.mean give me a moving average but I want to reduce the size of
>> the entire frame.. Any ideas how? Cheers!
>>
>> Example of my data
>>
>>             timestamp  Voltage LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling
>> 1 2011-11-01 00:00:00 2.732447            17.30          30.0              14.0              39.5           17.83
>> 2 2011-11-01 00:10:00 2.731534            17.46          15.3              11.1              24.6           17.95
>> 3 2011-11-01 00:20:00 2.731368            17.43          28.7              24.6              30.7           17.93
>> 4 2011-11-01 00:30:00 2.730703            17.36          40.4              29.8              43.5           17.86
>> 5 2011-11-01 00:40:00 2.729567            17.26          41.6              40.5              42.6           17.76
>> 6 2011-11-01 00:50:00 2.728976            17.16          39.7
>>
>>
>> -M.B
>>
>>
>> [[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] summary vs anova

2011-12-19 Thread Brent Pedersen
Hi, I'm sure this is simple, but I haven't been able to find this in TFM,
say I have some data in R like this (pasted here:
http://pastebin.com/raw.php?i=sjS9Zkup):

  > head(df)
    gender age smokes disease    Y
  1 female  65   ever control 0.18
  2 female  77  never control 0.12
  3   male  40         state1 0.11
  4 female  67   ever control 0.20
  5   male  63   ever  state1 0.16
  6 female  26  never  state1 0.13

where unique(disease) == c("control", "state1", "state2")
and unique(smokes) == c("ever", "never", "", "current")

I then fit a linear model like:

> model = lm(Y ~ smokes + disease + age + gender, data=df)

And I want to understand the difference between:

> print(summary(model))
Call:
lm(formula = Y ~ smokes + disease + age + gender, data = df)

Residuals:
 Min   1Q   Median   3Q  Max
-0.22311 -0.08108 -0.03483  0.05604  0.46507

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)0.1206825  0.0521368   2.315   0.0211 *
smokescurrent  0.0150641  0.066   0.339   0.7348
smokesever 0.0498764  0.0326254   1.529   0.1271
smokesnever0.0394109  0.0349142   1.129   0.2597
diseasestate1  0.0018739  0.0176817   0.106   0.9157
diseasestate2 -0.0009858  0.0178651  -0.055   0.9560
age0.0002841  0.0006290   0.452   0.6518
gendermale 0.1164889  0.0128748   9.048   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1257 on 397 degrees of freedom
Multiple R-squared: 0.1933, Adjusted R-squared: 0.1791
F-statistic: 13.59 on 7 and 397 DF,  p-value: 8.975e-16


and:

  > anova(model)
  Analysis of Variance Table

  Response: Y
 Df Sum Sq Mean Sq F value  Pr(>F)
  smokes  3 0.1536 0.05120  3.2397 0.02215 *
  disease 2 0.0129 0.00647  0.4096 0.66420
  age 1 0.0431 0.04310  2.7270 0.09946 .
  gender  1 1.2937 1.29373 81.8634 < 2e-16 ***
  Residuals 397 6.2740 0.01580
  ---
  Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

I understand (hopefully correctly) that anova() tests by adding each covariate
to the model in order it is specified in the formula.

More specific questions are:

1) How do the p-values for smokes* in summary(model) relate to the
   Pr(>F) for smokes in anova()?
2) What do the p-values for each of those smokes* coefficients mean exactly?
3) The summary above shows the values for diseasestate1 and diseasestate2;
   how can I get the p-value for diseasecontrol (or, e.g., genderfemale)?

thanks.
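
A minimal sketch of one way to explore these questions (it assumes the same
df and model objects as above, and that smokes and disease are factors):

# anova() adds terms sequentially (Type I sums of squares), while the t-tests
# in summary() are for individual dummy coefficients against the baseline
# level; a marginal F-test for a whole term, independent of the order of the
# formula, can be obtained with drop1()
drop1(model, test = "F")

# there is no coefficient for the baseline level itself (here "control" and
# "female"); to see another level used as the reference, relevel the factor
df$disease <- relevel(df$disease, ref = "state1")
model2 <- lm(Y ~ smokes + disease + age + gender, data = df)
summary(model2)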

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rendering or raytracing?

2011-12-19 Thread Duncan Murdoch

On 19/12/2011 7:54 AM, skan wrote:

Hello

Is there any package for rendering or raytracing?


The rgl package uses OpenGL for rendering solid objects, but it doesn't 
do raytracing.  A nice addition would be to output a scene in a format 
that some external ray-tracer (e.g. POV-ray) could render.


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Trouble converting hourly data into daily data

2011-12-19 Thread Jean V Adams
Sean Baumgarten wrote on 12/14/2011 06:38:08 PM:

> Hello,
> 
> I have a data frame with hourly or sub-hourly weather records that span
> several years, and from that data frame I'm trying to select only the
> records taken closest to noon for each day. Here's what I've done so far:
> 
> #Add a column to the data frame showing the difference between noon and the
> observation time (I converted time to a 0-1 scale so 0.5 represents noon):
> data$Diff_from_noon <- abs(0.5-data$Time)
> 
> #Find the minimum value of "Diff_from_noon" for each Date:
> aggregated <- aggregate(Diff_from_noon ~ Date, data, FUN=min)
> 
> 
> The problem is that the "aggregated" data frame only has two columns: Date
> and Diff_from_noon. I can't figure out how to get the columns with the
> actual weather variables to carry over from the original data frame.
> 
> Any suggestions you have would be much appreciated.
> 
> Thanks,
> Sean


You don't provide any example data, so I will use data from R datasets, 
airquality.  After using the aggregate() function to find the minimum Day 
for each Month, merge the resulting data frame with the original data 
frame to see all the columns corresponding to the selected minimums.

> aggregated <- aggregate(Day ~ Month, airquality, FUN=min) 
> aggregated
  Month Day
1 5   1
2 6   1
3 7   1
4 8   1
5 9   1
> merge(aggregated, airquality)
  Month Day Ozone Solar.R Wind Temp
1 5   141 190  7.4   67
2 6   1NA 286  8.6   78
3 7   1   135 269  4.1   84
4 8   139  83  6.9   81
5 9   196 167  6.9   91

For your data, the code would look like this:
aggregated <- aggregate(Diff_from_noon ~ Date, data, FUN=min) 
merge(aggregated, data)

I recommend that you use a name other than "data" for your data frame, 
since data() is a built in R function.

Jean
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Inquiry about matrix pooling

2011-12-19 Thread Jean V Adams
Jana Makedonska wrote on 12/14/2011 07:19:23 PM:

> Dear R-users,
> 
> 
> I am a relatively new R user and I have a simple question. Is there a
> command attached to a R function, which performs matrix pooling?
> 
> 
> Thanks in advance,
> Jana
> 
> -- 
> 
> -- 
> "The world is full of mysteries. Life is one. The curious limitations of
> finite minds are another."(J.B.S. Haldane, The Causes of Evolution)
> 
> Jana Makedonska, M.Sc.
> Ph.D. candidate/ Part-time lecturer
> Department of Anthropology
> College of Arts & Sciences
> State University of New York at Albany
> 1400 Washington Avenue
> 1 Albany, NY
> Office phone: 518-442-4699


What do you mean by "matrix pooling"?  Can you give a simple example using 
small matrices showing what you have to start with and what you would like 
to end up with?

Jean
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] fUnitRoots cannot obtain p.value

2011-12-19 Thread michalseneca
Hi ,

I have an issue obtaining the p.value from urersTest(x, type = c("DF-GLS",
"P-test"), lag.max = 4, doplot = FALSE)@test$p.value

urersTest(...)@test shows the correct results, but extracting p.value this
way returns NULL, which does not seem correct.

Any help on this ?

Mike 

--
View this message in context: 
http://r.789695.n4.nabble.com/fUnitRoots-cannot-obtain-p-value-tp4213585p4213585.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Hausman test for lmer model

2011-12-19 Thread arunkumar1111
Hi

  How can I perform a Hausman test when a model has been created using lmer?


Also, could anyone please give me a link for posting to the r-sig-mixed-models mailing
list? I'm not able to find it.

Thanks in advance
   Arun



--
View this message in context: 
http://r.789695.n4.nabble.com/Hausman-test-for-lmer-model-tp4213473p4213473.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] rendering or raytracing?

2011-12-19 Thread skan
Hello

Is there any package for rendering or raytracing?

--
View this message in context: 
http://r.789695.n4.nabble.com/rendering-or-raytracing-tp4213944p4213944.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Label in oblique orientation

2011-12-19 Thread Komine
Hi, 
Thank you David, I resolved the problem with ?text. 
Komine

--
View this message in context: 
http://r.789695.n4.nabble.com/Label-in-oblique-orientation-tp4211944p4213465.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Global model more parsimonious (minor QAICc)

2011-12-19 Thread lincoln
Hi all,


I know this is a general question, not specific to any R package, but even so
I hope someone may give me his/her opinion on it.
I have a set of 20 candidate models in a binomial GLM. The global model has
52 estimable parameters and the sample size is about 1500 observations.
The global model does not seem to have problems with parameter estimability,
nor any trouble with convergence.
I have run all the models and in the end the global model comes out as the most
parsimonious one (lowest QAICc; I set c-hat = 1.15 according to the goodness
of fit of the global model).
This is the first time this has happened to me and I am somewhat confused.
I believe the data set is not that poor with respect to the number of
parameters (it should not be Freedman's paradox), and the only explanations
that seem logical to me are that the candidate set of models may not be
adequate, or that the global model simply is the best approximation to the
natural process I am trying to analyze.
I am not that experienced with modeling and I'd like to get other opinions
from more skilled people.

Cheers

--
View this message in context: 
http://r.789695.n4.nabble.com/Global-model-more-parsimonious-minor-QAICc-tp4213467p4213467.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] block averaging data frames

2011-12-19 Thread jim holtman
Does this work for you:

> x <- read.table(text = " date time  Voltage LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling
+ 1 2011-11-01 00:00:00 2.732447 17.30 30.0 14.0 39.5 17.83
+ 2 2011-11-01 00:10:00 2.731534 17.46 15.3 11.1 24.6 17.95
+ 3 2011-11-01 00:20:00 2.731368 17.43 28.7 24.6 30.7 17.93
+ 4 2011-11-01 00:30:00 2.730703 17.36 40.4 29.8 43.5 17.86
+ 5 2011-11-01 00:40:00 2.729567 17.26 41.6 40.5 42.6 17.76"
+ , header = TRUE
+ )
> # convert the time
> x$timestamp <- as.POSIXct(paste(x$date, x$time))
> # calculate the start of time ranges
> start <- trunc(min(x$timestamp), units = 'hour')
> # create breakpoints at 30 minutes
> breaks <- seq(from = start
+ , to = max(x$timestamp) + 3600  # make sure you have the last range
+ , by = '30 min'
+ )
> # slice up the data by adding index
> x$interval <- findInterval(x$timestamp, breaks)
>
> # determine colMeans
> newData <- do.call(rbind, tapply(x[4:8], x$interval, colMeans))
> newData <- as.data.frame(newData)
>
> # add the time back
> newData$timestamp <- breaks[as.integer(rownames(newData))]
> newData
  LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling           timestamp
1         17.39667          24.7          16.56667             31.60        17.90333 2011-11-01 00:00:00
2         17.31000          41.0          35.15000             43.05        17.81000 2011-11-01 00:30:00
>
>


On Mon, Dec 19, 2011 at 4:28 AM, Mathew Brown
 wrote:
>
>
> Hi there,
>
> This seems like it should be simple. I have a data frame of climate data
> sampled every 10 min. I want to average the entire data frame into 30
> min values (i.e., one value for each half hour).  Functions like
> running.mean give me a moving average but I want to reduce the size of
> the entire frame.. Any ideas how? Cheers!
>
> Example of my data
>
>             timestamp  Voltage LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling
> 1 2011-11-01 00:00:00 2.732447            17.30          30.0              14.0              39.5           17.83
> 2 2011-11-01 00:10:00 2.731534            17.46          15.3              11.1              24.6           17.95
> 3 2011-11-01 00:20:00 2.731368            17.43          28.7              24.6              30.7           17.93
> 4 2011-11-01 00:30:00 2.730703            17.36          40.4              29.8              43.5           17.86
> 5 2011-11-01 00:40:00 2.729567            17.26          41.6              40.5              42.6           17.76
> 6 2011-11-01 00:50:00 2.728976            17.16          39.7
>
>
> -M.B
>
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



-- 
Jim Holtman
Data Munger Guru

What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] write.xls dont find the object in function

2011-12-19 Thread Ronaldo Reis Júnior
Em 18-12-2011 18:55, Rolf Turner escreveu:
> On 19/12/11 04:29, Uwe Ligges wrote:
>>
>>
>> On 18.12.2011 12:58, Ronaldo Reis Júnior wrote:
>
> 
>>> Why the write.xls dont find the object a inside a function?
>>
>> Because at least that part of the function is poorly written.
>
> Surely this should be a fortune!
>
> cheers,
>
> Rolf
>
Hi all,

the problem in write.xls is in this line:

df.tobewritten = as.data.frame(get(s[i]))

the get function looks for objects only in the user workspace; when an
object is inside a function, get does not find it. In my example:

test <- function(a){
   a <- data.frame(A=c(1,2),B=c(10,11))
   write.xls(a,file="a.xls")
}

how can I make the object a visible in the user workspace environment so that
write.xls can find it?
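
One possible workaround (only a sketch, and it assumes write.xls resolves
names in the user workspace, as the get() call above suggests): copy the
object into the global environment under the name passed to write.xls, and
remove it again when the function exits.

test <- function(a){
   a <- data.frame(A = c(1, 2), B = c(10, 11))
   assign("a", a, envir = .GlobalEnv)   # make "a" visible where get() looks
                                        # (this overwrites any existing "a")
   on.exit(rm(a, envir = .GlobalEnv))   # clean up after writing
   write.xls(a, file = "a.xls")
}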

Thanks
Ronaldo

-- 
10th law - Your adviser expects your productivity to be low
   initially and to be above average after one year.

   --Herman, I. P. 2007. Following the law. NATURE, Vol 445, p. 228.

>  Prof. Ronaldo Reis Júnior
|  .''`. UNIMONTES/DBG/Lab. Ecologia Comportamental e Computacional
| : :'  : Campus Universitário Prof. Darcy Ribeiro, Vila Mauricéia
| `. `'` CP: 126, CEP: 39401-089, Montes Claros - MG - Brasil
|   `- Fone: (38) 3229-8192 |ronaldo.r...@unimontes.br
|http://www.ppgcb.unimontes.br/lecc  | LinuxUser#: 205366


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fractal image analysis

2011-12-19 Thread Sarah Goslee
Hi,

I've always used FRAGSTATS, but it looks like the SDMTools package will
do it within R.

Sarah

On Mon, Dec 19, 2011 at 4:56 AM, Petr PIKAL  wrote:
> Dear all
>
> I tried to find some packages (or programs) for image analysis and
> especially fractal dimension image analysis but so far I had not success.
> It shall be used for particle surface layer analysis from TEM images.
>
> Any suggestions?
>
> Best regards
>
> Petr


-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] calculating correlation coefficients on repeated measures

2011-12-19 Thread Sarah Goslee
Hi Keith,

You do need to reorganize your data. cor() will work on any number of variables
as long as they are columns in a matrix or data frame.

There are a lot of ways to reorganize data, of various power and complexity.
Here's one simple way:

> library(ecodist)
> WW_Sample_table <- with(WW_Sample_SI, crosstab(Individual_ID, 
> FeatherPosition, Delta13C))
> WW_Sample_table
 P1 P2 P3 P4 P5 P6 P7 P8 P9
WW_08I_01 -18.3 -18.53 -19.55 -20.18 -20.96 -21.08 -21.5 -17.42 -13.18
WW_08I_03 -22.3 -22.20 -22.18 -22.14 -21.55 -20.85 -23.1 -20.75 -20.90
> cor(WW_Sample_table)
   P1 P2 P3 P4 P5 P6 P7 P8 P9
P1  1  1  1  1  1 -1  1  1  1
P2  1  1  1  1  1 -1  1  1  1
P3  1  1  1  1  1 -1  1  1  1
P4  1  1  1  1  1 -1  1  1  1
P5  1  1  1  1  1 -1  1  1  1
P6 -1 -1 -1 -1 -1  1 -1 -1 -1
P7  1  1  1  1  1 -1  1  1  1
P8  1  1  1  1  1 -1  1  1  1
P9  1  1  1  1  1 -1  1  1  1

(With only two values, the correlation table is rather useless, but enough to
give the idea.)

However, cor.test() is what you'd need for significance testing, and it only
works on one pair of variables at a time. It's still easier to put them into
separate columns.

> WW_Sample_table <- data.frame(WW_Sample_table)
> with(WW_Sample_table, cor.test(P1, P2))
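
If you do want r and p for every pair of columns, here is a small sketch (it
assumes the WW_Sample_table built above; note that cor.test() needs at least
three complete pairs, so it will not run on the two-individual toy data):

prs <- combn(names(WW_Sample_table), 2)       # all pairs of feather positions
res <- apply(prs, 2, function(nm) {
  ct <- cor.test(WW_Sample_table[[nm[1]]], WW_Sample_table[[nm[2]]])
  c(r = unname(ct$estimate), p = ct$p.value)  # correlation and p-value
})
colnames(res) <- paste(prs[1, ], prs[2, ], sep = "-")
res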

Sarah

On Mon, Dec 19, 2011 at 1:23 AM, Keith Larson  wrote:
> Dear list,
>
> I have 9 repeated measures (measurement variable ==  'Delta13C') for
> individuals (ID variable == 'Individual_ID'. Each repeated measure is
> "indexed" (right term?) by the variable 'FeatherPosition' and given as
> c('P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8', 'P9'). I would like
> to calculate a correlation coefficient (r) and p.value for all
> measures of 'Delta13C' by individual. the function 'cor' only seems to
> work when comparing two individual measures (e.g. P1 and P2, P2 and
> P3, etc.) and only if I restructure my table. Any suggestions:
>
> In SAS with 'proc corr' I would like results that look like:
>
> Individual ID, r, p
> WW_08I_01,-0.03,0.94
> WW_08I_03,0.53,0.14
>
> Trying to get started in R!
> Keith
>
> Sample dataset:
>
> WW_Sample_SI <-
> structure(list(Individual_ID = structure(c(1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("WW_08I_01",
> "WW_08I_03"), class = "factor"), FeatherPosition = structure(c(1L,
> 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L,
> 9L), .Label = c("P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8",
> "P9"), class = "factor"), Delta13C = c(-18.3, -18.53, -19.55,
> -20.18, -20.96, -21.08, -21.5, -17.42, -13.18, -22.3, -22.2,
> -22.18, -22.14, -21.55, -20.85, -23.1, -20.75, -20.9)), .Names =
> c("Individual_ID",
> "FeatherPosition", "Delta13C"), class = "data.frame", row.names = c("1",
> "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13",
> "14", "15", "16", "17", "18"))
>

-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] None-linear equality constrained optimisation problems

2011-12-19 Thread Berend Hasselman

Michael Griffiths wrote
> 
> Dear R users,
> 
> I have a problem. I would like to solve the following:
> I have
> 
> pL = 1/(1+e^(-b0+b1))
> pM = 1/(1+e^(-b0))
> pH = 1/(1+e^(-b0-b1))
> 
> My target function is
> 
> TF= mean(pL,pM,pH) which must equal 0.5%
> 
> My non-linear constraint is
> 
> nl.Const = 1-(pM/pH), which must equal 20%, and would like the values of
> both b0 and b1 where these conditions are met.
> 
> I have searched widely for an answer, and did think that Rdonlp2 would
> suffice, only to find it no longer supported. I have solved this using
> Excel's solver function, however, because of the non-linear constraint I
> am
> having problems finding the solution in R.
> 
> Can the community suggest another method by which this might be solved?
> 

From your description I gather that this is actually a problem of solving a
system of non-linear equations.
So you could for example use package nleqslv. As follows.

library(nleqslv)

f <- function(x) {
b0 <- x[1]
b1 <- x[2]

pL <- 1/(1+exp(-b0+b1)) 
pM <- 1/(1+exp(-b0)) 
pH <- 1/(1+exp(-b0-b1)) 

CR <-  (1 - pM/pH) - 0.2
TF <-  mean(pL,pM,pH) - .005## which must equal 0.5% 

c(CR,TF)
}
 
bstart <- c(.5,.5)
nleqslv(bstart ,f, control=list(trace=0))

with output (excerpt)

$x
[1] -5.0685870  0.2247176
$fvec
[1] 1.337569e-09 8.879125e-10

bstart <- c(.85,.85)
nleqslv(bstart ,f, control=list(trace=0))

with output (excerpt)

$x
[1] 1.384720 6.678025
$fvec
[1]  1.402461e-11 -3.595026e-11

So your solution for b0 and b1  depends heavily on the starting values.

BTW: your non-linear constraint is not really non-linear.
1-pM/pH=.2 is equivalent to .8 * pH = pM which is linear.


Berend


--
View this message in context: 
http://r.789695.n4.nabble.com/None-linear-equality-constrained-optimisation-problems-tp4213442p4213731.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] fractal image analysis

2011-12-19 Thread Petr PIKAL
Dear all

I tried to find some packages (or programs) for image analysis and 
especially fractal dimension image analysis but so far I had not success. 
It shall be used for particle surface layer analysis from TEM images.

Any suggestions?

Best regards

Petr

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Mlogit missing value problem

2011-12-19 Thread K. Elo

Hi,

19.12.2011 11:12, Ville Iiskola wrote:


Error in if (abs(x - oldx)<  ftol) { : missing value where TRUE/FALSE
needed

The reason for this error is in row 563, where the choice has value 1
and Ie has a missing value. If the choice has value 0 and Ie has
missing values, there are no errors.

What should I do to make it work?


How about excluding all rows having the combination "choice=1" and 
"Ie=NA" by using the function 'subset'? Or would this destroy/skew your 
data?
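
A minimal sketch of that idea (variable names follow the original code; note
that dropping single rows can leave a choice set without one of its
alternatives, so please check whether that skews your data):

y2 <- subset(y, !(voittaja == 1 & is.na(Ie)))          # row-wise exclusion
# or drop the whole choice situation whose chosen row has a missing Ie:
y3 <- subset(y, !(id %in% unique(id[voittaja == 1 & is.na(Ie)])))

x2 <- mlogit.data(y2, choice = "voittaja", shape = "long",
                  id.var = "id", alt.var = "numero")
summary(mlogit(voittaja ~ Fa + Ie - 1, data = x2))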


HTH,
Kimmo

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Boot: Confidence Interval

2011-12-19 Thread Prof Brian Ripley

On 19/12/2011 09:12, Vikram Bahure wrote:

Dear R users,

I have a very simple query.

I am using bootstrap (library(boot)). I want to access the std. error in
the results and further generate confidence interval(CI) for n no. of
samples which will give me n values for std. error and CI.
*
*
b1

ORDINARY NONPARAMETRIC BOOTSTRAP

Call:
boot(data = int.inc, statistic = med, R = 1000)
Bootstrap Statistics :
     original       bias    std. error
t1* 9245.998 -0.285591243.00874

It would be very helpful if you could suggest how to access the std.
error in the bootstrap output or how to generate the confidence interval with some other method.


For confidence intervals, see ?boot.ci
For the std.error, read boot:::print.boot, and note that this is just 
the standard deviation of the non-NA samples.
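
A minimal sketch of both, assuming the boot object is called b1 as in the
original post:

bci <- boot.ci(b1, type = c("perc", "bca"))   # percentile and BCa intervals
bci
sd(b1$t[, 1], na.rm = TRUE)                   # essentially the std. error printed by print.boot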


Note that 'boot' is support software for a book: please consult it if 
you need any further help.




Regards
Vikram

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


PLEASE do as we ask -- e.g. no HTML.

--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] block averaging data frames

2011-12-19 Thread Mathew Brown


Hi there,

This seems like it should be simple. I have a data frame of climate data 
sampled every 10 min. I want to average the entire data frame into 30 
min values (i.e., one value for each half hour).  Functions like 
running.mean give me a moving average but I want to reduce the size of 
the entire frame.. Any ideas how? Cheers!

Example of my data

            timestamp  Voltage LwTempDownelling LwDownwelling LwDownwelling_min LwDownwelling_max LwTempUpwelling
1 2011-11-01 00:00:00 2.732447            17.30          30.0              14.0              39.5           17.83
2 2011-11-01 00:10:00 2.731534            17.46          15.3              11.1              24.6           17.95
3 2011-11-01 00:20:00 2.731368            17.43          28.7              24.6              30.7           17.93
4 2011-11-01 00:30:00 2.730703            17.36          40.4              29.8              43.5           17.86
5 2011-11-01 00:40:00 2.729567            17.26          41.6              40.5              42.6           17.76
6 2011-11-01 00:50:00 2.728976            17.16          39.7


-M.B


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Mlogit missing value problem

2011-12-19 Thread Ville Iiskola
Hi

After some good advice I am posting this problem here again. The attached sample data
is now in csv form and is a part of my real data. I have tried different
na.action settings but the result stays the same...

My code

> library(mlogit)
> library(foreign)
> z<-odbcConnectExcel("D:\\SAMBLEDATA.csv")
> y<-sqlFetch(z,"Taul1")
> x=mlogit.data(y,choice="voittaja",shape="long",id.var="id",alt.var="numero")
> summary(mlogit(voittaja ~ Fa+Ie -1 , data=x, na.action=na.pass))

Error in if (abs(x - oldx) < ftol) { :
missing value where TRUE/FALSE needed

The reason for this error is in row 563, where the choice has value 1
and Ie has a missing value.
If the choice has value 0 and Ie has missing values, there are no errors.

What should I do to make it work?

Ville
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] None-linear equality constrained optimisation problems

2011-12-19 Thread Michael Griffiths
Dear R users,

I have a problem. I would like to solve the following:

I have

pL = 1/(1+e^(-b0+b1))

pM = 1/(1+e^(-b0))

pH = 1/(1+e^(-b0-b1))

My target function is

TF= mean(pL,pM,pH) which must equal 0.5%

My non-linear constraint is

nl.Const = 1-(pM/pH), which must equal 20%, and would like the values of
both b0 and b1 where these conditions are met.

I have searched widely for an answer, and did think that Rdonlp2 would
suffice, only to find it no longer supported. I have solved this using
Excel's solver function, however, because of the non-linear constraint I am
having problems finding the solution in R.

Can the community suggest another method by which this might be solved?

As ever, many thanks for your time.

Mike Griffiths


-- 

*Michael Griffiths, Ph.D
*Statistician

*Upstream Systems*

8th Floor
Portland House
Bressenden Place
SW1E 5BH



Tel   +44 (0) 20 7869 5147
Fax  +44 207 290 1321
Mob +44 789 4944 145

www.upstreamsystems.com

*griffi...@upstreamsystems.com *



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Boot: Confidence Interval

2011-12-19 Thread Vikram Bahure
Dear R users,

I have a very simple query.

I am using bootstrap (library(boot)). I want to access the std. error in
the results and further generate confidence interval(CI) for n no. of
samples which will give me n values for std. error and CI.
*
*
b1

ORDINARY NONPARAMETRIC BOOTSTRAP

Call:
boot(data = int.inc, statistic = med, R = 1000)
Bootstrap Statistics :
     original       bias    std. error
t1* 9245.998 -0.285591243.00874

It would be very helpful if you could suggest how to access the std.
error in the bootstrap output or how to generate the confidence interval with some other method.

Regards
Vikram

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Calculating the probability of an event at time "t" from a Cox model fit

2011-12-19 Thread aajit75
Dear R-users,

I would like to determine the probability of an event at a specific time using
a Cox model fit. On the development sample data I am able to get the
probability of an event at a time point (t).
I need the probability of an event at a specific time on a scoring
dataset, which has only the covariates and not the response variables.

Here is the sample code:

n = 1000
beta1 = 2; beta2 = -1; 
lambdaT = .02 # baseline hazard
lambdaC = .4  # hazard of censoring
x1 = rnorm(n,0)
x2 = rnorm(n,0)

# true event time
T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2)) 
C = rweibull(n, shape=1, scale=lambdaC)   #censoring time
time = pmin(T,C)  #observed time is min of censored and true
event = time==T   # set to 1 if event is observed
dataphr=data.frame(time,event,x1,x2)

library(survival)
fit_coxph <- coxph(Surv(time, event)~ x1 + x2 , method="breslow")

library(peperr)
predictProb.coxph(fit_coxph, Surv(dataphr$time, dataphr$event), dataphr,
0.003)

# Using the predictProb.coxph function, the probability of an event at time (t)
# is estimated for Cox model fits. I want to estimate this probability on the
# scoring dataset score_data below, which has covariates x1 and x2.

Is it possible, or is there any other way, to get these probabilities? The
predictProb.coxph function requires the response, which is not present in the
scoring sample.

n = 1
set.seed(1)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
score_data <- data.frame(x1,x2)
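
One possible approach (a sketch only, not based on predictProb.coxph):
survfit() can produce predicted survival curves for new covariate data, and
the event probability by time t is 1 - S(t | x). Refitting with an explicit
data argument is an assumption on my part, so that survfit() can locate the
training data.

fit_coxph <- coxph(Surv(time, event) ~ x1 + x2, data = dataphr, method = "breslow")
sf <- survfit(fit_coxph, newdata = score_data)   # one curve per row of score_data
s_t <- summary(sf, times = 0.003)$surv           # S(0.003 | x) for each row
p_event <- 1 - s_t                               # probability of the event by t = 0.003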


Thanks in advance!!
~ Ajit

--
View this message in context: 
http://r.789695.n4.nabble.com/Calculating-the-probability-of-an-event-at-time-t-from-a-Cox-model-fit-tp4213318p4213318.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] package.skeleton()

2011-12-19 Thread steven mosher
Here ben

 I have a tutorial on how to do it

http://stevemosher.wordpress.com/step-8-the-sample-package/
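
A minimal sketch of the package.skeleton() workflow (the function and package
names below are just illustrations, not from the original post):

myfun <- function(x) x^2                           # something in a clean session
package.skeleton(name = "mypkg", list = "myfun")   # or: code_files = "EsetObject.r"
# the skeleton is written under getwd(), e.g. file.path(getwd(), "mypkg")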



On Sun, Dec 18, 2011 at 11:49 PM, Petr PIKAL  wrote:

> Hi
>
> If I remember correctly I do
>
> start clear R -vanilla session
> copy my functions to it.
> run package.skeleton("some.name")
>
> which creates the some.name directory.
>
> Regards
> Petr
>
> > Hi Duncan,
> >
> > Thanks for your response.  That's the confusing thing, I didn't receive a
> > message and I can't seem to find the new source package directory
> > anywhere.  I would greatly appreciate any advice about what I might be
> > doing wrong.
> >
> > Happy Holidays,
> >
> > Ben
> >
> > On Fri, Dec 16, 2011 at 7:12 PM, Duncan Murdoch
> wrote:
> >
> > > On 11-12-16 4:12 PM, Ben Ganzfried wrote:
> > >
> > >> Hi--
> > >>
> > >> I'm creating an R package, I've read through "Writing R Extensions" and the
> > >> package.skeleton() R page-- and I'm still running into a little confusion.
> > >> I would greatly appreciate any advice you can provide.
> > >>
> > >> Where do I run my following line of code from?:
> > >>
> > >>> package.skeleton(name = "a", code_files = "EsetObject.r")
> > >>>
> > >>
> > >> I'm currently running it from Rgui, but when I type the line above nothing
> > >> happens.
> > >>
> > >>
> > > What is supposed to happen is that a new source package directory will be
> > > created in the current directory (what getwd() gives you).  Did that
> > > happen?  (I thought a message would also be printed and apparently you're
> > > not seeing that, but maybe it just worked without telling you.)
> > >
> > > Duncan Murdoch
> > >
> > >  Thank you very much.
> > >>
> > >> Ben
> > >>
> > >>[[alternative HTML version deleted]]
> > >>
> > >> __
> > >> R-help@r-project.org mailing list
> > >> https://stat.ethz.ch/mailman/listinfo/r-help
> > >> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > >> and provide commented, minimal, self-contained, reproducible code.
> > >>
> > >
> > >
> >
> >[[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.