Brian,
Thanks for the insights. I did manage to play around with the sample sizes and
I have got better results with both ROCR and pROC.
Thanks also to Andrija for providing the main code and insights.
Thanks a lot,
Taby
--- On Thu, 3/17/11, Brian Diggs dig...@ohsu.edu wrote:
Taby,
First, it is better to reply to the whole list (which I have included on
this reply); there is a better chance of someone helping you. Just
because I could help with one aspect does not mean I necessarily can (or
have the time to) help with more.
Further comments are inline below.
Taby,
At the end of your note are you referring to the bootstrap confidence
intervals in the external validation case, i.e., not corrected for
overfitting? If so you can get that without the bootstrap (e.g., Hmisc
package rcorr.cens function).
You can get bootstrap overfitting-corrected ROC
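For later readers, a minimal sketch of the rcorr.cens approach mentioned above,
on a binary outcome (the data here are simulated stand-ins, not Taby's; for a
0/1 outcome the reported C index equals the AUC):

library(Hmisc)

set.seed(1)
y    <- rep(c(0, 1), c(42, 165))   # 42 bad, 165 good (illustrative)
pred <- rnorm(207, mean = y)       # hypothetical scores

# rcorr.cens returns the concordance (C index) and related statistics;
# no bootstrap is needed for this external-validation estimate
rc <- rcorr.cens(pred, y)
rc["C Index"]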
Hello,
I modified code given by Andrija, a contributor on the list, to achieve two
objectives:
1. Create 1000 samples from a list of 207 observations, each sample
containing 20 good and 20 bad. This I have achieved.
2. Calculate the AUC for each of the 1000 samples. Here I get an error.
Please see the code below.
On 3/16/2011 8:04 AM, taby gathoni wrote:
data <- data.frame(id = 1:(165 + 42), main_samp$SCORE,
                   x = rep(c("BAD", "GOOD"), c(42, 165)))
f <- function(x) {
  str.sample <- list()
  for (i in 1:length(levels(x$x))) {
    # draw 20 rows with replacement from each level of x$x
    str.sample[[i]] <- x[x$x == levels(x$x)[i], ][
      sample(tapply(x$x, x$x, length)[i], 20, rep = TRUE), ]
  }
  # (the rest of the function was cut off in the original message)
}
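For later readers, a self-contained sketch of what appears to be intended:
1000 stratified resamples of 20 GOOD and 20 BAD, with an AUC from pROC
computed for each. The data and variable names here are illustrative
stand-ins for main_samp, not Taby's originals:

library(pROC)

set.seed(42)
# hypothetical data in place of main_samp: 42 BAD, 165 GOOD
dat <- data.frame(score = c(rnorm(42, 0), rnorm(165, 1)),
                  x = factor(rep(c("BAD", "GOOD"), c(42, 165))))

aucs <- replicate(1000, {
  # sample 20 row indices with replacement from each level of x
  idx <- unlist(lapply(split(seq_len(nrow(dat)), dat$x),
                       sample, size = 20, replace = TRUE))
  s <- dat[idx, ]
  as.numeric(auc(s$x, s$score, quiet = TRUE))
})

quantile(aucs, c(0.025, 0.975))   # percentile bootstrap interval for the AUC

Note this interval is not corrected for overfitting, which is the
distinction Frank raises above.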