"Koen Vermeer" <[EMAIL PROTECTED]> wrote in message 
news:<[EMAIL PROTECTED]>...
 
> -----LARGE SNIP---
> On the other hand, one could of course 'cross-validate' the ROC. For
> example, the ROCs of the several folds could be averaged in some way, or
> the individual tpr/fpr pairs could be cross-validated.
> 
> I would appreciate any comments on this!
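
(A side note on the "averaged in some way" idea above: the
usual recipe is vertical averaging -- interpolate each fold's
tpr onto a shared fpr grid and average across folds. A minimal
Python/scikit-learn sketch, with made-up names; fold_labels[k]
and fold_scores[k] are assumed to hold fold k's validation set
labels and scores.)

    import numpy as np
    from sklearn.metrics import roc_curve

    def vertical_average_roc(fold_labels, fold_scores, n_grid=101):
        """Average per-fold ROC curves on a common fpr grid."""
        fpr_grid = np.linspace(0.0, 1.0, n_grid)
        tprs = []
        for y_val, s_val in zip(fold_labels, fold_scores):
            fpr, tpr, _ = roc_curve(y_val, s_val)       # per-fold ROC
            tprs.append(np.interp(fpr_grid, fpr, tpr))  # fpr is non-decreasing
        mean_tpr = np.mean(tprs, axis=0)
        mean_tpr[0], mean_tpr[-1] = 0.0, 1.0            # pin the endpoints
        return fpr_grid, mean_tpr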

Combining XVAL and ROC could mean one of several approaches,
including the following (I'll assume 10-fold XVAL throughout):

1. 10-fold XVAL yields 10 designs and 10 associated
   validation set ROCs, each obtained by varying a single
   parameter (see the definition of ROC in my previous
   posts). The "best" design is chosen based on its
   validation set ROC (e.g., the area under all or part
   of the curve).

   The complaint here is that the best design is chosen
   via its performance on only 10% of the data. It would
   be nice if there were a way to include the design set
   information in an unbiased performance comparison.

   An alternative solution is to use 0.632 bootstrapping,
   which weights validation and design set errors in a
   0.632:0.368 ratio to obtain an unbiased estimate (see
   the first sketch after this list).

2. 10-fold XVAL yields 10 designs that are combined
   into an ensemble that weights their outputs or
   decisions (usually in a linear or log-linear
   combination). The weights are determined from the 10
   individual validation set ROCs to optimize whatever
   criterion was used in option 1. A final ROC for the
   ensemble is obtained using either all of the design
   and validation data or independent test set data that
   was not used for design or validation (see the second
   sketch after this list).
 
   The complaint here is that either all of the data
   used to characterize the generalization performance
   was involved in the design and validation processes,
   or the generalization performance is characterized by
   only a small fraction of the total data set.

   Again, the solution appears to be bootstrapping.
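
To make option 1 and the 0.632 alternative concrete, here is
a minimal Python/scikit-learn sketch. The data set, the
logistic regression "design", and all variable names are
placeholders of mine rather than anything from the original
problem; in the 0.632 estimate, the bootstrap out-of-bag
cases play the "validation" role.

    import numpy as np
    from sklearn.base import clone
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold

    # Stand-in data; the last 100 cases are held out as an
    # independent test set for the option 2 sketch below.
    X, y = make_classification(n_samples=600, random_state=0)
    X_dv, y_dv = X[:500], y[:500]              # design + validation pool
    X_test, y_test = X[500:], y[500:]          # untouched until the end
    base = LogisticRegression(max_iter=1000)   # stand-in "design"

    # Option 1: one design per fold, judged by its validation set AUC.
    designs, val_aucs = [], []
    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for design_idx, val_idx in kfold.split(X_dv, y_dv):
        model = clone(base).fit(X_dv[design_idx], y_dv[design_idx])
        val_scores = model.predict_proba(X_dv[val_idx])[:, 1]
        designs.append(model)
        val_aucs.append(roc_auc_score(y_dv[val_idx], val_scores))
    best_design = designs[int(np.argmax(val_aucs))]  # judged on ~10% of the data

    # 0.632 bootstrap: weight the apparent (design set) AUC and the
    # out-of-bag ("validation") AUC in a 0.368:0.632 ratio.
    def auc_632(base, X, y, n_boot=200, seed=0):
        rng = np.random.default_rng(seed)
        apparent = roc_auc_score(y, clone(base).fit(X, y).predict_proba(X)[:, 1])
        oob_aucs = []
        for _ in range(n_boot):
            boot = rng.integers(0, len(y), len(y))       # bootstrap resample
            oob = np.setdiff1d(np.arange(len(y)), boot)  # out-of-bag cases
            if len(np.unique(y[boot])) < 2 or len(np.unique(y[oob])) < 2:
                continue                                 # skip degenerate draws
            m = clone(base).fit(X[boot], y[boot])
            oob_aucs.append(roc_auc_score(y[oob], m.predict_proba(X[oob])[:, 1]))
        return 0.368 * apparent + 0.632 * float(np.mean(oob_aucs))

    print("0.632 AUC estimate:", auc_632(base, X_dv, y_dv))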
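
Option 2 can continue from the same sketch. Weighting each
fold model's output in proportion to its validation set AUC
is just one simple stand-in for "whatever criterion was used
in option 1"; the final ROC is then computed on the held-out
test data (X_test, y_test above), which saw neither design
nor validation.

    from sklearn.metrics import roc_curve

    # Continues the sketch above: designs, val_aucs, X_test, y_test.
    w = np.asarray(val_aucs)
    w = w / w.sum()                              # AUC-proportional weights
    ens_scores = np.column_stack(
        [m.predict_proba(X_test)[:, 1] for m in designs]) @ w

    fpr, tpr, _ = roc_curve(y_test, ens_scores)  # final ensemble ROC
    print("ensemble test set AUC:", roc_auc_score(y_test, ens_scores))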

These considerations support the bootstrapping suggestion
made by Frank Harrell in a previous post.

Hope this helps. 

Greg