On Sun, Oct 2, 2011 at 11:35 AM, Olivier Grisel wrote:
>
> > 100 pairs: avg=0.425, std=0.349106001094
> > 1000 pairs: avg=0.4725, std=0.354250970359
> > 1 pairs: avg=0.48235, std=0.352155473477
> >
> > So, it is pretty clear to me that what I have here is either not the
> > right features bui
Hi Mathieu,
averaging the ROC curves across folds (train/test splits) is one way:
http://scikit-learn.sourceforge.net/auto_examples/plot_roc_crossval.html
then you can compare the mean ROC curves for the different algorithms.
Just be careful not to estimate the model parameters using the test set.
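
(For concreteness, here is a minimal sketch of that averaging, in the
spirit of the linked example. It uses the current scikit-learn module
layout, which differs from the 2011 API, and the data set and the two
classifiers are just placeholders:)

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# placeholder data; substitute your own X, y
X, y = make_classification(n_samples=500, random_state=0)

# the n competing classifiers to compare
classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "svc": SVC(probability=True),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
mean_fpr = np.linspace(0, 1, 100)  # shared FPR grid so fold curves can be averaged

for name, clf in classifiers.items():
    tprs = []
    for train_idx, test_idx in cv.split(X, y):
        # fit on the train fold only -- never estimate parameters on the test fold
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        fpr, tpr, _ = roc_curve(y[test_idx], scores)
        # interpolate this fold's TPR onto the shared grid
        interp_tpr = np.interp(mean_fpr, fpr, tpr)
        interp_tpr[0] = 0.0
        tprs.append(interp_tpr)
    mean_tpr = np.mean(tprs, axis=0)
    mean_tpr[-1] = 1.0
    print(name, "mean AUC across folds:", auc(mean_fpr, mean_tpr))

The key step is interpolating each fold's TPR onto a common FPR grid so
the per-fold curves can be averaged point-wise; you can then plot the mean
curves (and their spread across folds) for the different classifiers and
prefer the one whose mean curve dominates, or simply the higher mean AUC.
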
hi,
I am looking for advice on how to pick a classifier among n competing
classifiers when they are evaluated on more than a single training/test data
set, i.e., I would like to compare, for each classifier, the set of ROC
curves generated from each training/test data set. Is there an
es