Hi Vincent, thanks for pointing me to this! Looks like a great resource.
I'll be on the lookout for the searchlight code, but this looks quite
helpful in a broader sense, so thanks again.
Michael
On Fri, Sep 30, 2011 at 1:14 AM, Vincent Michel wrote:
> Hi Michael,
>
> The NISL project on Gith
So, is there a bibtex entry anywhere? I looked around in a few places
because I'd like to cite the project, but I didn't find one, so instead
I'll just leave a footnote with the sourceforge URL, unless anyone has a
better suggestion.
Conrad
On Wed, Sep 21, 2011 at 4:20 PM, Olivier Grise
2011/10/1 mathieu lacage :
> hi,
>
> I am looking for advice on how to pick a classifier among n competing
> classifiers when they are evaluated on more than a single training/test data
> set. i.e., I would like to compare, for each classifier, the set of roc
> curves that are generated from each t
Hi Mathieu,
Averaging the ROC curves across folds (train/test splits) is one way:
http://scikit-learn.sourceforge.net/auto_examples/plot_roc_crossval.html
Then you can compare the mean ROC curves for the different algorithms.
Just be careful not to estimate the model parameters using the test set.
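For what it's worth, here is a minimal sketch of that averaging: fit on each training fold, compute the fold's ROC on the held-out fold, interpolate each curve onto a common FPR grid, and average. The dataset, classifier, and parameter choices below are my own for illustration, and the imports follow the current scikit-learn layout rather than the sourceforge-era API shown in the linked example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

# Toy binary classification problem (stand-in for your own data).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

clf = SVC(kernel="linear", probability=True, random_state=0)
mean_fpr = np.linspace(0.0, 1.0, 100)  # common FPR grid for all folds
tprs = []

for train, test in StratifiedKFold(n_splits=5).split(X, y):
    # Fit on the training fold only; scores come from the held-out fold,
    # so no model parameter is ever estimated on the test data.
    probas = clf.fit(X[train], y[train]).predict_proba(X[test])
    fpr, tpr, _ = roc_curve(y[test], probas[:, 1])
    # Resample this fold's ROC curve onto the common grid.
    interp_tpr = np.interp(mean_fpr, fpr, tpr)
    interp_tpr[0] = 0.0
    tprs.append(interp_tpr)

mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
print("Mean AUC over 5 folds: %0.2f" % mean_auc)
```

Running this once per competing classifier gives one mean curve (and mean AUC) per algorithm on the same folds, which makes them directly comparable.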
hi,
I am looking for advice on how to pick a classifier among n competing
classifiers when they are evaluated on more than a single training/test data
set. i.e., I would like to compare, for each classifier, the set of roc
curves that are generated from each training/test data set. Is there an
es