Hi Michal,

One way is to roll your own cross-validation routine; it's not very
complicated once specialised to a particular task.
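For instance, a hand-rolled loop might look like the sketch below, which
keeps every fitted classifier alongside its fold score. The dataset and
estimator here are just placeholders, and I'm using whatever
cross-validation splitter your sklearn version provides (shown with the
model_selection module; in older releases the equivalent classes live in
sklearn.cross_validation):

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Placeholder data and estimator; substitute your own.
X, y = load_iris(return_X_y=True)
base = LogisticRegression(max_iter=1000)

models, scores = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    est = clone(base)  # fresh, unfitted copy per fold, as cross_val_score does
    est.fit(X[train_idx], y[train_idx])
    models.append(est)  # keep the full classifier for later analysis
    scores.append(est.score(X[test_idx], y[test_idx]))

print(len(models), np.mean(scores))
```

After the loop, `models` holds one fitted classifier per fold, ready for
visualisation or inspection of coefficients.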

I have also previously proposed that cross_val_score and
Randomized/GridSearchCV accept an arbitrary callback parameter through
which the model or other diagnostic information could be returned. The
right interface for this sort of thing is still uncertain.

Finally, you could consider my "remember" branch:
https://github.com/jnothman/scikit-learn/tree/remember. It provides
sklearn.memo.remember_model, which can wrap your base estimator, and will
save a joblib dump of each model (in the directory specified by the memory
parameter). However, to recover these models, the easiest way is to call
fit() again on the remembered model, with the right portion of training
data (and parameters if using grid search). [I am sorry this requires a
patch/branch rather than a gist, but this functionality necessitates a
polymorphic implementation of sklearn.base.clone.]

Cheers,

- Joel


On 1 April 2014 06:23, Michal Romaniuk <[email protected]> wrote:

> Hi,
>
> I am working on a problem where, in addition to the cross-validation
> scores, I would like to be able to also record the full classifiers for
> further analysis (visualisation etc.) Is there a way to do this?
>
> I tried to build a custom scoring function that returns a tuple of
> different metrics (including the classifier itself) but it didn't work
> as the scoring function seems to be required to return a number.
>
> Thanks,
> Michal
>
>
> ------------------------------------------------------------------------------
> _______________________________________________
> Scikit-learn-general mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
>