Hi Michal,
One way is to roll your own cross validation routine; it's not very
complicated when specialised to a particular task.
I have also previously proposed that cross_val_score and
Randomized/GridSearchCV provide an arbitrary callback parameter that could
return the model or other diagnostics.
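
Roughly, such a hand-rolled loop might look like the sketch below (the estimator, the random toy data and the 5-fold StratifiedKFold are only placeholders; it assumes the current sklearn.cross_validation module):

    import numpy as np
    from sklearn.base import clone
    from sklearn.cross_validation import StratifiedKFold
    from sklearn.svm import SVC

    # Toy data and estimator -- substitute your own.
    X = np.random.randn(100, 5)
    y = np.random.randint(0, 2, 100)
    base_clf = SVC(kernel='linear')

    fitted_models, scores = [], []
    for train_idx, test_idx in StratifiedKFold(y, n_folds=5):
        clf = clone(base_clf)                # fresh, unfitted copy per fold
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
        fitted_models.append(clf)            # keep the fitted classifier

fitted_models then holds one trained classifier per fold, available for visualisation or coefficient inspection alongside the usual scores.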
Hi,
I am working on a problem where, in addition to the cross-validation
scores, I would also like to record the full classifiers for further
analysis (visualisation etc.). Is there a way to do this?
I tried to build a custom scoring function that returns a tuple of
different metrics (i
On 28 March 2014 08:59, Lars Buitinck wrote:
> 2014-03-19 1:15 GMT+01:00 Anitha Gollamudi :
>> Looks like the ValueError is haunting me still. I am trying to load a
>> multi-label libSVM format data file (sample pasted below) as:
>>
>> X_train, y_train = load_svmlight_file("testtrain.txt", dtype=
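
For the record, one frequent cause of that ValueError is leaving multilabel at its default of False; below is a minimal sketch of reading such a file with multilabel=True (filename taken from the quoted post, and this is only a guess at the fix, not necessarily the reply that followed):

    from sklearn.datasets import load_svmlight_file

    # With multilabel=True, y_train is returned as a list of label tuples,
    # one tuple per sample, rather than a single 1-d array.
    X_train, y_train = load_svmlight_file("testtrain.txt", multilabel=True)

    print(X_train.shape)
    print(y_train[:5])   # e.g. [(1.0, 3.0), (2.0,), ...]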
Hi,
Thanks for the reply.
I hadn't spotted that I had forgotten to take the square root of the variance.
Manually entering the standard deviations helped me narrow it down. The default
min_covar was too high for my data; with it set low enough, everything now works.
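
In case it helps anyone else hitting the same thing, here is a minimal sketch of what lowering min_covar looks like (the toy data and the 1e-7 value are only illustrative; assumes sklearn.mixture.GMM, whose min_covar default is 1e-3):

    import numpy as np
    from sklearn.mixture import GMM

    # Features whose true variances are far below the default floor of
    # min_covar=1e-3, so the floor would dominate the estimated covariances.
    X = np.random.randn(500, 2) * 1e-3

    gmm = GMM(n_components=2, covariance_type='diag', min_covar=1e-7)
    gmm.fit(X)
    print(gmm.covars_)   # per-component variances, no longer clipped at 1e-3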
Matt.
Hi,
there are currently a number of unattended pull requests for sklearn on
GitHub in the area of Gaussian processes. If someone on the mailing list
has experience with sklearn's implementation of GPs, it would be nice to
get some feedback or get the PRs merged in.
Two PRs (https://github.com/s