Thank you all for helping out!
Best,
Yuan
On Mon, Dec 16, 2013 at 4:58 PM, Lars Buitinck wrote:
> 2013/12/16 Joel Nothman :
> > In some parts of the codebase, we have near duplicate implementations with
> > different names (e.g. classifiers and regressors), but for metrics we make
> > implicit distinctions on the basis of y. What's the right choice here?
2013/12/16 Joel Nothman :
> In some parts of the codebase, we have near duplicate implementations with
> different names (e.g. classifiers and regressors), but for metrics we make
> implicit distinctions on the basis of y. What's the right choice here?
Yes, we've overloaded the metrics too much.
On Tue, Dec 17, 2013 at 2:19 AM, Arnaud Joly wrote:
> This is a known problem. Joel has proposed a solution
> to this problem in https://github.com/scikit-learn/scikit-learn/pull/2610.
Although part of me thinks this default behaviour will stay, if only to
make scoring='f1' work by default.
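(For readers outside the thread: scoring='f1' is the scorer string accepted by the
model-selection helpers, which for binary targets reports the positive-class F1.
A minimal sketch of that usage follows; the dataset and estimator are illustrative
and not from this thread, and the helper lived in sklearn.cross_validation in
2013-era releases before moving to sklearn.model_selection.)

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in 2013-era releases
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, random_state=0)
# scoring='f1' scores each fold with the positive-class F1
print(cross_val_score(LinearSVC(), X, y, scoring='f1', cv=5).mean())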
Hi,
Your problem is a binary classification task. In that
case, the f1_score function returns the binary classification
F1 score, i.e. the score for the positive class only.
In order to get a score averaged over all classes, you have to set
pos_label to None.
For example,
In [2]: gt = [0, 0, 1, 1, 0, 0, 1, 1, 0]
In [3]: from sklearn.metrics import f1_score
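(The session above is cut off; a minimal sketch of how it presumably continued,
reusing the pr array from the original question below. The pos_label=None
behaviour is that of the scikit-learn releases current at the time of this
thread; later releases select binary versus averaged scoring via the average
parameter instead.)

from sklearn.metrics import f1_score

gt = [0, 0, 1, 1, 0, 0, 1, 1, 0]
pr = [0, 0, 1, 0, 0, 0, 1, 0, 0]

# Per-class F1: class 0 -> 10/12 ~= 0.833, class 1 -> 2/3 ~= 0.667
f1_score(gt, pr, pos_label=None, average='macro')  # (0.833 + 0.667) / 2 = 0.75
f1_score(gt, pr, pos_label=None, average='micro')  # 7 correct of 9 predictions ~= 0.778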
Hi,
I am having trouble using the macro and micro averaged f1_score as shown
below
>>> gt = [0, 0, 1, 1, 0, 0, 1, 1, 0];
>>> gt
[0, 0, 1, 1, 0, 0, 1, 1, 0]
>>> pr = [0, 0, 1, 0, 0, 0, 1, 0, 0];
>>> from sklearn.metrics import f1_score
>>> f1_score(gt, pr, average='macro')
0.3
>>> f