On 9/28/11 5:24 PM, [email protected] wrote:
Hi,

I am testing the Chunker, but I'm failing to get the same results as in
1.5.1.

1.5.1:

Precision: 0.9255923572240226
Recall: 0.9220610430991112
F-Measure: 0.9238233255623465

1.5.2:

Precision: 0.9257575757575758
Recall: 0.9221868187154117
F-Measure: 0.9239687473746113
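(As a quick sanity check on the figures above, not part of the original report: the F-measure the evaluator prints should simply be the harmonic mean of precision and recall. A short Python sketch, assuming the standard F1 definition:)

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Figures reported for the two releases above:
runs = [
    (0.9255923572240226, 0.9220610430991112, 0.9238233255623465),  # 1.5.1
    (0.9257575757575758, 0.9221868187154117, 0.9239687473746113),  # 1.5.2
]
for p, r, f_reported in runs:
    # Each reported F-measure matches 2*P*R/(P+R) to rounding error,
    # so the small 1.5.1 vs 1.5.2 gap comes from precision/recall,
    # not from a change in how F is computed.
    assert abs(f_measure(p, r) - f_reported) < 1e-9
```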


Maybe it is related to this
https://issues.apache.org/jira/browse/OPENNLP-242

Or to this note from the release notes:

The results of the tagging performance may differ compared to the 1.5.1
release, since a bug was corrected in the event filtering.

What should we do?



I guess it is related to OPENNLP-242. I couldn't find the JIRA issue for the second one, but as far as I know it only affects the perceptron. Does anyone remember what this is about?

Could you undo OPENNLP-242 and see if the result is identical again? You could also
test the model from 1.5.2 with 1.5.1 to see if it was trained differently.

Anyway, it doesn't look like we have a regression here.

Jörn
