On 9/29/2011 5:44 AM, [email protected] wrote:
> On Wed, Sep 28, 2011 at 9:50 PM, James Kosin <[email protected]> wrote:
>
>> On 9/28/2011 1:59 PM, [email protected] wrote:
>>>> On Wed, Sep 28, 2011 at 1:20 PM, Jörn Kottmann <[email protected]> wrote:
>>>> On 9/28/11 5:24 PM, [email protected] wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am testing the Chunker, but I'm failing to get the same results as in
>>>>> 1.5.1.
>>>>>
>>>>> 1.5.1:
>>>>>
>>>>> Precision: 0.9255923572240226
>>>>> Recall: 0.9220610430991112
>>>>> F-Measure: 0.9238233255623465
>>>>>
>>>>> 1.5.2:
>>>>>
>>>>> Precision: 0.9257575757575758
>>>>> Recall: 0.9221868187154117
>>>>> F-Measure: 0.9239687473746113
>>>>>
>>>>>
>>>>> Maybe it is related to this
>>>>> https://issues.apache.org/jira/browse/OPENNLP-242
>>>>> Or related to this:
>>>>>
>>>>> The results of the tagging performance may differ compared to the 1.5.1
>>>>> release, since a bug was corrected in the event filtering.
>>>>>
>>>>> What should we do?
>>>>>
>>>>>
>>>>>
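For reference, the F-Measure reported above is the harmonic mean of the
Precision and Recall figures, and numbers like these come out of the chunker
evaluator. Below is a minimal sketch of how such an evaluation can be run with
the OpenNLP 1.5.x API; the model and test-file paths are placeholders, and the
test data is assumed to be in the usual CoNLL-2000 "word POS-tag chunk-tag"
format:

    import java.io.FileInputStream;
    import java.io.InputStreamReader;

    import opennlp.tools.chunker.ChunkSample;
    import opennlp.tools.chunker.ChunkSampleStream;
    import opennlp.tools.chunker.ChunkerEvaluator;
    import opennlp.tools.chunker.ChunkerME;
    import opennlp.tools.chunker.ChunkerModel;
    import opennlp.tools.util.ObjectStream;
    import opennlp.tools.util.PlainTextByLineStream;

    public class ChunkerEvalSketch {

        public static void main(String[] args) throws Exception {
            // Placeholder model path.
            ChunkerModel model =
                new ChunkerModel(new FileInputStream("en-chunker.bin"));

            ChunkerEvaluator evaluator =
                new ChunkerEvaluator(new ChunkerME(model));

            // Placeholder test file, one "word POS-tag chunk-tag" per line.
            ObjectStream<ChunkSample> samples = new ChunkSampleStream(
                new PlainTextByLineStream(
                    new InputStreamReader(
                        new FileInputStream("test.txt"), "UTF-8")));

            evaluator.evaluate(samples);
            samples.close();

            // Precision, Recall and their harmonic mean (F-Measure).
            System.out.println("Precision: "
                + evaluator.getFMeasure().getPrecisionScore());
            System.out.println("Recall: "
                + evaluator.getFMeasure().getRecallScore());
            System.out.println("F-Measure: "
                + evaluator.getFMeasure().getFMeasure());
        }
    }
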
>>>> I guess it is related to OPENNLP-242. I couldn't find the jira for the
>>>> second one, but as far as I know it only affects the perceptron. Does
>>>> anyone remember what this is about?
>>>>
>>>> Could you undo OPENNLP-242 and see if the result is identical again? You
>>>> could also test the model from 1.5.2 with 1.5.1 to see if it was trained
>>>> differently.
>>>>
>>> I reverted OPENNLP-242 and got the same result we had in 1.5.1. So it is
>>> indeed issue 242.
>>>
>>>
>>>> Anyway, it doesn't look like we have a regression here.
>>>>
>>>> Jörn
>>>>
>>> Thanks,
>>> William
>>>
>> William,
>>
>> The training looks like it may be identical.  Could there be something
>> in the changes you made to the evaluator that may be causing the
>> differences?  I'm also getting different results for the namefinder and
>> its output.  The training output is identical to the 1.5.1 series, but
>> the F-measure, Recall, and Precision are different.
>>
>> James
>>
> James,
>
> The Chunker evaluator and cross validator tools were not using the sequence
> validator, but the runtime tool was. We fixed that in OPENNLP-242. I tried
> reverting the changes related to the issue and got exactly the same result
> we had in 1.5.1.
>
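As a rough illustration of the wiring involved, the sequence validator can be
passed to the chunker that the evaluator wraps, so that the evaluation applies
the same outcome filtering as the runtime tool. This is only a sketch,
assuming the 1.5.x ChunkerME constructor that accepts a SequenceValidator; the
model path is a placeholder:

    import java.io.FileInputStream;

    import opennlp.tools.chunker.ChunkerEvaluator;
    import opennlp.tools.chunker.ChunkerME;
    import opennlp.tools.chunker.ChunkerModel;
    import opennlp.tools.chunker.DefaultChunkerSequenceValidator;

    public class SequenceValidatorSketch {

        public static void main(String[] args) throws Exception {
            // Placeholder model path.
            ChunkerModel model =
                new ChunkerModel(new FileInputStream("en-chunker.bin"));

            // The default validator rejects inconsistent outcome sequences,
            // e.g. an I- tag that does not continue the preceding chunk type.
            // Assumes the 1.5.x constructor taking a SequenceValidator.
            ChunkerME chunker = new ChunkerME(model, ChunkerME.DEFAULT_BEAM_SIZE,
                new DefaultChunkerSequenceValidator());

            // Evaluating this chunker applies the same filtering the
            // command line chunker uses at runtime.
            ChunkerEvaluator evaluator = new ChunkerEvaluator(chunker);
            // ... then feed ChunkSample objects via evaluator.evaluate(...).
        }
    }
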
> I checked the issues we solved in 1.5.2 and there are lots of items that
> may affect the results.
>
> Is the difference you see big? Is it for the worse?
>
> Thanks,
> William
>
William,

It is a small change, but I can't attribute it to any difference that
would have affected the scores.

The scores got a little worse.

James
