[ 
https://issues.apache.org/jira/browse/OPENNLP-722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175062#comment-14175062
 ] 

Vinh Khuc commented on OPENNLP-722:
-----------------------------------

It looks like this issue stems from a change in floating-point behavior on Java 8. 

If, at this line 
https://github.com/apache/opennlp/blob/trunk/opennlp-tools/src/main/java/opennlp/tools/ml/perceptron/PerceptronTrainer.java#L264
 stepsize is changed from double to float, the test passes, although the 
training accuracy differs. 

stepsize appears only in the expression "stepsize*values[ei][ci]" used when 
estimating the parameters. Note that values[ei][ci] is actually a float. 
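As a minimal standalone sketch (not OpenNLP code; the literals are illustrative, not the actual trainer values), Java's numeric promotion rules explain why the declared type of stepsize changes the product: with a double stepsize the float operand is widened and the multiplication is carried out in double precision, whereas with a float stepsize the product is computed and rounded in float first.

```java
// Illustrative only: shows how double-vs-float stepsize changes the
// result of stepsize * values[ei][ci] under Java's promotion rules.
public class StepsizePromotion {
    public static void main(String[] args) {
        float value = 0.3f;   // stands in for values[ei][ci], which is a float

        double dStep = 0.1;   // stepsize as currently declared (double)
        float  fStep = 0.1f;  // stepsize after the proposed change (float)

        // double * float: 'value' is widened to double,
        // so the product is computed in double precision.
        double prodDouble = dStep * value;

        // float * float: the product is computed and rounded in float
        // precision, then widened to double on assignment.
        double prodFloat = fStep * value;

        System.out.println("double stepsize: " + prodDouble);
        System.out.println("float  stepsize: " + prodFloat);
        System.out.println("equal? " + (prodDouble == prodFloat));
    }
}
```

For these operands the two products round differently, so accumulated parameter updates can drift apart over many iterations even though each single product differs only in the low-order bits.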

Still haven't figured out why. 

> PerceptronPrepAttachTest fails only on Java 8
> ---------------------------------------------
>
>                 Key: OPENNLP-722
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-722
>             Project: OpenNLP
>          Issue Type: Bug
>          Components: Build, Packaging and Test
>    Affects Versions: 1.6.0
>            Reporter: Joern Kottmann
>            Assignee: Joern Kottmann
>            Priority: Minor
>             Fix For: 1.6.0
>
>
> The test 
> PerceptronPrepAttachTest.testPerceptronOnPrepAttachDataWithStepSizeDecrease 
> fails if executed on Java 8.
> It would be really nice to track down the cause of that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
