[ https://issues.apache.org/jira/browse/LUCENE-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007103#comment-15007103 ]

Adrien Grand commented on LUCENE-6896:
--------------------------------------

+1

I'm curious what the reasoning is for
{code}NORM_TABLE[0] = 1.0f / NORM_TABLE[255];{code}

Is it just a way to get a large float value that is unlikely to overflow 
to Infinity (e.g. when multiplied), or is it more than that?
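
For context, a minimal sketch of the decode-table pattern that line sits in 
(the 1.0f / (f * f) decode and the class name are my assumptions here, not 
necessarily the patch): SmallFloat.byte315ToFloat((byte) 0) returns 0.0f, so 
without the override the first table entry would be +Infinity.

{code}
import org.apache.lucene.util.SmallFloat;

// Sketch only: shows why table entry 0 needs special handling.
class NormDecodeSketch {
  static final float[] NORM_TABLE = new float[256];
  static {
    for (int i = 1; i < 256; i++) {
      float f = SmallFloat.byte315ToFloat((byte) i);
      NORM_TABLE[i] = 1.0f / (f * f); // decoded doc length per squared boost
    }
    // byte315ToFloat(0) == 0.0f, so 1.0f / (f * f) would be +Infinity here;
    // reusing 1.0f / NORM_TABLE[255] gives a large but finite value instead.
    NORM_TABLE[0] = 1.0f / NORM_TABLE[255];
  }
}
{code}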

> Fix/document various Similarity bugs around extreme norm values
> ---------------------------------------------------------------
>
>                 Key: LUCENE-6896
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6896
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Robert Muir
>             Fix For: 6.0, 5.4
>
>         Attachments: LUCENE-6896.patch
>
>
> Spinoff from LUCENE-6818:
> [~iorixxx] found problems with every Similarity (except ClassicSimilarity) 
> when trying to test how they behave on every possible norm value, to ensure 
> they are robust for all index-time boosts.
> There are several problems:
> 1. A buggy normalization decode causes the smallest possible norm value 
> (0) to be treated as an infinitely long document. These values are 
> intended to decode to non-negative finite values, and going to infinity 
> breaks everything.
> 2. Various problems in the less practical scoring functions that already 
> carry documented warnings that they misbehave on extreme values. These 
> affect DFR models D, Be, and P, and the IB distribution SPL.
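
For reference, a quick way to reproduce problem 1 above, assuming norms use 
the SmallFloat 3.8 scheme (the demo class name is hypothetical; SmallFloat 
and its methods are real):

{code}
import org.apache.lucene.util.SmallFloat;

public class NormZeroDemo {
  public static void main(String[] args) {
    // An index-time boost of 0 encodes to the smallest norm byte...
    byte encoded = SmallFloat.floatToByte315(0f);        // 0
    // ...which decodes back to 0.0f,
    float decoded = SmallFloat.byte315ToFloat(encoded);  // 0.0f
    // so any similarity that inverts the decoded value sees an
    // "infinitely long" document:
    System.out.println(1.0f / (decoded * decoded));      // Infinity
  }
}
{code}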


