Todd Long wrote
> I'm curious as to where the loss of precision would be when using
> "-(Double.MAX_VALUE)" as you mentioned? Also, any specific reason why you
> chose that over Double.MIN_VALUE (sorry, just making sure I'm not missing
> something)?

So, to answer my own question: it looks like Double.MIN_VALUE is somewhat
misleading (or perhaps poorly named)... the Javadoc states that it is "A
constant holding the smallest positive nonzero value of type double". In
that case, the cast to int/long would result in 0 due to the loss of
precision, which is definitely not what I want (and puts me back at the
original issue). It would certainly seem that -Double.MAX_VALUE is the way
to go! This is something I was not aware of with Double... thank you.
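For anyone following along, a quick self-contained sketch of the behavior
(plain Java, nothing Solr-specific assumed):

    public class SentinelDemo {
        public static void main(String[] args) {
            // Double.MIN_VALUE is the smallest POSITIVE nonzero double
            // (~4.9E-324), not the most negative one, so narrowing it to
            // an integral type truncates toward zero... i.e. you get 0.
            System.out.println((long) Double.MIN_VALUE);  // prints 0
            System.out.println((int) Double.MIN_VALUE);   // prints 0

            // -Double.MAX_VALUE is the most negative finite double, so it
            // will sort below any real field value.
            System.out.println(-Double.MAX_VALUE);  // -1.7976931348623157E308
        }
    }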


Chris Hostetter wrote
> ...I mention this as being a workaround for floats/doubles because the
> functions are evaluated as doubles (no "casting" or "forced integer
> context" type support at the moment), so with integer/float fields there
> would be some loss of precision.

I'm still curious whether there would be any casting issue going from
double to int/long within the "def()" function. Any additional details
would be greatly appreciated.
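In the meantime, here is a minimal sketch of what a plain Java narrowing
cast does with these values. I'm only assuming the function output
eventually goes through a (long)/(int) style conversion, which may not
match what def() actually does internally:

    public class NarrowingDemo {
        public static void main(String[] args) {
            // Per the JLS, a double too negative for the target type
            // saturates at that type's MIN_VALUE (it does not wrap).
            System.out.println((long) -Double.MAX_VALUE); // Long.MIN_VALUE
            System.out.println((int) -Double.MAX_VALUE);  // Integer.MIN_VALUE

            // Large longs also lose precision on a round trip through
            // double, which is the loss-of-precision concern mentioned
            // above for non-double fields.
            long big = Long.MAX_VALUE - 1;
            System.out.println((long) (double) big == big); // false
        }
    }

So at least at the language level the sentinel would clamp to the integral
type's minimum rather than producing garbage.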


