Thanks, that clears it up. Sorry to confuse things with a basic Java mistake.
I think Lucene is a great library. My question for the future is whether
others perceive a need to expand the capabilities of Document or Field
Boosts. I feel they are limited by both the number of allowed boost
values [...] setSimilarity?
Dan
-----Original Message-----
From: Doug Cutting [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 16, 2004 9:28 PM
To: Lucene Developers List
Subject: Re: Explanations and overridden similarity
Dan Climan wrote:
> Shouldn't the call to Similarity.decodeNorm go through the overridden
> similarity?
Using Lucene 1.4.2, I've been testing a new similarity that overrides
encodeNorm and decodeNorm. While testing, I've been running queries with
explanations, and I noticed that the explain method of the
TermQuery.TermWeight class contains the following:
Explanation fieldNormExpl = new Explanation();
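// Continuation of the snippet, sketched from memory against the 1.4.x
// source (not guaranteed verbatim). The point of interest: decodeNorm is
// invoked statically on Similarity, so a decodeNorm defined in a
// Similarity subclass is never consulted here.
byte[] fieldNorms = reader.norms(field);
float fieldNorm =
  fieldNorms != null ? Similarity.decodeNorm(fieldNorms[doc]) : 0.0f;
fieldNormExpl.setValue(fieldNorm);
fieldNormExpl.setDescription("fieldNorm(field=" + field + ", doc=" + doc + ")");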
I'm experimenting with Document boosts and finding them effective for
certain types of scoring enhancements. My concern is that, because of the
way they are stored (i.e., as a single encoded byte), there are not enough
distinct boost values to cover typical boosting. I've written a custom
Similarity (i.e., one that overrides encodeNorm and decodeNorm).
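To make the precision concern concrete, here is a minimal sketch (assuming
the 1.4.x API, where encodeNorm and decodeNorm are public static methods on
Similarity) that round-trips a few boost values through the one-byte norm
encoding:

import org.apache.lucene.search.Similarity;

public class NormPrecisionDemo {
  public static void main(String[] args) {
    // Each norm is stored as a single byte, so at most 256 distinct
    // boost values survive indexing; nearby boosts collapse together.
    float[] boosts = { 1.0f, 1.05f, 1.1f, 1.25f, 2.0f, 7.3f };
    for (int i = 0; i < boosts.length; i++) {
      byte encoded = Similarity.encodeNorm(boosts[i]);
      System.out.println(boosts[i] + " -> byte " + encoded
          + " -> " + Similarity.decodeNorm(encoded));
    }
  }
}

Note also that because those methods are static, a subclass can only hide
them, not override them, so the static call in explain above will never see
a custom decodeNorm.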
I wanted to test several strategies for Document boosting. It seemed like
the only way to do this was to reindex every Document and call setBoost,
which would take a long time. I had an idea for how to do this without
reindexing, and I was curious whether there was a better strategy or
additional options I hadn't considered.
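Roughly, the idea was something like the following (a sketch only, assuming
IndexReader.setNorm from 1.4.x, which re-encodes a float boost and
overwrites the stored norm byte in place; the index path, field name, and
computeBoost are placeholders):

import org.apache.lucene.index.IndexReader;

public class RewriteBoosts {
  public static void main(String[] args) throws Exception {
    IndexReader reader = IndexReader.open("/path/to/index"); // placeholder
    try {
      for (int doc = 0; doc < reader.maxDoc(); doc++) {
        if (reader.isDeleted(doc)) continue;
        // Overwrite the stored norm for this document/field without
        // reindexing anything.
        reader.setNorm(doc, "contents", computeBoost(doc));
      }
    } finally {
      reader.close();
    }
  }

  // Placeholder for whatever boosting strategy is being tested.
  static float computeBoost(int doc) {
    return 1.0f + (doc % 5) * 0.5f;
  }
}

The caveat is that this overwrites the entire stored norm, i.e. the boost *
lengthNorm product, so the lengthNorm component is lost unless it is folded
back into the new value.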
I was trying to test whether the Document Boosts I calculate and add during
indexing were being preserved correctly.
I understand that what's actually preserved by default is Field Boost *
Document Boost * lengthNorm. I'm using the default similarity and initially
had no field boosts or document boosts.
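A quick way to sanity-check that (a sketch, assuming the 1.4.x defaults,
where lengthNorm is 1/sqrt(number of terms in the field) and the product is
quantized through encodeNorm):

import org.apache.lucene.search.Similarity;

public class NormCheck {
  public static void main(String[] args) {
    float fieldBoost = 1.0f; // hypothetical values for illustration
    float docBoost = 3.0f;
    int numTerms = 100;
    // Default lengthNorm in 1.4.x: 1 / sqrt(number of terms in the field).
    float lengthNorm = (float) (1.0 / Math.sqrt(numTerms));
    float expected = fieldBoost * docBoost * lengthNorm;
    // The product is squeezed into one byte at index time, so what
    // decodeNorm returns is only an approximation of expected.
    byte stored = Similarity.encodeNorm(expected);
    System.out.println("expected=" + expected
        + " preserved=" + Similarity.decodeNorm(stored));
  }
}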