Have you tried explicitly setting norms on/off the way you want with
Field.setOmitNorms(boolean)?
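For the record, something along these lines should do it (a rough sketch assuming the Lucene 3.x Field API; the field name and value are just placeholders):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class OmitNormsExample {
    public static Document build() {
        Document doc = new Document();
        // analyzed field, but explicitly tell Lucene not to write norms for it
        Field body = new Field("body", "hello world", Field.Store.YES, Field.Index.ANALYZED);
        body.setOmitNorms(true);
        doc.add(body);
        return doc;
    }
}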
--
Ian.
On Thu, Nov 18, 2010 at 12:54 AM, Pulkit Singhal
pulkitsing...@gmail.com wrote:
Based on my experimentation and what it says in the Lucene in Action (2nd edition) book:
using a KeywordAnalyzer on
BTW, for a search that requires a condition, I can express the condition as a Filter
and then filter the results.
Alternatively, I can build a BooleanQuery from the condition, just like the
code in the range search. I wonder which is better?
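For comparison, here is a rough sketch of both options (assuming Lucene 3.x; the "price" range is just a made-up condition):

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermRangeFilter;
import org.apache.lucene.search.TermRangeQuery;
import org.apache.lucene.search.TopDocs;

public class ConditionExamples {
    // Option 1: apply the condition as a Filter around the main query
    public static TopDocs withFilter(IndexSearcher searcher, Query mainQuery) throws Exception {
        Filter condition = new TermRangeFilter("price", "010", "020", true, true);
        return searcher.search(mainQuery, condition, 10);
    }

    // Option 2: fold the condition into a BooleanQuery as a required clause
    public static TopDocs withBooleanQuery(IndexSearcher searcher, Query mainQuery) throws Exception {
        BooleanQuery bq = new BooleanQuery();
        bq.add(mainQuery, BooleanClause.Occur.MUST);
        bq.add(new TermRangeQuery("price", "010", "020", true, true), BooleanClause.Occur.MUST);
        return searcher.search(bq, 10);
    }
}

Roughly speaking, a Filter only restricts the result set and can be cached and reused across queries, while a required BooleanQuery clause also participates in scoring, so the choice mostly depends on whether the condition should influence ranking.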
2010/11/18 yang Yang m4ecli...@gmail.com
Thank you very much!!! :)
Thanks Ian,
Yup that would do the trick for me, it seems.
Also, I would like to mention that the following worked as well; I only
realized it after going through the scores from my results step by step:
KeywordAnalyzer + Index.ANALYZED (index-time norms were present)
Cheers!
On Thu, Nov 18,
Dear Lucene group,
I wrote my own scorer by extending Similarity. The scorer works quite
well, but I would like to ignore the fieldnorm value. Is this somehow
possible at search time, or do I have to add a field indexed with
norms omitted?
Best,
Philippe
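One thing worth noting (hedged, since this assumes the Lucene 3.x API): the norms are encoded into the index at write time, so as far as I can tell they cannot simply be switched off at search time for an existing index; the usual way out is either a field indexed with norms omitted, or a Similarity that neutralizes the length norm when indexing. A sketch of the latter:

import org.apache.lucene.search.DefaultSimilarity;

public class NoLengthNormSimilarity extends DefaultSimilarity {
    @Override
    public float lengthNorm(String fieldName, int numTokens) {
        return 1.0f; // ignore field length entirely when norms are computed
    }
}

// index-time usage (writer is a hypothetical IndexWriter you already have):
// writer.setSimilarity(new NoLengthNormSimilarity());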
Hi Michael,
Thanks for your answer and sorry for my late reply.
Are you using compound file format (the default)?
Yes, I am using the compound file format, as is the default.
If you turn that off (just for this test) do you still see that
IndexWriter is holding open the files (35 in your example)?
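In case it helps, this is roughly how the compound file format can be switched off for such a test (a sketch assuming the Lucene 3.0-era IndexWriter API and a made-up index path):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class NoCompoundFileExample {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("/path/to/index")),
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED);
        // write individual segment files (.frq, .prx, .tis, ...) instead of one .cfs
        writer.setUseCompoundFile(false);
        writer.close();
    }
}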
Hello,
I was wondering if there is any API call in Lucene that allows
something like the following:
Step 1: Take the user input
hello world you are beautiful
Step 2: QueryParser does its thing
defaultField:hello defaultField:world defaultField:you defaultField:are
defaultField:beautiful
Step 3: And somehow
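For reference, Steps 1 and 2 look roughly like this in code (a sketch assuming Lucene 3.x; the field name and analyzer choice are placeholders):

import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class ParseExample {
    public static void main(String[] args) throws Exception {
        // Step 1: the raw user input
        String input = "hello world you are beautiful";
        // Step 2: QueryParser turns it into one clause per token on the default field
        QueryParser parser = new QueryParser(Version.LUCENE_30, "defaultField",
                new WhitespaceAnalyzer());
        Query q = parser.parse(input);
        System.out.println(q); // defaultField:hello defaultField:world defaultField:you ...
    }
}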
Wow, you live in a really great country and attend an awesome
university where they have classes like Text Analytics. I'm gonna
send my kid there to study :)
In all seriousness I think the problem may be with how you are
collecting your results.
I find this very amusing:
80. 896889 phrase occurs
I briefly looked at your code, and there is no way that I'm right about
this, but I'll say it anyway:
none of the fields you index has any NORMS, so how will the
scoring happen?
It probably happens based on the matches at query time, but it's not
like you are specifying any boosts in your query.
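For what it's worth, specifying a query-time boost looks roughly like this (a sketch assuming Lucene 3.x; the field names and terms are made up):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class BoostExample {
    public static BooleanQuery build() {
        TermQuery title = new TermQuery(new Term("title", "analytics"));
        title.setBoost(2.0f); // a title match counts twice as much as a body match
        TermQuery body = new TermQuery(new Term("body", "analytics"));

        BooleanQuery q = new BooleanQuery();
        q.add(title, BooleanClause.Occur.SHOULD);
        q.add(body, BooleanClause.Occur.SHOULD);
        return q;
    }
}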
Hmm, OK, I tried it but to no avail.
To be honest, it would have confused me even more.
Actually, I would not have used a document Collector at all, because I
am supposed to return all results even when the query is just "the". What I mean is
that I would not need the score at all; I just didn't know how ;)
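If it helps, a Collector that ignores scoring entirely could look roughly like this (a sketch assuming the Lucene 3.x Collector API):

import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

public class AllDocsCollector extends Collector {
    private final List<Integer> docIds = new ArrayList<Integer>();
    private int docBase;

    @Override
    public void setScorer(Scorer scorer) {
        // the score is never read, so the scorer is ignored
    }

    @Override
    public void collect(int doc) {
        docIds.add(docBase + doc); // remember the global doc id
    }

    @Override
    public void setNextReader(IndexReader reader, int docBase) {
        this.docBase = docBase;
    }

    @Override
    public boolean acceptsDocsOutOfOrder() {
        return true; // order does not matter since every hit is kept
    }

    public List<Integer> getDocIds() {
        return docIds;
    }
}

// usage: searcher.search(query, new AllDocsCollector());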
I finally bucked up and made the change to CheckIndex to verify that I do not,
in fact, have any fields with norms in this index. The result is below: the
largest segment currently is #3, which has 300,000+ fields but no norms.
-Mark
Segments file=segments_acew numSegments=9
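For anyone who wants to reproduce this kind of check, CheckIndex can also be run programmatically (a sketch assuming Lucene 3.x; the same tool runs from the command line as java org.apache.lucene.index.CheckIndex /path/to/index):

import java.io.File;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.FSDirectory;

public class CheckIndexExample {
    public static void main(String[] args) throws Exception {
        CheckIndex checker = new CheckIndex(FSDirectory.open(new File(args[0])));
        CheckIndex.Status status = checker.checkIndex();
        System.out.println("clean=" + status.clean + " numSegments=" + status.numSegments);
    }
}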
: I am getting a malformed JSON response when using the
: stats component with a facet that returns a stddev value of NaN, e.g.
I'm not a JSON expert, but I suspect NaN just isn't legal JSON, and the
JSON response writer has a bug.
Quick Google search...