Andrzej Bialecki wrote:
Doug Cutting wrote:
The graph just shows that they differ, not how much better or worse they are, since the baseline is not perfect. When the top-10 is 50% different, are those 5 different hits markedly worse matches to your eye than the five they've displaced, or are they comparable? That's what really matters.

Hmm. I'm not sure I agree with this. Your reasoning would be true if we were changing the ranking formula. But the goal of these patches, IMHO, is to return equally complete results using the same ranking formula.

But we should not assume that the ranking formula is perfect. Imagine a case where the high-order bits of the score are correct and the low-order bits are random. Then an optimization which changes local orderings does not actually affect result quality.

I specifically avoided using normalized scores, instead using the absolute scores in TopDocs. And the absolute scores in both cases are exactly the same, for those results that are present.

What is wrong is that some results that should be there (judging by the ranking) are simply missing. So it's about recall, and the baseline index gives the best estimate of it.
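
Roughly, such a comparison could look like the sketch below. This is only a sketch, not the actual benchmark script: the "url" key field, the top-N cutoff, and the Lucene 1.x/2.x-era search(Query, Filter, int) API are assumptions.

// Sketch only: compare the baseline index's top-N against the optimized run,
// matching hits by a stored key field and comparing the absolute TopDocs scores.
// The "url" field name and the old-style Lucene API are assumptions.
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

import java.util.HashMap;
import java.util.Map;

public class TopNCompare {

  public static void compare(IndexSearcher baseline, IndexSearcher optimized,
                             Query query, int n) throws Exception {
    TopDocs base = baseline.search(query, null, n);
    TopDocs opt  = optimized.search(query, null, n);

    // Map each optimized hit's key to its absolute (unnormalized) score.
    Map<String, Float> optScores = new HashMap<String, Float>();
    for (int i = 0; i < opt.scoreDocs.length; i++) {
      ScoreDoc sd = opt.scoreDocs[i];
      optScores.put(optimized.doc(sd.doc).get("url"), sd.score);
    }

    int missing = 0;
    for (int i = 0; i < base.scoreDocs.length; i++) {
      ScoreDoc sd = base.scoreDocs[i];
      String key = baseline.doc(sd.doc).get("url");
      Float optScore = optScores.get(key);
      if (optScore == null) {
        missing++;                              // dropped by the optimization: lost recall
      } else if (optScore.floatValue() != sd.score) {
        System.out.println("score differs for " + key);
      }
    }
    System.out.println(missing + "/" + base.scoreDocs.length
        + " baseline hits missing from the optimized top-" + n);
  }
}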

Yes, this optimization, by definition, hurts recall. The only question is whether it substantially hurts relevance at, e.g., 10 hits. If the top-10 are identical then the answer is easy: no, it does not. But if they differ, we can only answer this by looking at results. Chances are they're worse, but how much? Radically? Slightly? Noticeably?

What part of Nutch are you trying to avoid? Perhaps you could try measuring your Lucene-only benchmark against a Nutch-based one. If they don't differ markedly then you can simply use Nutch, which makes it a stronger benchmark. If they differ, then we should figure out why.

Again, I don't see it this way. Nutch results will always be worse than pure Lucene, because of the added layers. If I can't improve the performance of the Lucene code (which takes > 85% of the time for every query), then no matter how well optimized the Nutch code is, it won't get far.

But we're mostly modifying Nutch's use of Lucene, not modifying Lucene. So measuring Lucene alone won't tell you everything, and you'll keep having to port Nutch stuff. If you want to, e.g., replay a large query log to measure average performance, then you'll need things like auto-filterization, n-grams, query plugins, etc., no?
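
For what it's worth, the bare replay loop is simple either way; here's a hypothetical Lucene-only sketch (the index path, query file, field name, and the old QueryParser/IndexSearcher constructors are all assumptions). The Nutch query plugins, auto-filterization, etc. would have to sit on top of something like this, which is exactly the part a Lucene-only harness misses:

// Hypothetical replay harness, Lucene-only: reads one query per line and reports
// the average search time. Field name, file layout, and the Lucene 1.x/2.x-era
// constructors are assumptions.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

import java.io.BufferedReader;
import java.io.FileReader;

public class ReplayQueryLog {
  public static void main(String[] args) throws Exception {
    IndexSearcher searcher = new IndexSearcher(args[0]);              // index directory
    QueryParser parser = new QueryParser("content", new StandardAnalyzer());
    BufferedReader log = new BufferedReader(new FileReader(args[1])); // one query per line

    long total = 0;
    int count = 0;
    String line;
    while ((line = log.readLine()) != null) {
      Query q = parser.parse(line);
      long start = System.currentTimeMillis();
      searcher.search(q, null, 10);                                   // fetch the top-10
      total += System.currentTimeMillis() - start;
      count++;
    }
    System.out.println("avg ms/query: " + (count == 0 ? 0 : total / count));
  }
}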

In several installations I use smaller values of slop (around 20-40). But this is motivated by better quality matches, not by performance, so I didn't test for this...

But that's a great reason to test for it! If lower slop can improve result quality, then we should certainly see if it also makes optimizations easier.

I forgot to mention this - the tests I ran already used the smaller values: the slop was set to 20.

Are they different if the slop is Integer.MAX_VALUE? It would be really good to determine what causes results to diverge, whether it is multiple terms (probably not), phrases (probably), and/or slop (perhaps). Chances are that the divergence is bad, that results are adversely affected, and that we need to try to fix it. But to do so we'll need to understand it.
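
One way to isolate the slop factor would be to build the same phrase clause at several slop values and re-run the top-10 comparison for each. Just a sketch; the field name, the terms, and the older PhraseQuery add(Term)/setSlop() API are assumptions:

// Hypothetical probe for isolating the slop factor: the same phrase at several
// slop values, including Integer.MAX_VALUE. Field name and terms are made up.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;

public class SlopProbe {
  static PhraseQuery sloppyPhrase(String field, String[] words, int slop) {
    PhraseQuery pq = new PhraseQuery();
    for (int i = 0; i < words.length; i++) {
      pq.add(new Term(field, words[i]));
    }
    pq.setSlop(slop);
    return pq;
  }

  public static void main(String[] args) throws Exception {
    String[] words = { "apache", "lucene" };              // made-up query terms
    int[] slops = { 0, 20, 40, Integer.MAX_VALUE };       // the values in question
    for (int i = 0; i < slops.length; i++) {
      PhraseQuery pq = sloppyPhrase("content", words, slops[i]);
      System.out.println("slop=" + slops[i] + " query=" + pq);
      // ...run the baseline-vs-optimized comparison with this query here...
    }
  }
}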

That's another advantage of using Lucene directly in this script - you can provide any query structure on the command line without changing the code in Nutch.

But that just means that we should set the SLOP constant in BasicQueryFilter.java from a configuration property, and permit the setting of configuration properties from the command line, no?
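
Something as small as this would do for the command-line part (sketch only; the property name and default are made up, and in Nutch the value would presumably come from the configuration rather than a raw system property):

// Sketch only: make the phrase slop configurable instead of a hard-coded constant.
// The property name "query.phrase.slop" is made up; override it from the command
// line with e.g. -Dquery.phrase.slop=20.
public class SlopConfig {
  public static final int SLOP =
      Integer.getInteger("query.phrase.slop", Integer.MAX_VALUE).intValue();
}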

Doug
