fthevenet commented on code in PR #12212:
URL: https://github.com/apache/lucene/pull/12212#discussion_r1150277185
##########
lucene/facet/src/test/org/apache/lucene/facet/TestDrillSideways.java:
##########
@@ -2039,4 +2012,72 @@ public void testExtendedDrillSidewaysResult() throws Exception {
writer.close();
IOUtils.close(searcher.getIndexReader(), taxoReader, taxoWriter, dir, taxoDir);
}
+
+ @Test
+ public void testDrillSidewaysSearchUseCorrectIterator() throws Exception {
+ Directory dir = newDirectory();
+ var iwc = new IndexWriterConfig(new Analyzer() {
+ @Override
+ protected Analyzer.TokenStreamComponents createComponents(final String fieldName) {
Review Comment:
That's fair. As you've noticed, the ngram tokenizer has nothing to do with
the root cause: it is only there to multiply the number of postings in the
docs, which in turn increases the probability that a consistent failure case
arises. But of course the same effect can be achieved by simply crafting the
index that way explicitly, and I agree that the extra code and dependency
just add noise and get in the way of understanding what the test is about.
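
For concreteness, here is a minimal sketch of what I mean by crafting the
index explicitly (the class name, field name, and sizes are all hypothetical,
just to illustrate the shape of it):

```java
// Hypothetical standalone sketch: index many distinct terms per document
// explicitly, which inflates the postings the same way the ngram tokenizer
// did implicitly. Names and sizes are made up for illustration.
import java.io.IOException;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class ExplicitPostingsSketch {
  public static void main(String[] args) throws IOException {
    try (Directory dir = new ByteBuffersDirectory();
        IndexWriter writer =
            new IndexWriter(dir, new IndexWriterConfig(new WhitespaceAnalyzer()))) {
      int numDocs = 100;      // tune until the failure reproduces consistently
      int termsPerDoc = 500;
      for (int i = 0; i < numDocs; i++) {
        Document doc = new Document();
        StringBuilder sb = new StringBuilder();
        for (int t = 0; t < termsPerDoc; t++) {
          // 500 distinct terms per doc, each shared across all docs: every
          // term ends up with a long postings list, no ngrams needed.
          sb.append("term").append(t).append(' ');
        }
        doc.add(new TextField("content", sb.toString(), Field.Store.NO));
        writer.addDocument(doc);
      }
    }
  }
}
```

Something along those lines would keep the test self-contained and drop the
extra analysis dependency.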
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]