Hello Lucene Developers,

I’m writing because I have a question regarding the Tokenizer-related test
code.


I was looking at the code at the link below:

https://github.com/apache/lucene/blob/7b4b0238d7048a0f8532ce55afb72f89dfd69b1c/lucene/test-framework/src/java/org/apache/lucene/tests/analysis/BaseTokenStreamTestCase.java#L1547-L1558


I noticed that the `newAttributeFactory` method, which is used for
Tokenizer testing, contains a random element. I was wondering why
randomness was introduced here.

From my understanding, random elements should not be included in tests
because they can produce different results on multiple test runs. Was there
a specific reason for this?

If anyone is familiar with the history of this, I’d really appreciate your
insight.


Thank you.
