I think I have found out what is going on:

The reason the indexer works while "storing" is that there is actually 
only one thread storing the data.
When the repository starts, however, the indexer starts multiple 
"ParsingTasks". I think that if many sessions were storing data 
concurrently, there would likewise be many indexer threads, and the 
same error would come up.

And the implementation of AbstractStringBuilder.expandCapacity() is very 
dangerous: if the char[] already holds 500 MB and we need 510 MB, 
expandCapacity() is called and the new char[] is allocated at twice the 
old size, i.e. 1000 MB, which triggers the OutOfMemoryError (or several 
ParsingTasks each expand from 100 to 200 MB at the same time).
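
For reference, this is roughly what that method does in the JDK 6 era 
source; the class below is my simplified reconstruction (the field name 
follows the JDK, the demo size is just my example), not the exact code, 
which may differ between JDK versions:

    // Simplified reconstruction of java.lang.AbstractStringBuilder
    // growth logic; 'value' is the backing char[] of the builder.
    class GrowDemo {
        private char[] value = new char[250 * 1024 * 1024]; // ~500 MB (2 bytes per char)

        void expandCapacity(int minimumCapacity) {
            // Always at least doubles, no matter how little extra is needed:
            int newCapacity = (value.length + 1) * 2;
            if (newCapacity < 0) {                      // int overflow
                newCapacity = Integer.MAX_VALUE;
            } else if (minimumCapacity > newCapacity) {
                newCapacity = minimumCapacity;
            }
            // The old array stays reachable while the copy is made, so the
            // peak heap demand is old size + new size (~1.5 GB in my
            // 500 MB example).
            value = java.util.Arrays.copyOf(value, newCapacity);
        }
    }

So asking for 10 MB more on top of a 500 MB buffer allocates a second 
1000 MB array while the first one is still live.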

Can someone tell me how you index very large files? Or is it a bad idea 
in general to full-text index such big files?

Thanks in advance,
Ulrich
