On Tue, 2014-05-20 at 15:04 +0200, De Simone, Alessandro wrote:

Toke:
> > Using the calculator, I must admit that it is puzzling that you have
> > 2432 / 143 = 17.001 times the amount of seeks with 16 segments.
>
> Do you have any clue? Is there something I could test?
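As a rough sketch of something that could be tested (assuming Lucene 4.x; the index path below is a placeholder), the number of segments the searcher actually sees can be read directly from an open reader:

    import java.io.File;

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    // Minimal sketch: print how many segments (leaves) a reader sees.
    // "/path/to/index" is a placeholder - point it at the real index directory.
    public class SegmentCount {
        public static void main(String[] args) throws Exception {
            Directory dir = FSDirectory.open(new File("/path/to/index")); // Lucene 4.x signature
            DirectoryReader reader = DirectoryReader.open(dir);
            System.out.println("Segments (leaves): " + reader.leaves().size());
            System.out.println("Live documents:    " + reader.numDocs());
            reader.close();
            dir.close();
        }
    }

If the segment count reported there differs from what the calculator was fed, that alone could account for part of the discrepancy.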
If your segmented index was markedly larger than the optimized one, I would say you had a lot of redundancy across segments, but that is not the case. Alas, someone with better knowledge of Lucene internals will have to step up.

> I don’t have the budget to change the hardware and it would be
> difficult for me to justify replacing a working hardware just to handle
> the same amount of data :-(

You are changing a system from being heavily optimized towards search to one balanced between updates and search. There seems to be an assumption that this will come without a change to hardware requirements, which I find quite optimistic.

> Anyway, I certainly would have noticed a performance hit sooner or later if I
> had a SSD.

That is trivially true for any hardware. The question is how much scale an upgrade will buy you. We have been using SSDs in our search servers since late 2008. Some observations you might find relevant:
https://sbdevel.wordpress.com/2013/06/06/memory-is-overrated/

- Toke Eskildsen, State and University Library, Denmark

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org