Heh, I just don't think that's the typical case. It's definitely extreme. Even still, in many cases using the filesystem (properly warmed) with compression might still be better. It depends on how you are measuring latency. Storing your whole index in gigabytes of heap RAM without any compression on a huge heap has consequences too.
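For what it's worth, here is a minimal sketch of that filesystem-backed setup: memory-map the index with MMapDirectory so the (properly warmed) OS page cache serves reads, and keep the default compressed codec. The index path is made up, and this assumes a Lucene 5.x MMapDirectory that takes a Path:

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.MMapDirectory;

public class WarmedFsSearch {
  public static void main(String[] args) throws Exception {
    // Memory-map the index: hot postings live in the OS page cache,
    // not the Java heap, and stay compressed on disk.
    MMapDirectory dir = new MMapDirectory(Paths.get("/path/to/index"));
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      // ... run queries here; postings are decoded on demand per query
    }
  }
}

The point being that the hot data sits in the page cache rather than on the Java heap, so you avoid the GC consequences of a huge heap.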
On Thu, Feb 12, 2015 at 4:52 PM, Benson Margulies <ben...@basistech.com> wrote:
> WHOOPS.
>
> First sentence was, until just before I clicked 'send',
>
> "Hardware has .5T of RAM. Index is relatively small (20g) ..."
>
>
> On Thu, Feb 12, 2015 at 4:51 PM, Benson Margulies <ben...@basistech.com>
> wrote:
>> Robert,
>>
>> Let me lay out the scenario.
>>
>> Hardware has .5T of Index is relatively small. Application profiling
>> shows a significant amount of time spent codec-ing.
>>
>> Options as I see them:
>>
>> 1. Use DPF complete with the irritation of having to have this
>> spurious codec name in the on-disk format that has nothing to do with
>> the on-disk format.
>> 2. 'Officially' use the standard codec, and then use something like
>> AOP to intercept and encapsulate it with the DPF or something else
>> like it -- essentially, a do-it-myself alternative to convincing the
>> community here that this is a use case worthy of support.
>> 3. Find some way to move a significant amount of the data in question
>> out of Lucene altogether into something else which fits nicely
>> together with filling memory with a cache so that the amount of
>> codec-ing drops below the threshold of interest.
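To make option 1 from Benson's list concrete, a rough sketch (the class and codec name here are my own invention): wrap the default codec in a FilterCodec whose postingsFormat() returns DirectPostingsFormat. The name passed to super() is what gets recorded in the on-disk segment metadata -- exactly the spurious-name irritation described above -- and it also has to be resolvable via SPI (a META-INF/services/org.apache.lucene.codecs.Codec entry) when the index is opened for reading:

import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.PostingsFormat;
import org.apache.lucene.codecs.memory.DirectPostingsFormat;

// Hypothetical wrapper codec: delegates everything to the default codec
// except postings, which DirectPostingsFormat loads into RAM as
// uncompressed arrays when each segment is opened.
public class DirectWrapperCodec extends FilterCodec {
  private final PostingsFormat postings = new DirectPostingsFormat();

  public DirectWrapperCodec() {
    // "DirectWrapper" is the spurious name that ends up in the segment
    // info, even though the postings bytes on disk are still written in
    // the standard delegate format.
    super("DirectWrapper", Codec.getDefault());
  }

  @Override
  public PostingsFormat postingsFormat() {
    return postings;
  }
}

At write time you would install it with IndexWriterConfig.setCodec(new DirectWrapperCodec()).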