Re: Nio File Caching Performance Test
My search process is using MMapDirectory on a read-only index via:
-Dorg.apache.lucene.FSDirectory.class=org.apache.lucene.store.MMapDirectory
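That system property is typically passed on the JVM command line when launching the search process; a hypothetical invocation (the jar and main-class names here are illustrative, not from the thread):

```shell
java -Dorg.apache.lucene.FSDirectory.class=org.apache.lucene.store.MMapDirectory \
     -cp lucene-core.jar:app.jar com.example.Searcher
```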
Another indexing process is building the next version of the index in a
different directory. When it's time
Hi,
According to my humble tests, there is no significant improvement
either. NIO has buffer-creation costs compared to other buffered I/O.
However, a shared testbed would be ideal for benchmarks.
Murat
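Murat's point about buffer-creation cost can be sketched in plain java.nio terms (a hypothetical illustration, not code from the thread): allocating a direct ByteBuffer on every read pays the allocation cost each time, while allocating once and reusing it avoids that cost entirely.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of NIO buffer-creation cost (not code from the
// thread): direct buffers are expensive to allocate, so a reader that
// creates one per call pays far more than one that reuses a buffer.
public class BufferReuse {
    // Pays the direct-allocation cost on every call.
    public static ByteBuffer fresh(int size) {
        return ByteBuffer.allocateDirect(size);
    }

    private static final ByteBuffer SHARED = ByteBuffer.allocateDirect(4096);

    // Reuses one allocation; clear() only resets position and limit.
    public static ByteBuffer reused() {
        SHARED.clear();
        return SHARED;
    }
}
```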
Doug Cutting wrote:
Robert Engels wrote:
From: Yonik Seeley [EMAIL PROTECTED]
To: java-dev@lucene.apache.org; [EMAIL PROTECTED]
Sent: Tuesday, 16 May, 2006 6:10:07 PM
Subject: Re: Nio File Caching Performance Test
On 5/16/06, Robert Engels [EMAIL PROTECTED] wrote:
So, I would like to use a memory mapped reader, but I encounter OOM errors
when mapping large files
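One common workaround for those OOM errors can be sketched in plain java.nio (an illustrative class, not Lucene code; the chunk size is an assumption): map the file as a series of fixed-size windows rather than one giant buffer, since a single FileChannel.map() call is limited to Integer.MAX_VALUE bytes and one huge mapping strains the address space, especially on 32-bit JVMs.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical sketch: map a large file as fixed-size windows instead of
// a single MappedByteBuffer, sidestepping the 2 GB-per-map limit.
public class ChunkedMapper {
    static final long CHUNK = 1 << 28; // 256 MB per window (illustrative)

    public static MappedByteBuffer[] map(String path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel ch = raf.getChannel()) {
            long size = ch.size();
            int n = (int) ((size + CHUNK - 1) / CHUNK);
            MappedByteBuffer[] maps = new MappedByteBuffer[Math.max(n, 1)];
            for (int i = 0; i < maps.length; i++) {
                long off = i * CHUNK;
                long len = Math.max(Math.min(CHUNK, size - off), 0);
                maps[i] = ch.map(FileChannel.MapMode.READ_ONLY, off, len);
            }
            return maps; // mappings stay valid after the channel closes
        }
    }

    // Read one byte at an absolute position by picking the right window.
    public static byte get(MappedByteBuffer[] maps, long pos) {
        return maps[(int) (pos / CHUNK)].get((int) (pos % CHUNK));
    }
}
```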
On May 12, 2006, at 3:38 PM, Robert Engels wrote:
I finally got around to making the NioFSDirectory with caching 1.9
compliant. I also produced a performance test case.
How does this implementation compare to the MMapDirectory?
I've found that the MMapDirectory is far faster than the
On May 15, 2006, at 5:41 PM, Robert Engels wrote:
As stated in the email, it is 3x faster reading from a Java local cache
than having Java go to the OS (where the data may or may not be cached).
It avoids the overhead of a context switch into the OS.
I read that in the original mail, but your
Robert Engels wrote:
The most important statistic is that the reading via the local cache, vs.
going to the OS (where the block is cached) is 3x faster (22344 vs. 68578).
With random reads, when the block may not be in the OS cache, it is 8x
faster (72766 vs. 586391).
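The "local cache" Robert describes can be sketched as a heap-resident block cache over a FileChannel (an illustrative toy, not the NioFSDirectory implementation from the thread; the block size and LRU policy are assumptions): a cache hit is a pure array access on the Java heap, while a miss pays the read() system call and its user/kernel context switch.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch (not the thread's NioFSDirectory code): cache
// fixed-size file blocks on the Java heap so repeated reads skip the
// read() system call entirely.
public class BlockCache {
    static final int BLOCK = 4096; // assumed block size
    private final FileChannel channel;
    private final int maxBlocks;
    private final Map<Long, byte[]> cache;

    public BlockCache(FileChannel channel, int maxBlocks) {
        this.channel = channel;
        this.maxBlocks = maxBlocks;
        // access-ordered LinkedHashMap gives a simple LRU eviction policy
        this.cache = new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> e) {
                return size() > BlockCache.this.maxBlocks;
            }
        };
    }

    public byte readByte(long pos) throws IOException {
        long blockNo = pos / BLOCK;
        byte[] block = cache.get(blockNo);
        if (block == null) {                   // miss: one trip to the OS
            block = new byte[BLOCK];
            channel.read(ByteBuffer.wrap(block), blockNo * BLOCK);
            cache.put(blockNo, block);
        }
        return block[(int) (pos % BLOCK)];     // hit: pure heap access
    }
}
```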
[ ... ]
This test only