Understand why NRT performance is affected by flush frequency
-------------------------------------------------------------
Key: LUCENE-2143
URL: https://issues.apache.org/jira/browse/LUCENE-2143
Project: Lucene - Java
Issue Type: Bug
Components: Index
Reporter: Michael McCandless
Assignee: Michael McCandless
Fix For: 3.1
In LUCENE-2061 (perf tests for NRT), I test NRT performance by first
getting a baseline QPS with searching only, using enough threads to
saturate the machine.
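Roughly, the baseline measurement is a loop like the sketch below;
searcher, queries, numThreads and the run duration are placeholders,
not the actual LUCENE-2061 test code:

    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;

    // Baseline: numThreads threads run a fixed query set for runMillis;
    // QPS = total queries / elapsed seconds.
    static double measureQps(final IndexSearcher searcher, final Query[] queries,
                             int numThreads, final long runMillis) throws Exception {
      final AtomicLong count = new AtomicLong();
      final long endTime = System.currentTimeMillis() + runMillis;
      Thread[] threads = new Thread[numThreads];
      for (int i = 0; i < numThreads; i++) {
        threads[i] = new Thread() {
          public void run() {
            int q = 0;
            try {
              while (System.currentTimeMillis() < endTime) {
                // Round-robin through the query set, top-10 hits per query
                searcher.search(queries[q++ % queries.length], 10);
                count.incrementAndGet();
              }
            } catch (Exception e) {
              throw new RuntimeException(e);
            }
          }
        };
        threads[i].start();
      }
      for (Thread t : threads) {
        t.join();
      }
      return count.get() * 1000.0 / runMillis;
    }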
Then I pick an indexing rate (I used 100 docs/sec) and index docs at
that rate while also reopening an NRT reader at different frequencies
(10/sec, 1/sec, every 5 seconds, etc.), and then measure saturated QPS
again.
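The reopen side of the test amounts to a loop like the following
sketch, using the 3.x-era writer.getReader() / reader.reopen() NRT
APIs; makeDoc() is a stand-in for the real document source, and the
simple sleep-based pacing is an assumption, not the benchmark's code:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.IndexWriter;

    // Index at a fixed docs/sec rate while reopening an NRT reader at a
    // fixed reopens/sec rate, for runMillis total.
    static void runNrt(IndexWriter writer, double docsPerSec,
                       double reopensPerSec, long runMillis) throws Exception {
      long docInterval = (long) (1000.0 / docsPerSec);     // 10 ms at 100 docs/sec
      long reopenInterval = (long) (1000.0 / reopensPerSec);
      long start = System.currentTimeMillis();
      long nextDoc = start, nextReopen = start;
      IndexReader reader = writer.getReader();             // initial NRT reader
      while (System.currentTimeMillis() - start < runMillis) {
        long now = System.currentTimeMillis();
        if (now >= nextDoc) {
          writer.addDocument(makeDoc());                   // index one doc on schedule
          nextDoc += docInterval;
        }
        if (now >= nextReopen) {
          IndexReader newReader = reader.reopen();         // picks up writer's changes
          if (newReader != reader) {
            reader.close();
            reader = newReader;                            // searches now see new docs
          }
          nextReopen += reopenInterval;
        }
        Thread.sleep(1);                                   // crude pacing
      }
      reader.close();
    }

    // Stand-in for the real document source used by the test.
    static Document makeDoc() {
      return new Document();
    }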
I think this is a good approach for testing NRT -- apps can see, for a
fixed indexing rate and as a function of "freshness", what the cost to
QPS is. You'd expect that as the indexing rate and the freshness both
go up, QPS will go down.
But I found something very strange: the low-frequency reopen rates
often caused a sizable hit to QPS. When I forced IW to flush every
100 docs (= once per second at this indexing rate), the performance
was generally much better.
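For reference, forcing that flush pattern is just a matter of the
buffering settings; a sketch against the 3.0-era IndexWriter setters:

    // Flush a new segment every 100 buffered docs instead of by RAM usage.
    writer.setRAMBufferSizeMB(IndexWriter.DISABLE_AUTO_FLUSH); // turn off RAM trigger
    writer.setMaxBufferedDocs(100);                            // flush per 100 docs

Without this, IW buffers docs until the RAM buffer trips, so segments
are flushed in larger, less frequent batches.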
I actually would've expected the reverse -- flushing in batches ought
to use fewer resources.
One theory is that something is odd about my test environment
(OpenSolaris), so I'd like to retest on a more mainstream OS.
I'm opening this issue to get to the bottom of it...