The use model is to handle bursts of heavy query load with acceptable
latency. The test program's threads send requests continuously, with a
50 msec delay between requests, and a separate thread reads all the results.
This is actually much harsher than the expected load and the latency is
high, but it helps in measuring the limits of the system.
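The test harness described above (many sender threads with a fixed inter-request delay, plus one reader thread draining all results) can be sketched roughly as follows. This is a minimal illustration, not the actual test program; the thread counts, the `send_query` stub, and the queue-based result channel are all assumptions standing in for the real socket client:

```python
import threading
import time
import queue

NUM_THREADS = 5        # the real test uses >50; kept small for illustration
REQUESTS_PER_THREAD = 10
DELAY_SECS = 0.050     # 50 msec delay between requests, as described

results = queue.Queue()

def send_query(thread_id, i):
    # Stand-in for a real query sent over a socket to the search server.
    return f"result-{thread_id}-{i}"

def worker(thread_id):
    # Each sender thread issues requests continuously with a fixed delay.
    for i in range(REQUESTS_PER_THREAD):
        results.put(send_query(thread_id, i))
        time.sleep(DELAY_SECS)

def reader(expected, collected):
    # A single separate thread reads all the results.
    for _ in range(expected):
        collected.append(results.get())

collected = []
reader_t = threading.Thread(
    target=reader, args=(NUM_THREADS * REQUESTS_PER_THREAD, collected))
reader_t.start()
workers = [threading.Thread(target=worker, args=(t,))
           for t in range(NUM_THREADS)]
for w in workers:
    w.start()
for w in workers:
    w.join()
reader_t.join()
print(len(collected))  # 50
```

Note this is a closed-loop generator with a fixed pacing delay, so the offered load scales with the thread count rather than with a target request rate.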

Peter

On 5/19/06, Yonik Seeley <[EMAIL PROTECTED]> wrote:

On 5/19/06, Peter Keegan <[EMAIL PROTECTED]> wrote:
> The client test program blasts queries from >50 threads over a socket
> and runs on a separate server from Lucene. I can get much higher rates by
> just blasting from a single thread in the client, but this doesn't
> simulate the real use model.

Wow... what is the real use model?  Do you mean 50 threads each making
requests as fast as they can (sending a new request as soon as they
get a response from the previous)?

Normally, if you have 50 outstanding requests at a time, your server
is clearly overloaded and you need more servers...

Do you get acceptable latency with 50 clients at a time?


-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server
