On 8/14/12 4:57 PM, Brendan Crowley wrote:
The client complains that it times out waiting for the response, so the response
must still be sitting in the thread/queue.
If the client stops its load test (stops sending requests), the server is seen
to carry on and eventually process the outstanding responses.
Rates currently tested are in the region of 4,000 transactions per second. We
would hope to aim for 10,000 transactions per second.
The architecture is pretty simple, as you can see in the applicationContext beans in
my earlier email.
Here, what is going to happen is that the client will totally flood
the server with requests. The ExecutorFilter will process all the
incoming requests in separate threads, and the responses will be pushed
back into a queue until they can all be written to the client.
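Roughly, the setup you describe looks like this (a minimal sketch, not your
actual applicationContext configuration; the port, pool size and process()
helper are placeholders):

  import org.apache.mina.core.service.IoHandlerAdapter;
  import org.apache.mina.core.session.IoSession;
  import org.apache.mina.filter.executor.ExecutorFilter;
  import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

  public class Server {
      public static void main(String[] args) throws Exception {
          NioSocketAcceptor acceptor = new NioSocketAcceptor();

          // Incoming events are handed off to this thread pool; the
          // IoProcessor thread is released as soon as the event is queued.
          acceptor.getFilterChain().addLast("executor", new ExecutorFilter(16));

          acceptor.setHandler(new IoHandlerAdapter() {
              @Override
              public void messageReceived(IoSession session, Object message) {
                  // Runs in an ExecutorFilter thread; session.write() only
                  // enqueues the response, the actual flush is done later
                  // by the IoProcessor.
                  session.write(process(message));
              }
          });

          acceptor.bind(new java.net.InetSocketAddress(8080));
      }

      // Placeholder for the real business logic.
      private static Object process(Object request) {
          return request;
      }
  }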
The clients are handled by an IoProcessor, which loops on the
selector, first processing all the reads, then flushing all the pending
writes back to the clients. If the client does not read fast enough, the socket
send buffer will fill up, and we will loop again, waiting for an OP_WRITE event
before being able to write any more data. But if the client has sent data
in the meantime, that incoming data will still be processed.
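This is not the actual MINA code, just a rough raw-NIO sketch of the flush
side of that loop (the per-session writeQueue is a hypothetical name), to
show where the memory goes when the client is slow:

  import java.nio.ByteBuffer;
  import java.nio.channels.SelectionKey;
  import java.nio.channels.SocketChannel;
  import java.util.Queue;

  class FlushSketch {
      // Drains the session's write queue until it is empty or the
      // client's socket send buffer is full.
      static void flush(SelectionKey key, Queue<ByteBuffer> writeQueue)
              throws Exception {
          SocketChannel channel = (SocketChannel) key.channel();
          ByteBuffer buf;
          while ((buf = writeQueue.peek()) != null) {
              channel.write(buf);
              if (buf.hasRemaining()) {
                  // The client is not reading fast enough: stop here and
                  // ask the selector for OP_WRITE. Every response queued
                  // in the meantime just piles up in writeQueue, which is
                  // why memory keeps growing until the load test stops.
                  key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
                  return;
              }
              writeQueue.remove();
          }
          // Everything flushed: no need to be woken up for OP_WRITE anymore.
          key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
      }
  }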
It's obviously easy to see how this can lead to an OOM.
What if you remove the ExecutorFilter from the chain, and use more
IoProcessors instead?
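Something like this, as a minimal sketch (the processor count, port and
handler body are just placeholders to tune under load):

  import org.apache.mina.core.service.IoHandlerAdapter;
  import org.apache.mina.core.session.IoSession;
  import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

  public class MoreProcessorsServer {
      public static void main(String[] args) throws Exception {
          // More IoProcessors than the default (CPU count + 1); the exact
          // number is just an example.
          int processorCount = 2 * Runtime.getRuntime().availableProcessors();
          NioSocketAcceptor acceptor = new NioSocketAcceptor(processorCount);

          // No ExecutorFilter: messageReceived() runs in the IoProcessor
          // thread, so it must stay short and must not block.
          acceptor.setHandler(new IoHandlerAdapter() {
              @Override
              public void messageReceived(IoSession session, Object message) {
                  session.write(message); // placeholder for the real handler
              }
          });

          acceptor.bind(new java.net.InetSocketAddress(8080));
      }
  }

That way the client is naturally throttled by TCP instead of by an
unbounded internal queue.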
--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com