I am building a standalone Java application to generate load on a
system, simulating real-world conditions.
The application is multithreaded, using the java.util.concurrent
framework to maintain a pool of threads, each of which runs "sessions".
When a session is complete, its runnable ends and the thread is
returned to the pool. Each "session" consists of the following:
generate HTTP PUT #1 to server
wait x seconds (randomized within logical limit)
generate HTTP PUT #2 to server
generate HTTP PUT #3 to server
wait y seconds (randomized within logical limit)
generate HTTP PUT #4 to server
generate HTTP PUT #5 to server
Altogether, each session takes approximately two minutes.
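In code, a session runnable is structured roughly like this (just a
sketch; the wait bounds are made up and doPut() stands in for the real
request code):

    import java.util.Random;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Sketch of one session as described above. The wait bounds are
    // placeholders, and doPut() stands in for the real request code.
    public class Session implements Runnable {
        private static final Random RAND = new Random();

        public void run() {
            try {
                doPut(1);            // HTTP PUT #1
                sleepRandom(5, 15);  // wait x seconds (bounds are made up)
                doPut(2);            // HTTP PUT #2
                doPut(3);            // HTTP PUT #3
                sleepRandom(60, 90); // wait y seconds (bounds are made up)
                doPut(4);            // HTTP PUT #4
                doPut(5);            // HTTP PUT #5
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private void doPut(int n) {
            // issue HTTP PUT #n to the server (elided here)
        }

        private static void sleepRandom(int minSecs, int maxSecs)
                throws InterruptedException {
            Thread.sleep((minSecs + RAND.nextInt(maxSecs - minSecs + 1)) * 1000L);
        }

        public static void main(String[] args) {
            ExecutorService pool = Executors.newCachedThreadPool();
            pool.submit(new Session()); // each Session runs on a pooled thread
            pool.shutdown();
        }
    }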
Each thread created by the concurrent pool maintains a connection
(embodied in an HttpClient) which is reused by subsequent sessions.
After each request, a call to HttpRequest.releaseConnection() is made,
as recommended by many online sources.
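In case the details matter, the per-request pattern looks roughly like
this (a sketch assuming Apache HttpClient 4.3+; the helper name, URI,
and payload are placeholders of mine):

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpPut;
    import org.apache.http.entity.StringEntity;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.util.EntityUtils;

    class PutHelper {
        // One PUT against the shared per-thread client; the connection
        // is released back to the client's pool afterwards.
        static void doPut(CloseableHttpClient client, String uri, String payload)
                throws Exception {
            HttpPut put = new HttpPut(uri);
            put.setEntity(new StringEntity(payload));
            CloseableHttpResponse response = client.execute(put);
            try {
                EntityUtils.consume(response.getEntity()); // drain the body
            } finally {
                response.close();
                put.releaseConnection(); // hand the connection back for reuse
            }
        }
    }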
It all works pretty well. But perhaps it works TOO well.
While keeping the connections open and releasing them back to the pool
gives optimal performance, I am building a simulator, so I really DON'T
want optimal performance. I want to simulate SUBOPTIMAL performance: I
want the server to go through connection establishment for each
session, as if the sessions were coming from different computers rather
than from the same one.
I want to create the connection (embodied in an HttpClient) at the
beginning of each session and close it at the end of the session.
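Concretely, what I have in mind is something like this sketch (assuming
HttpClient 4.3+, where the client itself is Closeable; the body of the
session is elided):

    import java.io.IOException;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;

    class OneShotSession implements Runnable {
        public void run() {
            // Fresh client (and hence fresh connection) for this session only.
            CloseableHttpClient client = HttpClients.createDefault();
            try {
                // ... the five PUTs and two randomized waits go here ...
            } finally {
                try {
                    client.close(); // tears down the client's connections
                } catch (IOException ignored) {
                }
            }
        }
    }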
It seems that HttpClient uses connection pooling by default. How do I
defeat this? I've tried closing the connection at the completion of
each session, but then the client can no longer be used for subsequent
sessions.
What is the correct way to achieve the simulation I am trying to create
with HttpClient?
Thanks.