Hi Michele,
That's interesting - I was wondering what you were doing such that you didn't
need to wait for a response before sending the next request. JMeter isn't built
to handle that at this point.
You should understand that a thread that's waiting for a response to one request
simply can't send another one until it gets the first response - these reads are
blocking (leaving the new non-blocking I/O in Java 1.4 out of it for the moment).
Therefore, you would still need to create as many threads as you need
simultaneous connections.
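The blocking behaviour above can be demonstrated with plain java.net sockets (a
self-contained sketch, not JMeter code - the local server and its 200 ms delay
are arbitrary stand-ins for a real server's response time):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingDemo {

    // Start a tiny local server that delays ~200 ms before answering
    // each request (stand-in for real server response time).
    static int startServer() throws IOException {
        ServerSocket server = new ServerSocket(0);
        Thread t = new Thread(() -> {
            while (true) {
                try {
                    Socket s = server.accept();
                    new Thread(() -> {
                        try (Socket c = s) {
                            c.getInputStream().read();
                            Thread.sleep(200);            // simulated server work
                            c.getOutputStream().write('!');
                        } catch (Exception ignored) { }
                    }).start();
                } catch (IOException e) {
                    return;
                }
            }
        });
        t.setDaemon(true);
        t.start();
        return server.getLocalPort();
    }

    // One round trip: read() blocks until the server responds.
    static void request(int port) throws IOException {
        try (Socket c = new Socket("localhost", port)) {
            c.getOutputStream().write('?');
            c.getInputStream().read();                    // blocking read
        }
    }

    // Returns {sequential ms, parallel ms}.
    static long[] runDemo() throws Exception {
        int port = startServer();

        // One thread, two requests: the second cannot start until the
        // first's blocking read completes -> roughly 2 x 200 ms.
        long start = System.currentTimeMillis();
        request(port);
        request(port);
        long sequential = System.currentTimeMillis() - start;

        // Two threads, one request each: the blocking waits overlap,
        // so two simultaneous connections need two threads.
        start = System.currentTimeMillis();
        Thread a = new Thread(() -> { try { request(port); } catch (IOException ignored) { } });
        Thread b = new Thread(() -> { try { request(port); } catch (IOException ignored) { } });
        a.start(); b.start();
        a.join(); b.join();
        long parallel = System.currentTimeMillis() - start;

        return new long[] { sequential, parallel };
    }

    public static void main(String[] args) throws Exception {
        long[] t = runDemo();
        System.out.println("sequential ms: " + t[0] + ", parallel ms: " + t[1]);
    }
}
```

The sequential pair takes about twice as long as the parallel pair, which is why
the thread count caps the number of in-flight requests.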
What you might get away with is running JMeter with a very high number of
threads, including a highly variable timer, and tuning it until the average
throughput is what you want. That way there will be periods of greater stress
on your server and periods of less.
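One way to get that variability is a Gaussian-style timer: each thread sleeps a
constant offset plus normally distributed noise between requests. A rough sketch
of the idea (not JMeter's actual source - the 300 ms offset and 100 ms deviation
here are arbitrary example values):

```java
import java.util.Random;

// Sketch of a highly variable inter-request timer: delays average around
// offsetMs but swing widely, so load arrives in bursts rather than evenly.
public class VariableTimer {
    private static final Random RNG = new Random();

    // Next delay in ms: constant offset plus Gaussian noise, clamped at zero.
    static long nextDelay(long offsetMs, double deviationMs) {
        return Math.max(0, Math.round(offsetMs + RNG.nextGaussian() * deviationMs));
    }

    public static void main(String[] args) {
        // Sample a few delays a load-generating thread would sleep between votes.
        for (int i = 0; i < 5; i++) {
            System.out.println(nextDelay(300, 100) + " ms");
        }
    }
}
```

Over many requests the delays average out near the offset, which is what lets
you dial the long-run throughput while still producing short bursts well above
it.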
-Mike
On 15 Oct 2002 at 10:28, Michele Curioni wrote:
> Michael,
> I've developed an application to collect votes from users;
> the user doesn't need to browse in the app, just to send
> a vote. In this scenario I want to test how many votes per second
> the application can handle before going into overload.
> So a thread must represent a vote, and the next request of the same
> thread must be a vote from another user. That is why I don't need a
> delay between two requests of the same user, but something that simulates
> a new user every time.
>
> Without setting any Timer I could probably work out how many votes
> per second the app can register by increasing the number of threads
> until the avg response time is 1 sec.
> But I cannot test what would happen if the number of votes per second
> were higher than that for a while - for example, how long the app would
> cope with a high vote frequency before timing out transactions, or
> worse, before causing deadlocks in the database.
>
> Thanks,
> Michele
>
>
--
Michael Stover
[EMAIL PROTECTED]
Yahoo IM: mstover_ya
ICQ: 152975688