I understand what you're telling me. I think the problem is a combination of things:

1.- The server has to queue all the requests. Even if it has enough
threads to handle all of them, only one thread can be executing at a
given time.
2.- JMeter has the same problem when launching all the requests.
3.- The server needs CPU time to process incoming requests. This means
it may be processing one request while another is queued but not yet logged.
4.- The same applies to JMeter: some threads can be processing
responses while others haven't sent their requests yet.

Maybe all of this makes it almost impossible to get better results.
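
As an illustration (plain Java, not JMeter; the URL, port and numbers are
assumptions), even if a barrier releases all 100 threads at the same
instant, the connection setup and sends still end up serialized by the
CPU and the network stack:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.CountDownLatch;

// Minimal sketch, not JMeter: release N request threads at the same instant
// with a barrier. The target URL and N are hypothetical.
public class BurstClient {
    public static void main(String[] args) throws Exception {
        final int n = 100;
        final CountDownLatch start = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(n);

        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    start.await(); // every thread blocks here until released
                    URL url = new URL("http://localhost:8080/test?req=" + id);
                    HttpURLConnection con = (HttpURLConnection) url.openConnection();
                    con.getResponseCode(); // connection setup + request + response
                    con.disconnect();
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    done.countDown();
                }
            }).start();
        }

        start.countDown(); // release all threads "at once"
        done.await();      // the requests still leave the machine one after another
    }
}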

Thanks again, Peter!

On Fri, 2005-12-09 at 16:59, Peter Lin wrote:
> your explanation helps, but here's the thing. Say I want to simulate the /.
> effect. If 5K people all hit /. at the same exact nanosecond, all the
> connections will still be queued up by the server and the webserver will
> process them one at a time. As soon as a server thread/process starts to
> process a request, it's going to slow down the processing for all
> subsequent requests.
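
(A toy model of the queuing Peter describes, with assumed numbers: 4 worker
threads and 50 ms of work per request. It is not a real web server, it just
shows how 100 "simultaneous" submissions still start a few at a time.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model of a server: a small worker pool with an unbounded queue.
// The pool size and the 50 ms of simulated work are assumptions.
public class QueueDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        long t0 = System.currentTimeMillis();

        for (int i = 0; i < 100; i++) {
            final int req = i;
            workers.submit(() -> {
                long start = System.currentTimeMillis() - t0;
                System.out.println("request " + req + " starts at " + start + " ms");
                try {
                    Thread.sleep(50); // simulated request processing
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
        // The start times spread out over more than a second even though all
        // 100 requests were submitted "at the same time".
    }
}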
> 
> therefore, it's really hard to do unless the server has lots of CPUs, like
> 24, and multiple ethernet cards. On PC hardware, it's going to be very hard,
> if not impossible. On a mainframe, it will be easier to simulate a large
> number of truly concurrent requests.  If you want to reduce the likelihood
> of JMeter being an issue, then I would set up 4 clients to hit a single
> server. Though I really doubt you'll see a significant difference. Having done
> these types of tests a few hundred times, it's just hard to do.  Beyond
> that, the bandwidth will severely limit the number of concurrent requests
> the server can handle.  The only way to avoid the network bottleneck is to pay
> big bucks and co-locate at a backbone provider like MCI, Level3, Quest,
> Global Crossing, or ATT.
> 
> hope that helps.
> 
> peter
> 
> 
> On 12/9/05, Iago Toral Quiroga <[EMAIL PROTECTED]> wrote:
> >
> > First of all, thanks a lot for your answer, Peter.
> > I'll comment on it inline:
> >
> > On Fri, 2005-12-09 at 16:00, Peter Lin wrote:
> > > for what it's worth, it's nearly impossible to get all 100 requests
> > > within 500ms. The reason for this is that making the initial connection
> > > to your webserver will have a high initial cost.  How many iterations are
> > > you using?
> >
> > Just one per thread group, because I want just 100 requests as close as
> > possible in time. Anyway, I understand what you say about the difficulty
> > of getting all 100 requests within 500ms. One thing I think has a lot to
> > do with this, besides the connection issue you mention, is the fact that
> > some threads begin processing their responses before all the threads have
> > sent their requests; this keeps other threads from getting CPU time to
> > send their requests. But I guess this is not a JMeter issue, rather a
> > kernel or JVM matter.
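
(One way to picture that last point: if sending and reading were decoupled,
response handling could not delay the remaining requests. The sketch below
does this with raw sockets; the host, port and path are hypothetical, and it
only illustrates the idea, it is not how JMeter's samplers work.)

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Illustration only: send all requests first, read the responses afterwards,
// so that response processing cannot delay the remaining requests.
// Host, port and path are assumptions; 100 sockets stay open at once.
public class SendThenRead {
    public static void main(String[] args) throws Exception {
        int n = 100;
        List<Socket> sockets = new ArrayList<>();

        // Phase 1: open a connection and write each request, but do not read yet.
        for (int i = 0; i < n; i++) {
            Socket s = new Socket("localhost", 8080);
            Writer out = new OutputStreamWriter(s.getOutputStream(), "US-ASCII");
            out.write("GET /test?req=" + i + " HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "Connection: close\r\n\r\n");
            out.flush();
            sockets.add(s);
        }

        // Phase 2: only now read (and discard) the responses.
        for (Socket s : sockets) {
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
            while (in.readLine() != null) {
                // drain the response
            }
            s.close();
        }
    }
}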
> >
> > > if you look at all formal performance test specifications, they all have
> > > a ramp-up time. The actual measurement is taken for a period after the
> > > server has reached a steady state. Does that make sense?
> >
> > > what you need to do is set the iterations to something like 1000. Start
> > > the test and then start counting from about 10 minutes after the test
> > > started to get an accurate measurement.
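
(As a rough illustration of that measurement approach, the sketch below
assumes a simple log of samples in the form "timestamp_ms,elapsed_ms" per
line, which is a made-up format, not JMeter's, and averages only the samples
recorded after a warm-up window.)

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

// Sketch with an assumed log format "timestamp_ms,elapsed_ms" per line:
// ignore everything recorded during the warm-up window, average the rest.
public class SteadyStateAverage {
    public static void main(String[] args) throws Exception {
        long warmUpMs = 10L * 60 * 1000; // e.g. the 10 minutes suggested above
        List<long[]> samples = new ArrayList<>();

        try (BufferedReader in = new BufferedReader(new FileReader("results.csv"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",");
                samples.add(new long[] { Long.parseLong(f[0]), Long.parseLong(f[1]) });
            }
        }

        long testStart = Long.MAX_VALUE;
        for (long[] s : samples) {
            testStart = Math.min(testStart, s[0]);
        }

        long sum = 0;
        int count = 0;
        for (long[] s : samples) {
            if (s[0] - testStart >= warmUpMs) { // keep only steady-state samples
                sum += s[1];
                count++;
            }
        }
        System.out.println("steady-state average response time: "
                + (count > 0 ? (double) sum / count : Double.NaN) + " ms");
    }
}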
> > >
> >
> > I get it, but this is not the scenario I want to measure. Besides the
> > scenario you talk about, we also need to know the maximum number of
> > requests the web server can handle if they arrive "at the same time". So,
> > imagine the web server has no requests to serve and suddenly N requests
> > arrive at about the same time. What we want to know is: how big can N
> > be? Or what happens when N is, say, 50, 100, 300, ...?
> >
> > Notice that I need the server to be "idle" before all the requests
> > arrive, because if it's already busy serving responses it's not serving
> > just N requests, but N plus all the requests it was already serving.
> >
> > Thanks again for your help.
> > Iago.
> >
> > > On 12/9/05, Iago Toral Quiroga <[EMAIL PROTECTED]> wrote:
> > > >
> > > > On Fri, 2005-12-09 at 15:17, Peter Lin wrote:
> > > > > I'm not sure I understand why you have 100 thread groups.
> > > > >
> > > > > you can put the requests in sequence in 1 threadGroup and increase
> > > > > the thread count to 100 with a 0 second ramp-up.
> > > > > peter
> > > >
> > > > Because the requests must be different. If I do what you say,
> > > > all 100 threads within the thread group will send the same
> > > > request (the first one in the sequence).
> > > >
> > > > I tried using an interleave controller to avoid this problem, but the
> > > > interleave controller just deals out the requests per thread, so the
> > > > result is the same.
> > > >
> > > > Anyway, I've also tried having one thread group with 100 threads within
> > > > it sending the same HTTP request, but I still have the performance
> > > > problem I mentioned in my previous email.
> > > >
> > > > Iago.
> > > >
> > > >
> > > > >
> > > > > On 12/9/05, Iago Toral Quiroga <[EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I'm using JMeter to perform a peak test of my web server (100 HTTP
> > > > > > requests at the same time). To do this, I've created 100 thread
> > > > > > groups, each one with one thread that sends a different HTTP
> > > > > > request. At the web server I log the time (in milliseconds) at
> > > > > > which each request is received.
> > > > > >
> > > > > > I need these requests to be sent to the web server as close together
> > > > > > as possible, but I noticed they are logged at the web server over a
> > > > > > period of time that varies but is never less than 0.8 seconds.
> > > > > >
> > > > > > Shouldn't JMeter be able to send 100 requests in a shorter period of
> > > > > > time? Is there any way to speed up the launching of these requests?
> > > > > >
> > > > > > I've also noticed that, if I enable the option to parse HTML in
> > > > > > each HTTP request (HTTPSampler.image_parser in the .jmx file), my
> > > > > > web server log tells me that JMeter needs 2 or even more seconds to
> > > > > > send all 100 requests, which leads me to think that some threads
> > > > > > start processing their responses before all requests have been
> > > > > > sent. Can I change this behaviour? This is a big problem, because
> > > > > > this way JMeter is limited in its ability to send the requests as
> > > > > > soon as possible to stress the server.
> > > > > >
> > > > > > My test machine has the following features:
> > > > > > CPU: 2.4 GHz
> > > > > > RAM: 512 MB
> > > > > > OS:  Debian Linux. Kernel 2.6.12.
> > > > > >
> > > > > > Thanks in advance for your help.
> > > > > > --
> > > > > > Abel Iago Toral Quiroga
> > > > > > Igalia http://www.igalia.com
> > > > > >
> > > > > >
> > > >
> > > >
> > --
> > Abel Iago Toral Quiroga
> > Igalia http://www.igalia.com
> >
> >
-- 
Abel Iago Toral Quiroga 
Computer Engineer
Telf. +34 981 91 39 91 ext.13
Fax   +34 981 91 39 49
mailto:[EMAIL PROTECTED]
Igalia http://www.igalia.com

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
