In my case, I act as a webservice, and I get a constant rate of
inbound requests no matter what the responsiveness of my server is like.
As for what I'd expect users to do, I'm with Felix: I think users
click refresh no matter what (as I certainly do when a server gets
unresponsive! maybe it will work…
On 28 July 2010 08:13, Felix Frank wrote:
> Hi Deepak,
>
> all of the below is true and quite accurate. The trouble with Jmeter is
> that it is too "patient", and even starting 1000 threads or more won't
> inject the same level of stress on your server as a couple hundred
> real world users would…
Hey Felix
Namaskara~Nalama~Guten Tag
I did try your scenario with Webload (www.webload.org) and it worked fine.
It could crash the webserver. I am unsure why Jmeter is waiting. Is your
application's response time increasing so much that no matter how much you
increase the threads, nothing happens?
Hi Deepak,
all of the below is true and quite accurate. The trouble with Jmeter is
that it is too "patient", and even starting 1000 threads or more won't
inject the same level of stress on your server as a couple hundred
real world users would. That's because Jmeter will gladly stand by for
minutes…
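Felix's point about JMeter being too "patient" can be put in numbers. Here is a minimal sketch (plain Python, function name made up) of the closed-loop behavior, where each thread blocks on its response before firing the next request:

```python
# Closed-loop model (how JMeter's thread groups behave): each virtual
# user sends its next request only after the previous response arrives,
# so the offered load shrinks as server latency grows.
def closed_loop_throughput(threads: int, latency_s: float) -> float:
    """Requests/sec that `threads` blocking users can offer in steady state."""
    return threads / latency_s

# 1000 threads against a 100 ms server offer 10000 req/s...
fast = closed_loop_throughput(1000, 0.1)
# ...but once the server bogs down to 10 s per response, the same 1000
# threads politely back off to 100 req/s instead of piling on.
slow = closed_loop_throughput(1000, 10.0)
```

Real users don't behave like this: they hit refresh, adding load exactly when the server is at its slowest.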
Hey
Namaskara~Nalama~Guten Tag
Just another thought on this:
If your load is reaching the servers, it looks like the max load your
server system can handle is that of one Jmeter server. When you add more
servers, the throughput per server will drop, as the max throughput of the
system has already been reached.
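A back-of-the-envelope sketch of that saturation effect (Python; numbers borrowed from the 75 req/sec observation elsewhere in the thread, and the function name is hypothetical):

```python
def per_generator_rate(system_max: float, generator_capacity: float,
                       n_generators: int) -> float:
    """Throughput each load generator observes once the target saturates."""
    total = min(system_max, generator_capacity * n_generators)
    return total / n_generators

# One generator capable of 75 req/s saturates a 75 req/s system on its own.
one = per_generator_rate(75, 75, 1)
# A second identical generator adds nothing: the same 75 req/s total is
# split between the two, and each generator now reports ~37 req/s.
two = per_generator_rate(75, 75, 2)
```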
> Or an easier option might be having two scripts, one normal & one with
> timeouts, and running them both at the same time?
When implementing a timeout-driven approach, using those two groups is a
must, because the timeout group *will* generate a large number of
errors, and thus screw up your statistics.
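One way to keep the timeout group from polluting the numbers is to tag its samples and drop them before computing statistics. A rough sketch (Python; the record fields are hypothetical stand-ins for JMeter's per-sample results):

```python
# Per-sample results; the "timeout" group exists only to keep pressure
# on the server, so its expected errors must not enter the report.
samples = [
    {"group": "normal",  "ok": True,  "elapsed_ms": 120},
    {"group": "normal",  "ok": True,  "elapsed_ms": 180},
    {"group": "timeout", "ok": False, "elapsed_ms": 5000},
    {"group": "timeout", "ok": False, "elapsed_ms": 5000},
]

# Report only on the "normal" group.
clean = [s for s in samples if s["group"] == "normal"]
error_rate = sum(not s["ok"] for s in clean) / len(clean)
avg_ms = sum(s["elapsed_ms"] for s in clean) / len(clean)
```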
I started the timeout angle last night, and it was showing similar
promise (with similar flaws), but it's great to know I'm on a good
path!
One idea I had overnight (the brain doesn't turn off, even when
sleeping I guess): for things that require a full result (to parse the
output):
- Add two new…
Unlikely. I think the notion of "Jmeter slowing down to accommodate the
load" is quite accurate.
> Is the load reaching the servers from the other machines? Looks like only
> the first machine load is able to reach the servers.
Hi,
that sounds familiar, and is more or less the point of my original inquiry.
I managed to work around the problem using Connect and Response Timeouts
for my HTTP Samplers. This is adequate for e.g. stressing a stunnel
reverse proxy. However, I'm now facing a situation where I want the
crushing…
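For reference, the effect of those two timeouts can be demonstrated outside JMeter with a bare socket: the connect succeeds, the "server" never answers, and the response timeout frees the client instead of parking it forever. A self-contained sketch (Python stdlib only; the 0.5 s values are arbitrary):

```python
import socket
import time

# A listener that completes the TCP handshake at the kernel level but
# never replies, emulating an unresponsive server.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

# "Connect Timeout" equivalent: give up if the handshake stalls.
client = socket.create_connection((host, port), timeout=0.5)
# "Response Timeout" equivalent: give up waiting for data.
client.settimeout(0.5)
client.sendall(b"GET / HTTP/1.0\r\n\r\n")

start = time.monotonic()
try:
    client.recv(4096)          # no response is ever coming
    outcome = "response"
except socket.timeout:
    outcome = "timeout"        # the thread is released after ~0.5 s
elapsed = time.monotonic() - start
client.close()
server.close()
```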
Hey
Namaskara~Nalama~Guten Tag
Is the load reaching the servers from the other machines? Looks like only
the first machine load is able to reach the servers.
Deepak
--
Keigu
Deepak
+91-9765089593
deic...@gmail.com
Skype: thumsupdeicool
Google talk: deicool
Blog: http://loveandfearless.wordp
Well, this is weird/irritating. No matter what I do, I can't create
more than a certain amount of load with JMeter. For example, if I run
one server at full throttle, I might get 75 req/sec. If I run two
servers with the same size thread pool, I then get ~37 req/sec. If I
run three servers with the…
I found an old email thread about doing constant rate testing in the
archives. I wanted to kick the idea up again, but first I'll review
the basic situation, and past advice:
- I want to simulate a constant inbound rate, so that if the server
falls behind, the inbound load keeps coming and crushes the…
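That constant-rate idea, sketched in code: an open-loop arrival schedule fixes request start times in advance, so a slow server changes nothing about when the next request fires. (Plain Python, illustrative only; a real driver would sleep until each timestamp and dispatch asynchronously.)

```python
# Open-loop arrival schedule: start times depend only on the target rate,
# never on response latency -- if the server falls behind, requests keep
# arriving and the backlog grows, just like real inbound traffic.
def arrival_times(rate_per_s: float, duration_s: float) -> list[float]:
    interval = 1.0 / rate_per_s
    n = int(duration_s * rate_per_s)
    return [i * interval for i in range(n)]

# 75 req/s for 2 seconds: 150 arrivals, one every ~13.3 ms, regardless
# of how the server is coping.
schedule = arrival_times(75, 2)
```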