Hi guys,

These threads are juicy... I would like to add two things:

 - first, if you test embedded resources and you get bad results - what
then? There isn't much that can be improved, because that part is handled
by the container, which you take as it is (and you learn no more than what
tools like YSlow already tell you anyway - without load tests!). It's been
said on the list time and time again, and my opinion is that load tests
should be done way before your application acquires such resources. You
have to find ways to test while avoiding any such dependencies. For some
apps that's hard, true - take it as a challenge.

 - second, if we talk about ideal test tools, we should also talk about
ideal applications: applications whose HTML and interface are developed
completely separately, which makes it possible to test server performance
and benchmark the interface separately, with different tools. That's the
only way you can run one server performance test and estimate how your app
does on several browsers. In such a case we wouldn't be talking about why
JMeter can't measure rendering, because it wouldn't matter. However, there
aren't that many applications that have these components developed
independently; they usually go hand in hand. Rich-interface applications
are designed to look good rather than to be tested easily (good-looking
apps you can sell).

This is not such a big issue, because clients with such applications adapt
to the real-life behaviour of their application and rarely rely solely on
test results. They shouldn't be advised to rely on test results alone.
Moreover, the IT departments of firms with big applications often have
tricks up their sleeves to handle these uncertainties, but mostly they just
wait and see if it cracks :) and adapt then. Developing an ideal
application and testing it with the perfect tools is more expensive than
this approach, which in the end works. However, I sympathise with everybody
who has to load test such applications. I've been there, and will most
probably run into such applications again.

There is a point to this: tests can't cover all possibilities, no matter
how good the test tool is. In the case of new apps you can't really
estimate real users' behaviour, and in the case of working applications,
mirroring a percentage of the real load and feeding it to your test system
seems to me a better way to judge how your app is behaving (a rough sketch
of what I mean follows this paragraph). JMeter is really useful for
generating load and finding bugs that occur only under load / stress
conditions, before you get to the stage where you can test with real data.
But more important than running the best tests ever is what you monitor on
the server while testing, and how you identify bottlenecks in the
application when something seems fishy.
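
To make that log-mirroring idea a bit more concrete, here is a rough,
untested sketch in plain Java. The file names are made up and the access
log is assumed to be in common log format; the resulting CSV could then be
fed to JMeter through a CSV Data Set Config.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;
import java.util.Random;

public class SampleAccessLog {
    public static void main(String[] args) throws Exception {
        Random rnd = new Random();
        // hypothetical file names - use whatever your server and test plan need
        BufferedReader in = new BufferedReader(new FileReader("access.log"));
        PrintWriter out = new PrintWriter("sampled_paths.csv");
        String line;
        while ((line = in.readLine()) != null) {
            if (rnd.nextDouble() >= 0.10) {
                continue;                  // keep roughly 10% of the traffic
            }
            // Common log format: the request is the quoted "METHOD /path HTTP/1.x" field
            int start = line.indexOf('"');
            int end = line.indexOf('"', start + 1);
            if (start < 0 || end < 0) {
                continue;                  // skip malformed lines
            }
            String[] request = line.substring(start + 1, end).split(" ");
            if (request.length >= 2) {
                out.println(request[1]);   // just the path, one per CSV row
            }
        }
        out.close();
        in.close();
    }
}

Sampling randomly rather than taking the first N lines keeps the mix of
pages roughly proportional to the real traffic.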

There is a lot more to this than what I have said, and there are tons of
situations, each with specifics of their own - I am aware.

------

Anyway, it doesn't sound like a bad idea to make multiple TCP/IP
connections possible for a single thread (even if I disagree with the
reasoning of testing embedded resources and such). If it were available, it
might turn out to be a useful enhancement. Can you currently work around
this with BeanShell? (Let's say that you are really good with Java - can
you do this?)
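
Something along these lines is what I am picturing - plain Java, untested,
with made-up URLs; I don't know whether JMeter's BeanShell interpreter
would accept it as-is, so take it as a sketch of the idea rather than a
working script: each extra download gets its own thread, and therefore its
own connection.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ParallelFetch {
    public static void main(String[] args) throws Exception {
        // hypothetical resource URLs - in a sampler these would come from
        // the page you just fetched
        final String[] resources = {
            "http://example.com/style.css",
            "http://example.com/logo.png",
            "http://example.com/app.js"
        };
        Thread[] workers = new Thread[resources.length];
        for (int i = 0; i < resources.length; i++) {
            final String target = resources[i];
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        HttpURLConnection con =
                            (HttpURLConnection) new URL(target).openConnection();
                        InputStream in = con.getInputStream();
                        byte[] buf = new byte[8192];
                        while (in.read(buf) != -1) { /* drain the body */ }
                        in.close();
                        con.disconnect();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            workers[i].start();   // each thread opens its own connection
        }
        for (int i = 0; i < workers.length; i++) {
            workers[i].join();    // wait for all downloads to finish
        }
    }
}

Inside a sampler you would of course pull the URLs out of the previous
response instead of hard-coding them.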

Regards and keep up the good work,
Adrian

On Tue, Nov 2, 2010 at 6:48 PM, sebb <seb...@gmail.com> wrote:

> On 2 November 2010 14:17, Felix Frank <f...@mpexnet.de> wrote:
> >>> 1. Yes, it does even out. In the case of real users, requests will
> arive
> >>> in "groups" of, say, 8 parallel requests, but your server still has to
> >>> service them. 100 clients on a page with 20 embedded resources will
> make
> >>> 2000 requests. The fact that real users do them in parallel matters
> >>> little. To the servers, there are far more requests than it can
> actually
> >>> handle in parallel, so serialization *will* happen.
> >>>
> >> This is a case of poor capacity planning if the the servers cannot
> handle
> >> the load. Ideally there should be as little serialization as possible
> which
> >> ensures high customer satisfaction. If there are past examples of poor
> >> performing systems which you have come across, that doesnt mean the
> future
> >> has to be the same too.
> >
> > In stress test scenarios, you will want to overload your servers,
> > regardless of their power.
> >
> > In other load test scenarios, this may indeed be undesirable, and your
> > mileage will then vary to a greater degree because Jmeter serializes.
> > That's true.
> >
> >>> To put it differently: Given enough threads, the server sees high
> >>> parallelism in requests, and there is no need for the client to try and
> >>> introduce a "higher" degree of parallelism. The server won't notice a
> >>> difference.
> >>>
> >>
> >> The server wont notice a difference but the real time clients would.
> There
> >> is a need for stimulating actual customer behavior otherwise it would be
> >> hardly any high quality load testing.
> >
> > You can always turn to Selenium for absolute realism. But to induce the
> > same levels of load this way, you will need a *lot* more hardware than
> > for a Jmeter test.
> >
> > Take your pick.
> >
> > Jmeter is and should not be Selenium.
> >
> >>> 2. Please see the earlier thread. Deepak Shetty explained in-depth why
> >>> Jmeter (nor any other tool any of us know of) will give you an exact
> >>> estimation. I believe it was this thread:
> >>>
> >>>
> http://jmeter.512774.n5.nabble.com/Test-plan-for-970-page-requests-every-5-min-td2826174.html#a2834078
> >>
> >>
> >> If there are no tools currently in the market, then we should build such
> >> tools. Because customers like reality!
> >
> > I'm not stopping you.
> >
> > I do question your assumption that this is within Jmeter's scope, though.
>
> Agreed - JMeter started life as a server stress tester, and that is
> still its main function.
>
> BTW, it's not possible (in general) to emulate how a browser behaves,
> because every browser behaves differently.
> E.g. IE 6 and 7 behave differently, and each browser can be configured
> differently by the user.
>
> > Regards,
> > Felix
> >
>
