On 13 June 2012 06:05, Kirk Pepperdine <kirk.pepperd...@gmail.com> wrote:
> Hi Sebb,
>
> We've had this conversation before, and I did some preliminary work to set up a
> different type of thread group, but the coupling between the existing thread
> group and the model meant that an extensive refactoring would be involved.
> Since that involves a *lot* more than just a simple plugin...
>
> So, the current implementation supports a closed system model, meaning the rate
> of entry into the system equals the rate of exit from the system. This is exactly
> what you want if you're load testing a call centre, where the number of
> servers (operators) is fixed and gates entry into the system. However, I'm
> often simulating open systems, which means I do not want the rate of entry into
> the system to be controlled by the performance of the system (the rate of exit).
> Moreover, those who attend my performance tuning seminars come to understand
> why this is an important aspect of getting your test environment right and
> your test harness correctly set up, as it can adversely affect the quality of
> your test, which can, and often does, change the results of the test.
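>
> To make the distinction concrete, an open-model driver clocks arrivals off a
> schedule rather than off completions. A minimal sketch in plain Java (not
> JMeter code; the 50/s rate and the runSampler() body are placeholder
> assumptions):
>
>     import java.util.concurrent.ExecutorService;
>     import java.util.concurrent.Executors;
>     import java.util.concurrent.ScheduledExecutorService;
>     import java.util.concurrent.TimeUnit;
>
>     public class OpenModelDriver {
>         public static void main(String[] args) {
>             // New "users" arrive every 20 ms (50/s) whether or not earlier
>             // ones have finished; the system under test cannot slow them down.
>             final ExecutorService workers = Executors.newCachedThreadPool();
>             ScheduledExecutorService arrivals =
>                     Executors.newSingleThreadScheduledExecutor();
>             arrivals.scheduleAtFixedRate(new Runnable() {
>                 public void run() {
>                     workers.execute(new Runnable() {
>                         public void run() {
>                             runSampler();   // one user's request/response cycle
>                         }
>                     });
>                 }
>             }, 0, 20, TimeUnit.MILLISECONDS);
>         }
>
>         private static void runSampler() {
>             // placeholder for the actual sampler work
>         }
>     }
>
> A closed model, by contrast, only issues the next request when a previous one
> completes, so the arrival rate sags in step with the server.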
>
> As an example, today I will show a group how to tune an application by a
> partner company. That application has a number of "performance problems"
> baked into it. If I use the traditional means of using JMeter, I will find a
> different set of performance issues than if I load with a pattern similar to
> that found in production. In other words, with this particular application,
> JMeter exposes "problems" that are artifacts of how it wants to inject load
> on a system. I can fix all of these problems, and eventually I'll get to a
> point where I've fixed everything that needs to be fixed. That said, if I can
> coerce JMeter to load as an open system, I'll get to the real problems without
> having to fix the artifacts (the things that really don't need fixing).
> Coercing JMeter into being an open system requires one to use a large number
> of very short-lived threads. So I may only have 400-500 active threads at any
> point in time, but in order to achieve that load over a one- or two-hour test
> I may have to specify tens of thousands of threads. Since all of those threads
> are created up front, this simply doesn't work.
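>
> (For a sense of scale: assuming a typical 64-bit HotSpot default stack of
> roughly 512 KB to 1 MB per thread, 20,000 pre-created threads reserve on the
> order of 10-20 GB of stack address space before a single sample has run. The
> exact figure depends on the platform and -Xss, but the point stands.)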
>
> You might ask: why not just specify 400-500 threads and loop over them? In
> theory you'd think it would work, but as you tune the system, the performance
> characteristics change. Going back to the baked application: before I start
> tuning, the active user count is several thousand. In other words, the tuned
> system is better able to clear users out, and that changes the performance
> profile in a way that is hard to emulate with the current looping behaviour.
> Looping 400 or so threads isn't adequate for the initial load tests, as the
> test harness will become thread starved, and that releases pressure on the
> server, which in turn changes the performance profile.

Not sure I follow how using a thread pool will help, if at some point
you still need to have thousands of active users. Surely that will
need thousands of pool entries?

> With all due respect to the wonderful work that everyone on the project has
> done, it is my opinion that the one user == one thread design is a mistake
> that has a huge impact on both the usability of the tool and the quality of
> the results one can achieve when using it. IMHO, moving to a thread
> pool/event heap based model would be an enormous improvement over the current
> implementation.
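>
> The shape I have in mind is roughly an event heap of virtual users, each
> carrying the time of its next action, serviced by a small fixed pool of
> worker threads. A rough sketch (my own illustration, not existing JMeter
> code; the think time and the counts are invented):
>
>     import java.util.concurrent.DelayQueue;
>     import java.util.concurrent.Delayed;
>     import java.util.concurrent.TimeUnit;
>
>     // One entry per simulated user; tens of thousands of these are cheap,
>     // unlike tens of thousands of threads.
>     class VirtualUser implements Delayed {
>         private volatile long nextFireTime;   // absolute time in ms
>
>         VirtualUser(long firstFireTime) { this.nextFireTime = firstFireTime; }
>
>         void runSampleAndReschedule() {
>             // placeholder: run one sampler, then choose the next think time
>             nextFireTime = System.currentTimeMillis() + 1000;
>         }
>
>         public long getDelay(TimeUnit unit) {
>             return unit.convert(nextFireTime - System.currentTimeMillis(),
>                                 TimeUnit.MILLISECONDS);
>         }
>
>         public int compareTo(Delayed other) {
>             long diff = getDelay(TimeUnit.MILLISECONDS)
>                       - other.getDelay(TimeUnit.MILLISECONDS);
>             return diff < 0 ? -1 : (diff > 0 ? 1 : 0);
>         }
>     }
>
>     class EventHeapEngine {
>         public static void main(String[] args) {
>             final DelayQueue<VirtualUser> heap = new DelayQueue<VirtualUser>();
>             long now = System.currentTimeMillis();
>             for (int i = 0; i < 20000; i++) {          // 20k users...
>                 heap.put(new VirtualUser(now + i));    // ...with staggered starts
>             }
>             for (int w = 0; w < 50; w++) {             // ...but only 50 threads
>                 new Thread(new Runnable() {
>                     public void run() {
>                         try {
>                             while (true) {
>                                 VirtualUser u = heap.take(); // blocks until a user is due
>                                 u.runSampleAndReschedule();
>                                 heap.put(u);                 // back on the heap for its next event
>                             }
>                         } catch (InterruptedException e) {
>                             Thread.currentThread().interrupt();
>                         }
>                     }
>                 }).start();
>             }
>         }
>     }
>
> The workers only block while a sample is actually in flight; a user sitting
> in think time costs nothing but a heap entry.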

That may be so, but that probably means a complete redesign.
It would likely mean all 3rd party plugins would have to be reworked as well.

> Regards,
> Kirk
>
> On 2012-06-13, at 1:02 AM, sebb wrote:
>
>> On 12 June 2012 22:57, Kirk Pepperdine <kirk.pepperd...@gmail.com> wrote:
>>>
>>> On 2012-06-13, at 12:54 AM, sebb wrote:
>>>
>>>> On 12 June 2012 22:06, Kirk Pepperdine <kirk.pepperd...@gmail.com> wrote:
>>>>> Hi,
>>>>>
>>>>> I figured thread pooling would be revolutionary, so I wasn't suggesting
>>>>> that. It would be very useful just to delay the creation of a thread until
>>>>> it was asked for.
>>>>
>>>> Not sure I understand how it would help to delay the thread creation,
>>>> except perhaps for the case where the first threads have finished
>>>> processing by the time the last threads start running samples.
>>>
>>> Bingo!!! ;-)
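>>>
>>> A minimal sketch of what I mean (illustration only; plannedThreads, rampUpMs
>>> and newTestThread() are invented stand-ins): give each planned thread a start
>>> offset and only construct it when that offset comes due.
>>>
>>>     import java.util.concurrent.Executors;
>>>     import java.util.concurrent.ScheduledExecutorService;
>>>     import java.util.concurrent.TimeUnit;
>>>
>>>     public class LazyThreadStart {
>>>         public static void main(String[] args) {
>>>             int plannedThreads = 10000;   // total threads over the whole run
>>>             long rampUpMs = 500;          // one new thread every 500 ms
>>>             ScheduledExecutorService starter =
>>>                     Executors.newSingleThreadScheduledExecutor();
>>>             for (int i = 0; i < plannedThreads; i++) {
>>>                 starter.schedule(new Runnable() {
>>>                     public void run() {
>>>                         newTestThread().start();   // the thread exists only from here on
>>>                     }
>>>                 }, i * rampUpMs, TimeUnit.MILLISECONDS);
>>>             }
>>>         }
>>>
>>>         // stand-in for building one test thread
>>>         private static Thread newTestThread() {
>>>             return new Thread(new Runnable() {
>>>                 public void run() { /* run this user's samples, then exit */ }
>>>             });
>>>         }
>>>     }
>>>
>>> Threads that finished earlier have already exited by then, so the live count
>>> stays far below the total number specified.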
>>
>> So what percentage of use cases need to follow this model?
>>
>> Most of the JMeter testing I have done has been long-running tests where
>> all threads were active for most of the run.
>>
>>> Kirk
>>>
>>>
