Details are very welcome as I'm still not sure whether I fully
understand the problem :)

Thread pooling does sound nice and it might enable a feature I'd like:

On-demand creation of threads. Say you're looping 10 threads and the
system is under no load.
Currently, if you want 10 more users you need to stop the test, add 10
users, and start it again.

It would be nice if it were possible to simply say '+10', followed by
some monitoring, and then another '+10' (30 threads running in total), etc.
(10 is an example number; +/- n users would be the goal.)
Mostly for the 'getting a feel for the system' part of the test, the
initial setup of it all.
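
Something like this rough sketch is what I have in mind (a hypothetical
controller of my own, not actual JMeter code; the names `IncrementalLoad`
and `addUsers` are made up for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical controller: lets you add n more "users" while a test is
// running, instead of stopping and restarting with a new thread count.
public class IncrementalLoad {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final AtomicInteger active = new AtomicInteger();

    // '+n': submit n more user loops without touching the ones already running
    public void addUsers(int n, Runnable userLoop) {
        for (int i = 0; i < n; i++) {
            pool.submit(() -> {
                active.incrementAndGet();
                try {
                    userLoop.run();
                } finally {
                    active.decrementAndGet();
                }
            });
        }
    }

    public int activeUsers() {
        return active.get();
    }
}
```

You'd call `addUsers(10, loop)` while watching the monitors, then
`addUsers(10, loop)` again once you have a feel for the system.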

Regards,
Daniel

On Fri, Jun 15, 2012 at 7:24 AM, Kirk Pepperdine
<kirk.pepperd...@gmail.com> wrote:
> Hi,
>
> I'm very happy to add details if need be.
>
> Regards,
> Kirk
>
> On 2012-06-15, at 12:13 AM, Philippe Mouawad wrote:
>
>> Bugzilla created:
>>
>>   - https://issues.apache.org/bugzilla/show_bug.cgi?id=53418
>>
>>
>> Regards
>>
>> Philippe M.
>>
>> On Thu, Jun 14, 2012 at 10:30 PM, Philippe Mouawad <
>> philippe.moua...@gmail.com> wrote:
>>
>>> Hi M. Pepperdine,
>>>
>>> On Wed, Jun 13, 2012 at 7:05 AM, Kirk Pepperdine <
>>> kirk.pepperd...@gmail.com> wrote:
>>>
>>>> Hi Sebb,
>>>>
>>>> We've had this conversation before, and I did some preliminary work to
>>>> set up a different type of thread group, but the couplings between the
>>>> existing thread group and the model meant that an extensive refactoring
>>>> would be involved. So that involves a *lot* more than just a simple
>>>> plugin...
>>>>
>>>> So, the current implementation supports a closed system model, meaning the
>>>> rate of entry into the system equals the rate of exit from the system. This
>>>> is exactly what you want if you're load testing a call centre, where the
>>>> number of servers (operators) is fixed and gates entry into the system.
>>>> However, I'm often simulating open systems, which means I do not want the
>>>> rate of entry into the system to be controlled by the performance of the
>>>> system (the rate of exit).
>>>
>>> What makes you think JMeter does this?
>>>
>>>
>>>> Moreover, those who attend my performance tuning seminars come to
>>>> understand why this is an important aspect of getting your test environment
>>>> right and your test harness correctly set up, as it can adversely affect the
>>>> quality of your test, which can, and often does, change the results of the
>>>> test.
>>>>
>>>
>>>> As an example, today I will show a group how to tune an application by a
>>>> partner company. That application has a number of "performance problems"
>>>> baked into it. If I use the traditional means of using JMeter I will find
>>>> a different set of performance issues than if I load with a pattern that is
>>>> similar to that found in production.
>>>
>>> Can you clarify this point? A figure might be better than a long text.
>>>
>>>
>>>> In other words, with this particular application, JMeter exposes
>>>> "problems" that are artifacts of how it wants to inject load on a system.
>>>
>>> That's not clear to me.
>>>
>>>> I can fix all of these problems
>>>
>>> What are these problems? And how do you fix them?
>>>
>>>
>>>> and eventually I'll get to a point where I'll fix everything that needs
>>>> to be fixed. That said, if I can coerce JMeter to load as an open system
>>>> I'll get to the problems without having to fix the artifacts (the things
>>>> that really don't need fixing).
>>>
>>> Still not clear
>>>
>>>> To coerce JMeter into being an open system requires one to use a large
>>>> number of very short-lived threads. So I may only have 400-500 active
>>>> threads at any point in time, but in order to achieve that load over a one-
>>>> or two-hour test I may have to specify tens of thousands of threads. Since
>>>> all of the threads are created up front, this simply doesn't work.
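
To illustrate the open/closed distinction with a sketch of my own (this is
not JMeter internals; `OpenSystemInjector` is a made-up name): an
open-system injector starts arrivals off a clock, so a slow server simply
accumulates more in-flight users rather than slowing the arrival rate.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Open-system load: arrivals are driven by a clock, not by completions.
// A slow server accumulates more in-flight users instead of slowing arrivals.
public class OpenSystemInjector {
    private final ScheduledExecutorService clock =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger inFlight = new AtomicInteger();

    public void start(long arrivalIntervalMillis, Runnable user) {
        clock.scheduleAtFixedRate(() -> {
            inFlight.incrementAndGet();
            // Hand the user off to its own thread so a slow response
            // never delays the next arrival.
            new Thread(() -> {
                try {
                    user.run();
                } finally {
                    inFlight.decrementAndGet();
                }
            }).start();
        }, 0, arrivalIntervalMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        clock.shutdownNow();
    }

    public int inFlight() {
        return inFlight.get();
    }
}
```

In a closed system the next "arrival" only happens when a looping thread
finishes its previous sample; here the interval timer fires regardless.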
>>>>
>>>> You might ask: why not just specify 400-500 threads and loop over them? In
>>>> theory you'd think it would work, but as you tune the system the
>>>> performance characteristics change. Going back to the baked application:
>>>> before I start tuning, the active user count is several thousand. In other
>>>> words, the tuned system is better able to clear users out, and that changes
>>>> the performance profile in a way that is hard to emulate with the current
>>>> looping behaviour. A setting that loops 400 or so threads isn't adequate
>>>> for the initial load tests, as the test harness will become thread
>>>> starved, and that releases pressure on the server, which in turn changes
>>>> the performance profile.
>>>>
>>>
>>>
>>>> With all due respect to the wonderful work that everyone on the project
>>>> has done, it is my opinion that one user == one thread is a design
>>>> mistake that has a huge impact on both the usability of the tool
>>>>
>>> Examples?
>>>
>>>> and the quality of the results
>>>
>>> I disagree with this assertion. We have been using JMeter for load
>>> testing all kinds of applications (intranets, large e-commerce systems,
>>> back-office systems), and the quality of results is good provided you
>>> configure it properly, particularly when using Remote Testing. A lot of
>>> users on this mailing list use it and are satisfied (I think).
>>>
>>>
>>>> one can achieve when using it. IMHO, moving to a thread pool/event heap
>>>> based model would be an enormous improvement over the current
>>>> implementation.
>>>>
>>> Agreed, it would be better. We will work on it.
>>>
>>>> Regards,
>>>> Kirk
>>>>
>>>> On 2012-06-13, at 1:02 AM, sebb wrote:
>>>>
>>>>> On 12 June 2012 22:57, Kirk Pepperdine <kirk.pepperd...@gmail.com>
>>>> wrote:
>>>>>>
>>>>>> On 2012-06-13, at 12:54 AM, sebb wrote:
>>>>>>
>>>>>>> On 12 June 2012 22:06, Kirk Pepperdine <kirk.pepperd...@gmail.com>
>>>> wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I figured thread pooling would be revolutionary, so I wasn't
>>>>>>>> suggesting that. It would be very useful just to delay the creation
>>>>>>>> of a thread until it was asked for.
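
A sketch of what delayed creation could look like (illustrative only; the
`LazyThreads` class and its `schedule` method are made-up names, not a
proposed JMeter API): each test thread is created at its scheduled start
time, so tens of thousands of planned users never exist in memory at once.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: create each test thread only when its scheduled start arrives,
// instead of allocating all of them up front.
public class LazyThreads {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Schedule 'total' users; the i-th thread is created at i * spacingMillis.
    public void schedule(int total, long spacingMillis, Runnable user) {
        for (int i = 0; i < total; i++) {
            scheduler.schedule(() -> new Thread(user).start(),
                               i * spacingMillis, TimeUnit.MILLISECONDS);
        }
        // Stop the scheduler once the last user has been launched.
        scheduler.schedule(scheduler::shutdown,
                           total * spacingMillis, TimeUnit.MILLISECONDS);
    }
}
```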
>>>>>>>
>>>>>>> Not sure I understand how it would help to delay the thread creation,
>>>>>>> except perhaps for the case where the first threads have finished
>>>>>>> processing by the time the last threads start running samples.
>>>>>>
>>>>>> Bingo!!! ;-)
>>>>>
>>>>> So what percentage of use cases need to follow this model?
>>>>>
>>>>> Most of the JMeter testing I have done was long running tests where
>>>>> all threads were active for most of the run.
>>>>>
>>>>>> Kirk
>>>>>>
>>>>>>
>>>>>> ---------------------------------------------------------------------
>>>>>> To unsubscribe, e-mail: user-unsubscr...@jmeter.apache.org
>>>>>> For additional commands, e-mail: user-h...@jmeter.apache.org
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Cordialement.
>>> Philippe Mouawad.
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Cordialement.
>> Philippe Mouawad.
>
>
