On 03/03/2008, Oren Benjamin <[EMAIL PROTECTED]> wrote:
> Thanks for your response.  Since my first post, I've taken JMeter for a
>  spin and was pleasantly surprised by how simple it was to set up a test plan
>  with Access Log Sampler to play back all the requests in my log.

I'm glad to hear it.

>  I'm now ready to try and overcome the two hurdles we've discussed.  The
>  first involves the timing of the requests.  Based on your response ("the
>  sampler would need to keep track of the timestamps...") I take it the way to
>  go is to develop a custom Sampler, similar to Access Log Sampler, that also
>  handles the timestamps in the log.  Are you recommending to accomplish the
>  delay within the sampler itself or to create a custom Timer as well?

I think it's going to be easiest to perform the delay in the sampler
itself - but see below for other options.

>  As for the threading, I'm not terribly concerned with simulating specific
>  users at the moment.  It would certainly be a nice feature down the road,
>  but for now, I'm more concerned with simply getting the requests
>  multi-threaded (as I'm completely new to JMeter and JMeter development, I
>  believe it's best to start small).

Each thread represents a different user, and will have its own set of
cookies, so mixing requests from different original users may or may
not work, depending on the server application.

>  You wrote in your reply: "If the total number of different users is known in
> advance, then one can just parcel out the samples to different threads."
>
> How would this be accomplished?  Would I need to build a custom
>  ThreadGroup?

I don't think so; building a custom ThreadGroup would not be trivial.

>  I haven't yet looked at the interplay between ThreadGroup and
>  Sampler.

Thread Groups contain one or more threads; each thread processes
the test plan defined under that thread group. Samplers are processed
as defined in the test plan.

> Ignoring users for now, I'd like to divide all the requests among
>  a fixed number of threads.

That's how the current Access Log Sampler works: each time the sampler
executes in a thread, it fetches the next line from the file. But as
mentioned above, this may not work for all applications.

Just adding a timed wait to the existing sampler should be easy.
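For instance, a rough sketch of such a timed wait (illustrative names
only, not actual JMeter API; it assumes the log timestamps have already
been parsed to epoch milliseconds):

```java
// Sketch: delay each sample until its log timestamp, measured relative
// to the start of the test. Names are illustrative, not JMeter API.
public class ReplayClock {
    private final long testStartMillis;   // wall-clock time the replay began
    private final long logStartMillis;    // timestamp of the first log entry

    public ReplayClock(long testStartMillis, long logStartMillis) {
        this.testStartMillis = testStartMillis;
        this.logStartMillis = logStartMillis;
    }

    /** Milliseconds to wait before issuing the entry with this timestamp. */
    public long delayFor(long entryMillis, long nowMillis) {
        long due = testStartMillis + (entryMillis - logStartMillis);
        return Math.max(0, due - nowMillis);  // 0 means we are already late
    }

    public void waitFor(long entryMillis) throws InterruptedException {
        long delay = delayFor(entryMillis, System.currentTimeMillis());
        if (delay > 0) {
            Thread.sleep(delay);
        }
    }
}
```

A return of 0 from delayFor is where the sampler could report that the
desired start time was missed.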

Distributing the sessions amongst threads is trickier.

>  As far as the timing is concerned, I would design it as follows.  Every
>  request (sample) is executed at the time indicated by its timestamp relative
>  to the start of the test.  If no thread is available at that time, it is
>  executed as soon as a thread becomes available.

Yes, but it's the samplers that drive the test.
They need to ask for the data - or at least indicate that they are
ready for the data.

Each thread could maintain its own file reader, but that does not scale well.

It looks like one could perhaps use an NIO FileChannel, with a
per-thread position to keep track of where each thread is in the file.
I don't know if that would scale well.
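A sketch of that per-thread bookkeeping might look like this (each
thread keeps its own offset and uses FileChannel's positional read, so
threads don't disturb each other; byte-at-a-time for brevity - a real
reader would buffer):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: one shared FileChannel; each sampler thread passes in (and
// updates) its own offset, so no shared cursor is needed.
public class SharedLogReader {
    private final FileChannel channel;

    public SharedLogReader(Path log) throws IOException {
        this.channel = FileChannel.open(log, StandardOpenOption.READ);
    }

    /** Read the line starting at position[0]; advances position[0] past it.
     *  Returns null at end of file. */
    public String readLineAt(long[] position) throws IOException {
        StringBuilder line = new StringBuilder();
        ByteBuffer buf = ByteBuffer.allocate(1);
        long pos = position[0];
        boolean readAny = false;
        int n;
        while ((n = channel.read(buf, pos)) > 0) {  // positional read
            readAny = true;
            pos += n;
            char c = (char) buf.get(0);
            buf.clear();
            if (c == '\n') break;
            line.append(c);
        }
        position[0] = pos;
        return readAny ? line.toString() : null;    // null signals EOF
    }
}
```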

To avoid the problem of multiple file readers, there could be a queue
for each sampler, and a separate reader thread that puts the samples on
the queues.

The reader thread would need to keep track of which sampler thread was
handling each session (and which threads were free). It would also
need to handle the timing, otherwise for a large file the queues would
grow a lot. [The queue items could contain the desired start-time and
the sampler could log a warning if it was picked up late]
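A rough sketch of that queue arrangement (again illustrative only, no
JMeter API; the round-robin session assignment stands in for the
free-thread tracking described above, and the bounded queues provide the
back-pressure that stops them growing without limit):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a single reader thread parses log entries and routes each one,
// by session id, to the queue of the sampler thread handling that session.
public class SampleDispatcher {
    /** A parsed log entry plus the time it is due to be replayed. */
    public static class Sample {
        public final String request;
        public final long dueMillis;   // desired start time, for late-warnings
        public Sample(String request, long dueMillis) {
            this.request = request;
            this.dueMillis = dueMillis;
        }
    }

    private final BlockingQueue<Sample>[] queues;
    private final Map<String, Integer> sessionToThread = new HashMap<>();
    private int next = 0;

    @SuppressWarnings("unchecked")
    public SampleDispatcher(int threads, int capacity) {
        queues = new BlockingQueue[threads];
        for (int i = 0; i < threads; i++) {
            queues[i] = new ArrayBlockingQueue<>(capacity);  // bounded
        }
    }

    /** Called by the reader thread; blocks if the target queue is full. */
    public void dispatch(String sessionId, Sample s) throws InterruptedException {
        Integer t = sessionToThread.get(sessionId);
        if (t == null) {
            // Naive: round-robin assignment. A real version would track
            // which sampler threads are free / which sessions have ended.
            t = next;
            next = (next + 1) % queues.length;
            sessionToThread.put(sessionId, t);
        }
        queues[t].put(s);
    }

    /** Called by sampler thread 'thread' to fetch its next sample. */
    public Sample take(int thread) throws InterruptedException {
        return queues[thread].take();
    }
}
```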

>  As for sample logs, the Common Log Format (or indeed any log format) is fine
>  as I can always run my logs through a preprocessing script before loading
>  them into JMeter.  The important thing is that the entire request and the
>  corresponding timestamp are contained in the log entry.

Yes, and if it's important that different sessions are processed by
different threads then there needs to be some unique id that can be
used to distinguish the sessions.
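For example, if the logs were configured to include the session cookie,
extracting the id could be as simple as this (the JSESSIONID field here
is a hypothetical log format - adapt the pattern to whatever id your
logs actually carry):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pull a session id out of a log line, assuming the logs carry
// a JSESSIONID cookie value (hypothetical format; adjust to your logs).
public class SessionIdExtractor {
    private static final Pattern SESSION =
            Pattern.compile("JSESSIONID=([A-Za-z0-9]+)");

    /** Returns the session id, or null if the line has none. */
    public static String extract(String logLine) {
        Matcher m = SESSION.matcher(logLine);
        return m.find() ? m.group(1) : null;
    }
}
```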

>  Thanks again for your advice,
>
>    -- Oren
>
>
>  On Sat, Mar 1, 2008 at 1:05 PM, sebb <[EMAIL PROTECTED]> wrote:
>
>  > On 29/02/2008, Oren Benjamin <[EMAIL PROTECTED]> wrote:
>  > > For the purposes of stress testing and troubleshooting a large scale
>  > >  multi-tiered web application, we are looking to build a test
>  > environment in
>  > >  which we can recreate any scenario that occurs in production with very
>  > high
>  > >  fidelity.  To that end, we plan to "replay" the production server
>  > access
>  > >  logs by sending identical requests to the test environment at the same
>  > >  points in time (relative to the start of the test).  Furthermore, we
>  > would
>  > >  like the requests to be sent from multiple threads to simulate multiple
>  > >  users making the requests asynchronously.
>  > >
>  > >  I'm looking into the use of JMeter for our purposes.  From what I see
>  > it
>  > >  provides great reporting facilities, test abstractions, and a flexible
>  > >  architecture.  The Access Log Sampler seems like the logical starting
>  > point,
>  > >  but from reading the documentation and other threads on the mailing
>  > list, it
>  > >  looks like we're going to have to extend it quite a bit to meet our
>  > needs.
>  > >
>  > >  Here are the issues I've found so far:
>  > >
>  > >  1) Access Log Sampler ignores the time stamp of the log entries and the
>  > >  Timer mechanism is designed for "delays" as opposed to "alarms."  This
>  > would
>  > >  make it difficult to accurately simulate the timing of the requests in
>  > the
>  > >  log.
>  >
>  > This should be easy enough to implement.
>  >
>  > The sampler would need to keep track of the timestamps for each
>  > request it issues in a thread, and add a time delay as necessary.
>  >
>  > There would probably need to be some way of reporting if the desired
>  > start time has been exceeded.
>  >
>  > Also need to decide if the wait times should be relative to the
>  > previous sample or the start of the test. Or maybe make that optional?
>  >
>  > >  2) According to the ThreadGroup documentation, " each thread will
>  > execute
>  > >  the test plan in its entirety."  This is not what we want, since we are
>  > looking
>  > >  to distribute the requests in the log among multiple threads.
>  >
>  > Yes, that is a problem currently.
>  >
>  > There are two aspects to this:
>  > - does the log contain sufficient information to be able to
>  > distinguish different users?
>  > - how to use this information in JMeter.
>  >
>  > It should be fairly easy to implement a filter to ignore all but one
>  > user/session when replaying a log.
>  >
>  > It gets a lot more tricky when multiple users are involved, as
>  > different users need different threads.
>  >
>  > If the total number of different users is known in advance, then one
>  > can just parcel out the samples to different threads. This may be
>  > sufficient for some cases.
>  >
>  > However, where there are too many different users to have a thread
>  > each, then JMeter needs to know when a given user has finished; it can
>  > then re-use the thread for another user. For a particular application,
>  > it may be possible to specify this, either as a specific entry in the
>  > log, or perhaps as a timeout since the last request.
>  >
>  > >  Any advice regarding whether JMeter is appropriate to this task and
>  > ideas
>  > >  related to the design of the test plan and any necessary
>  > plug-ins/extensions
>  > >  would be greatly appreciated.  From my limited investigation, it seems
>  > that
>  > >  this is a scenario that many groups are looking to implement, but no
>  > >  standard solution has been developed.
>  > >
>  >
>  > Specific use cases - with details of corresponding log file entries -
>  > might help to determine if there are standard patterns that JMeter
>  > could use.
>  >
>  > Perhaps it would be worth setting up a Wiki page with examples?
>  >
>  > >  Your feedback is much appreciated,
>  > >
>  > >
>  > >     -- Oren
>  > >
>  >
>

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
