I had a similar experience the first time.  It turned out that the data I
wanted to test with (HTTP POSTs) had to be put on each remote.  I also had a
process to randomize the data when it was transferred to the remotes.  I
finally got the load high enough across 10 machines like yours.

The test harness I had was pretty simple: post these things to this URL.
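The randomize-and-distribute step I mentioned can be sketched roughly like this. This is a hypothetical helper (the name `split_for_remotes` and the payload lines are made up for illustration): it shuffles the data lines and deals them out round-robin, so each remote gets its own disjoint slice to put in its local copy of the data file.

```python
import random

def split_for_remotes(lines, num_remotes, seed=None):
    """Shuffle data lines and deal them round-robin so that no two
    remotes receive the same line. Hypothetical helper for illustration."""
    rng = random.Random(seed)
    shuffled = lines[:]
    rng.shuffle(shuffled)
    return [shuffled[i::num_remotes] for i in range(num_remotes)]

# Example: 10 sample POST payload lines split across 3 remotes.
data = [f"payload-{n}" for n in range(10)]
chunks = split_for_remotes(data, 3, seed=42)
for i, chunk in enumerate(chunks):
    # In practice, write each chunk to its own file and copy it to
    # the matching remote before starting the distributed test.
    print(f"remote {i}: {len(chunk)} lines")
```

With something like this, each remote works from different data, instead of every remote replaying the same file in the same order.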

On Thu, Jun 5, 2008 at 5:19 PM, Michael McDonnell <[EMAIL PROTECTED]>
wrote:

> We're running a distributed test (roughly 7 remote workstations) on a
> pretty
> hefty box (8 cores, 32 gigs ram.... etc...)
>
> However, something seems to be going wrong... perhaps it's because I'm
> mixing Linux and Windows platforms to try to do the testing?
>
> We're load testing a web application, so primarily the only work we're
> doing is HTTP requests (there are a few "Java requests" that are actually
> an app I created to make web service calls, but we'll get to that later).
>
> However, when we view the transactions in the database, the numbers are
> extremely low (frighteningly low).
>
> Then we run the test from a single workstation (same test, 300 users
> doing work) and our results come back fantastically!
>
> Now granted, I guess the big question is this: when the app uses a CSV in
> distributed mode, does each slave use the same CSV in the same order? Or
> is there some sort of split so that no two slaves are using the same line
> in the CSV?
>
> I'm sorry for what may be dumb questions... but we're coming up on a
> tight deadline, and the distributed testing is not giving us good results
> whereas the local testing is.
>
> Thanks for all your help in advance.
>
> Michael
>
