Not completely related, just FYI.

I prefer to see the start time, end time, and duration of each
execution in each thread, and then do the aggregation (avg, max, min)
myself.

I modified the last few lines of the Inserter function as follows:
            endtime = time.time()
            self.latencies[self.idx] += endtime - start
            self.opcounts[self.idx] += 1
            self.keycounts[self.idx] += 1
            # Append one line per operation: duration, thread#, key,
            # timestamp, start time, end time.
            with open('log' + str(self.idx) + '.txt', 'a') as f:
                f.write(' '.join([str(endtime - start), str(self.idx),
                                  str(i), time.asctime(),
                                  str(start), str(endtime)]) + '\n')

You need to understand a little bit of Python to plug this into stress.py properly.

The above creates a lot of log*.txt files, one per thread.
Each line in these log files has the duration, thread#, key,
timestamp, start time, and end time, separated by spaces. (Note that
the time.asctime() timestamp itself contains spaces, so parse
positionally from the ends of the line.)

I then load these log files into a database and do aggregations as needed.
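If you don't want to bother with a database, the aggregation can also be done directly in Python. A minimal sketch, assuming the log format described above (duration first, start/end times last, asctime in the middle); the function name aggregate is my own, not part of stress.py:

```python
import glob


def aggregate(paths):
    """Return {path: (count, avg, min, max)} over the duration column."""
    stats = {}
    for path in paths:
        durations = []
        with open(path) as f:
            for line in f:
                # The duration is the first space-separated field. The
                # time.asctime() timestamp in the middle contains spaces,
                # so only the leading/trailing fields are safe to index.
                durations.append(float(line.split()[0]))
        if durations:
            stats[path] = (len(durations),
                           sum(durations) / len(durations),
                           min(durations), max(durations))
    return stats


if __name__ == '__main__':
    for path, (n, avg, lo, hi) in sorted(aggregate(glob.glob('log*.txt')).items()):
        print('%s: count=%d avg=%.6f min=%.6f max=%.6f' % (path, n, avg, lo, hi))
```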

Remember to remove the old log files before a rerun; the above appends
to existing log files.
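The cleanup step can be a couple of lines of Python run before the next stress run (this is just a sketch of the idea, not part of stress.py):

```python
import glob
import os

# Delete stale per-thread logs so append-mode writes don't mix runs.
for path in glob.glob('log*.txt'):
    os.remove(path)
```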

Just an FYI. Most people will not need this.


On Mon, Mar 21, 2011 at 12:40 PM, Ryan King <r...@twitter.com> wrote:
> On Mon, Mar 21, 2011 at 9:34 AM, pob <peterob...@gmail.com> wrote:
>> You mean,
>> more threads in stress.py? The purpose was figure out whats the
>> biggest bandwidth that C* can use.
>
> You should try more threads, but at some point you'll hit diminishing
> returns there. You may need to drive load from more than one host.
> Either way, you need to find out what the bottleneck is.
>
> -ryan
>
