From the article:
java -jar stress.jar -d "144 node ids" -e ONE -n 27000000 -l 3 -i 1 -t 200 -p 7102 -o INSERT -c 10 -r
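For reference, here is my reading of those flags from the old stress.jar tool; most line up with the description below, but treat the -l and -i readings in particular as guesses:

    -d "144 node ids"   # the list of 144 node addresses the client contacts
    -e ONE              # consistency level ONE
    -n 27000000         # 27 million distinct row keys
    -l 3                # guess: replication factor 3
    -i 1                # guess: progress-reporting interval, in seconds
    -t 200              # 200 client threads
    -p 7102             # RPC (Thrift) port, 7102 instead of the 9160 default
    -o INSERT           # write-only workload
    -c 10               # 10 columns per row key
    -r                  # pick row keys at random rather than the default distribution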

The client writes 10 columns per row key; the row key is chosen at random
from 27 million ids, and each column holds a key plus 10 bytes of data. The
total on-disk size for each write, including all overhead, is about 400
bytes.
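As a back-of-the-envelope check of that ~400-byte figure: the raw payload is
only 10 columns x (column name + 10-byte value); the rest is per-column and
per-row overhead. The overhead constants in this sketch are illustrative
assumptions on my part, not measured values:

    public class WriteSizeEstimate {
        public static void main(String[] args) {
            int columns = 10;            // -c 10
            int valueBytes = 10;         // "10 bytes of data" per column
            int nameBytes = 10;          // assumption: short generated column names
            int perColumnOverhead = 15;  // assumption: timestamp + length/flag fields
            int perRowOverhead = 50;     // assumption: row key, index, bloom filter share
            int total = perRowOverhead
                    + columns * (nameBytes + valueBytes + perColumnOverhead);
            // prints ~400, consistent with the all-in on-disk figure above
            System.out.println("approx bytes per write: " + total);
        }
    }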

Note: I'm not sure about the batching - it may be one of the parameters to
stress.jar.

Peter

On Mon, Oct 31, 2016 at 4:07 PM, Kant Kodali <k...@peernova.com> wrote:

> Hi Guys,
>
>
> I keep reading the articles below, but the biggest questions for me are as
> follows:
>
> 1) What is the "data size" per request? Without the data size it's hard
> for me to see anything sensible.
> 2) Is there batching here?
>
> http://www.datastax.com/1-million-writes
>
> http://techblog.netflix.com/2014/07/revisiting-1-million-writes-per-second.html
>
> Thanks!
