Hello, thanks for the reply.

2011/3/7 Aaron Morton <aa...@thelastpickle.com>

> It's always possible to run out of memory. Can you provide...
>
> - number cf's and their Memtable settings
>
One CF with a 64 MB memtable; other settings are at their defaults.



> - any row or key cache settings
>
They stay at the default (e.g. 200000), but I don't do any reads, only writes.



> - any other buffer or memory settings you may have changed in
> Cassandra.yaml.
>
I changed only the binary_memtable_throughput_in_mb setting, and set it to 64 MB.
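
For reference, that change would look like the following line in cassandra.yaml (the value is in MB, per the setting name; the rest of the file is assumed to be at defaults, as stated above):

```yaml
# cassandra.yaml -- only non-default setting, per the thread
binary_memtable_throughput_in_mb: 64
```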




> - what load you are putting on the cluster, e.g. Inserting x rows/columns
> per second with avg size y
>
>
I do a bulk load of real data into the cluster. In my case this runs in 16
threads (8 processes per machine, on two machines), with an average insert speed
of 1000 rows per second per thread, so 16,000 rows per second in total, with an
average row size of 573 bytes.
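
Putting those numbers together gives a rough sense of the raw write pressure on the 64 MB memtable. This is only back-of-the-envelope math from the figures above; per-column in-memory overhead is not included, so the memtable will actually fill faster than the raw-byte estimate suggests:

```python
# Ingest math for the workload described above (numbers from the thread).
THREADS = 16                   # 8 processes x 2 machines
ROWS_PER_SEC_PER_THREAD = 1000
AVG_ROW_BYTES = 573
MEMTABLE_MB = 64               # binary_memtable_throughput_in_mb

total_rows_per_sec = THREADS * ROWS_PER_SEC_PER_THREAD
bytes_per_sec = total_rows_per_sec * AVG_ROW_BYTES
mib_per_sec = bytes_per_sec / (1024 * 1024)
secs_to_fill_memtable = MEMTABLE_MB * 1024 * 1024 / bytes_per_sec

print(f"{total_rows_per_sec} rows/s ~= {mib_per_sec:.2f} MiB/s of raw row data")
print(f"a {MEMTABLE_MB} MB memtable fills in ~{secs_to_fill_memtable:.1f} s")
```

So each node is absorbing on the order of 9 MB/s of raw row data, forcing a memtable flush roughly every few seconds, which is the kind of sustained pressure that can exhaust memory if flushes and compaction can't keep up.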
