xavier manach <xav <at> tekio.org> writes:

> Hi. I'm looking for information on basic memory tuning in Cassandra.
>
> My situation: I started testing large imports of data into Cassandra
> 0.6.1. My first import worked fine: 100 million rows in 2 hours,
> around 10,000 rows inserted per second. My second import is much
> slower with the same script in another column family: around 500 rows
> inserted per second. I don't understand why I see a lot of GC for
> ConcurrentMarkSweep:
>
> [ GC for ConcurrentMarkSweep: 3437 ms, 104971488 reclaimed leaving
> 986519328 used; max is 1211170816. ]
>
> (The max never moves; what is this value 1211170816?) I think the GC
> runs when the inserts are slow. Do the inserts stop working while the
> GC is running? My machine has 66M of RAM, and the java process only
> uses around 1.8%.
>
> How can I optimise the use of memory? Is there a guideline for best
> performance? Thanks.


You may be running out of memory. Cassandra keeps some information about
those 100M rows you just inserted in RAM. The "max is 1211170816" in your
GC log line is simply the JVM's maximum heap size in bytes (about 1.1 GB
here), which is why it never changes. By default Cassandra is configured
to use up to 1 GB of heap; you can give it more by editing
bin/cassandra.in.sh. Look there for -Xmx1G and change it to taste.
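
For reference, here is roughly what the relevant block of
bin/cassandra.in.sh looks like. This is a sketch based on a stock
0.6-era script; the exact flags and default values vary between
Cassandra versions, so check your own copy:

    # JVM options passed to the Cassandra daemon. -Xmx is the heap
    # ceiling, the number reported as "max" in the GC log lines.
    # Values shown are illustrative.
    JVM_OPTS=" \
            -ea \
            -Xms256M \
            -Xmx1G \
            -XX:+UseParNewGC \
            -XX:+UseConcMarkSweepGC"

    # To raise the ceiling, change the -Xmx line, e.g. on a machine
    # with physical RAM to spare:
    #       -Xmx4G \

After restarting the node, nodetool info should report the heap usage,
so you can sanity-check that the new setting took effect.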

