Dear Rajat,

Yes, it is possible; I run under the same constraints. However, I must
warn you: from what I see, Cassandra's memory consumption is not bounded
in 0.6.x on Debian 64-bit.

Here is an example of a running instance on one of my nodes (ps output):

root     19093  0.1 28.3 1210696 *570052* ?      Sl   Jan11   9:08
/usr/bin/java -ea -Xms128M *-Xmx512M* -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
-XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
-Dcom.sun.management.jmxremote.port=8081
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dstorage-config=bin/../conf -Dcassandra-foreground=yes -cp
bin/../conf:bin/../build/classes:bin/../lib/antlr-3.1.3.jar:bin/../lib/apache-cassandra-0.6.6.jar:bin/../lib/avro-1.2.0-dev.jar:bin/../lib/cassandra-javautils.jar:bin/../lib/clhm-production.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-io-1.4.jar:bin/../lib/commons-lang-2.4.jar:bin/../lib/commons-pool-1.5.4.jar:bin/../lib/google-collections-1.0.jar:bin/../lib/hadoop-core-0.20.1.jar:bin/../lib/hector-0.6.0-14.jar:bin/../lib/high-scale-lib.jar:bin/../lib/ivy-2.1.0.jar:bin/../lib/jackson-core-asl-1.4.0.jar:bin/../lib/jackson-mapper-asl-1.4.0.jar:bin/../lib/jline-0.9.94.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-r917130.jar:bin/../lib/log4j-1.2.14.jar:bin/../lib/perf4j-0.9.12.jar:bin/../lib/slf4j-api-1.5.8.jar:bin/../lib/slf4j-log4j12-1.5.8.jar:bin/../lib/uuid-3.1.jar
org.apache.cassandra.thrift.CassandraDaemon
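
If you want to watch this yourself, something along these lines should
work (the pgrep pattern is just whatever matches your daemon; adjust to
taste):

    # the pgrep pattern below is just an example; match your own process
    watch -n 60 'ps -o pid,rss,vsz,args -p $(pgrep -f CassandraDaemon)'

Note that the RSS column ps reports is in kilobytes, which is where my
numbers below come from.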

Look at the second bold value: -Xmx is the maximum heap size Cassandra is
allowed to use, and it is set to 512 MB here, so the process should fit
easily into 1 GB. Now look at the first bold value, the resident set size
reported by ps: 570 MB > 512 MB. Moreover, if I come back a day later,
that first value will be even higher, probably around 610 MB. Keep in
mind that -Xmx only caps the Java heap; thread stacks, mmap'ed files and
other native allocations come on top of it, but the steady growth still
looks unbounded. In practice it grows to the point where I have to
restart the process, otherwise the Linux OOM killer shoots down other
programs so that Cassandra can expand its memory usage further...
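
Since you asked for specific parameters: in 0.6 the main memory knobs are
global in storage-conf.xml (they become per-column-family in 0.7). For a
512 MB heap I would start from something like the following; the values
are illustrative guesses, not settings I have validated:

    <!-- illustrative values, untested; the 0.6 defaults are 64 / 0.3 / 60 / auto -->
    <MemtableThroughputInMB>16</MemtableThroughputInMB>
    <MemtableOperationsInMillions>0.1</MemtableOperationsInMillions>
    <MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>
    <DiskAccessMode>standard</DiskAccessMode>

Lower memtable thresholds cap how much unflushed data sits on the heap,
and DiskAccessMode standard avoids mmap'ed SSTables, which otherwise show
up in the resident size on 64-bit JVMs (that may be part of what I am
seeing above). I would also keep the per-ColumnFamily KeysCached/RowsCached
attributes small.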

By the way, this is a call to other Cassandra users: am I the only one
encountering this problem?

Best regards,

Victor K.

2011/1/14 Rajat Chopra <rcho...@makara.com>

> Hello.
>
>
>
> According to  JVM heap size topic at
> http://wiki.apache.org/cassandra/MemtableThresholds , Cassandra would need
> at least 1G of memory to run. Is it possible to have a running Cassandra
> cluster with machines that have less than that memory… say 512M?
>
> I can live with slow transactions, no compactions, etc., but do not want an
> OutOfMemory error. The reason for a smaller bound for Cassandra is that I
> want to leave room for other processes to run.
>
>
>
> Please help with specific parameters to tune.
>
>
>
> Thanks,
>
> Rajat
>
>
>
