Hi,

I'm running a 2-node cluster in production (2 EC2 c1.medium instances,
CL.ONE, RF = 2, using RP).

1 - I get this kind of message quite often (roughly every 30 seconds):

WARN [ScheduledTasks:1] 2012-05-15 15:44:53,083 GCInspector.java (line
145) Heap is 0.8081418550931491 full.  You may need to reduce memtable
and/or cache sizes.  Cassandra will now flush up to the two largest
memtables to free up memory.  Adjust flush_largest_memtables_at
threshold in cassandra.yaml if you don't want Cassandra to do this
automatically
 WARN [ScheduledTasks:1] 2012-05-15 15:44:53,084 StorageService.java
(line 2645) Flushing CFS(Keyspace='xxx', ColumnFamily='yyy') to
relieve memory pressure

Is that a problem?
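
In case it helps the discussion, here is how I plan to check heap usage
and, if needed, lower the memtable ceiling; the values below are only
placeholders I am considering, not settings I have validated:

# check heap usage and other basic node info (nodetool ships with Cassandra)
nodetool -h 127.0.0.1 info

# cassandra.yaml: cap the total memtable space (the default is 1/3 of the heap);
# 1024 MB here is just an example value for illustration
memtable_total_space_in_mb: 1024

# conf/cassandra-env.sh: the heap size itself is set here, not in cassandra.yaml
# (a c1.medium only has 1.7 GB of RAM, so there is little room to grow the heap)
MAX_HEAP_SIZE="800M"
HEAP_NEWSIZE="200M"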

2 - I have shared two screenshots: the cluster performance (via OpsCenter)
and the hardware metrics (via AWS).

http://img337.imageshack.us/img337/6812/performance.png
http://img256.imageshack.us/img256/9644/aws.png

What do you think of these metrics? Are frequent compactions normal?
Is a 60-70% CPU load for 600 reads & writes/sec expected on this
hardware? Is there a way to optimize my cluster?
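
In case it is useful, these are the nodetool commands I can run to share
more numbers about compaction and thread-pool backlog:

# pending and active compactions on the node
nodetool -h 127.0.0.1 compactionstats

# thread pool statistics; pending ReadStage/MutationStage/FlushWriter tasks
# hint at whether the node is saturated
nodetool -h 127.0.0.1 tpstats

# per column family statistics (memtable sizes, read/write latencies)
nodetool -h 127.0.0.1 cfstats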

Here are the main settings from my cassandra.yaml:

flush_largest_memtables_at: 0.75
reduce_cache_sizes_at: 0.85
reduce_cache_capacity_to: 0.6
concurrent_reads: 32
concurrent_writes: 32
commitlog_total_space_in_mb: 4096
rpc_server_type: sync (I am going to switch to hsha, since we are
running Ubuntu; see the snippet below)
#concurrent_compactors: 1 (commented out, so I use the default)
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
rpc_timeout_in_ms: 10000

Other tuning options (like many of the ones above) are left at their defaults.
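
For reference, the hsha change mentioned above would just be this one
line in cassandra.yaml (applied with a rolling restart); this is a plan,
not something I have benchmarked yet:

# switch the Thrift RPC server from one thread per connection (sync)
# to half synchronous / half asynchronous, which uses fewer threads
# and less memory per client connection
rpc_server_type: hsha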

Any advice or comments would be appreciated :).

Alain
