We're seeing a number of possible issues under high write load, and we're not
sure whether we need more nodes or whether the existing nodes just aren't tuned
correctly.

We're using 4 EC2 xlarge instances to support 4 medium instances. We're
getting about 10k inserts/sec, but after about 10 minutes it drops to about
7k/sec, which seems to line up with these messages:

WARN [MemoryMeter:1] 2013-05-29 21:04:05,462 Memtable.java (line 222)
setting live ratio to minimum of 1.0 instead of 0.08283890630659889


and

WARN [ScheduledTasks:1] 2013-05-29 21:24:07,732 GCInspector.java (line 142)
Heap is 0.7554059480798656 full.  You may need to reduce memtable and/or
cache sizes.  Cassandra will now flush up to the two largest memtables to
free up memory.  Adjust flush_largest_memtables_at threshold in
cassandra.yaml if you don't want Cassandra to do this automatically
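For what it's worth, the heap/flush-related knobs in cassandra.yaml are still at
what we believe are the stock 1.2 values; roughly this (a sketch from memory,
not copied off a node):

# flush the largest memtables when heap usage crosses this fraction
# (matches the 0.75 threshold referenced in the warning above)
flush_largest_memtables_at: 0.75
# total space for memtables; left commented out, so it defaults to 1/3 of the heap
# memtable_total_space_in_mb: 2048    (example value; ours is unset)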


We've made sure we're using JNA and have left most settings at their defaults:
no row cache, and default key cache and bloom filter settings.
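Concretely, the cache lines in cassandra.yaml should look like the stock file
(again a sketch of the defaults rather than a copy from our config):

# empty means auto: min(5% of heap, 100 MB)
key_cache_size_in_mb:
# row cache disabled
row_cache_size_in_mb: 0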

Also, we start to get timeouts on the clients after about 15 minutes of
hammering.

We're using the latest JNA and separate ephemeral drives for the commit log and
data directories.
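That split is just the usual pair of cassandra.yaml settings (mount points
below are illustrative, not our exact paths):

# commit log on its own ephemeral drive so sequential commitlog writes
# don't compete with data/compaction I/O
commitlog_directory: /mnt/commitlog
# SSTable data on a separate ephemeral drive
data_file_directories:
    - /mnt/data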

We're running Cassandra 1.2.5.

Also, we see compactions backing up:

pending tasks: 6
          compaction type        keyspace   column family       completed           total      unit  progress
               Compaction           svbks        ttevents       691259241      3621003064     bytes    19.09%
               Compaction           svbks        ttevents       135890464       505047776     bytes    26.91%
               Compaction           svbks        ttevents       225229105      2531271538     bytes     8.90%
               Compaction           svbks        ttevents      1312041410      5409348928     bytes    24.26%
Active compaction remaining time :   0h09m38s


OpsCenter says we're only using 20% of the disk. The overall compaction
remaining time keeps going up :{
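Compaction settings are also at their defaults as far as we know; for
reference, the relevant 1.2 knobs are roughly this (a sketch of the shipped
defaults, nothing we've tuned):

# throttles total compaction I/O across the node; 0 disables throttling
compaction_throughput_mb_per_sec: 16
# left commented out, so it defaults to min(number of disks, number of cores)
# concurrent_compactors: 1
multithreaded_compaction: false

If bumping the throttle is the recommended first step, we can do that at
runtime with nodetool setcompactionthroughput.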

Advice welcome/TIA

--Darren
