Hello,

  What version of Cassandra are you running?

  If it's 2.0, we recently experienced something similar with 8447 [1],
which 8485 [2] should hopefully resolve.

  Please note that 8447 itself is not related to tombstones, though
tombstone processing can put a lot of pressure on the heap as well. What
makes you think you have a lot of tombstones in that one particular table?
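  One way to check is nodetool cfstats, which on recent 2.0.x releases
reports per-table tombstone counters (the exact metric names vary by
version). A sketch, assuming your keyspace is named "media" as the
compaction output below suggests:

```shell
# Per-table statistics; on recent 2.0.x look for tombstone counters such
# as "Average tombstones per slice (last five minutes)" in the output.
# Keyspace/table names are assumed from the thread; on older releases you
# may need to run plain "nodetool cfstats" and search the full output.
nodetool cfstats media.media_tracks_raw
```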

  [1] https://issues.apache.org/jira/browse/CASSANDRA-8447
  [2] https://issues.apache.org/jira/browse/CASSANDRA-8485
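  As a quick sanity check on the GCInspector lines you pasted: the heap is
hovering around 4.4 GB of an 8 GB max, i.e. roughly 55% occupancy, so the
heap isn't full; the constant ParNew churn is the more interesting signal.
A one-liner to compute the ratio from the logged numbers:

```shell
# Heap occupancy from a GCInspector line: "used" bytes / "max" bytes,
# using the numbers quoted in the log below.
used=4400928736   # bytes used after GC (from the log)
max=8000634880    # max heap in bytes (from the log)
awk -v u="$used" -v m="$max" 'BEGIN { printf "%.1f%%\n", 100 * u / m }'
# prints 55.0%
```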

Jonathan

Jonathan Lacefield

Solution Architect | (404) 822 3487 | jlacefi...@datastax.com

On Tue, Dec 16, 2014 at 2:04 PM, Arne Claassen <a...@emotient.com> wrote:
>
> I have a three node cluster where each node has been sitting at a load of
> 4 and 100% CPU utilization (although 92% nice) for the last 12 hours, ever
> since some significant writes finished. I'm trying to determine what
> tuning I should be doing to get it out of this state. The debug log is
> just an endless series of:
>
> DEBUG [ScheduledTasks:1] 2014-12-16 19:03:35,042 GCInspector.java (line
> 118) GC for ParNew: 166 ms for 10 collections, 4400928736 used; max is
> 8000634880
> DEBUG [ScheduledTasks:1] 2014-12-16 19:03:36,043 GCInspector.java (line
> 118) GC for ParNew: 165 ms for 10 collections, 4440011176 used; max is
> 8000634880
> DEBUG [ScheduledTasks:1] 2014-12-16 19:03:37,043 GCInspector.java (line
> 118) GC for ParNew: 135 ms for 8 collections, 4402220568 used; max is
> 8000634880
>
> iostat shows virtually no I/O.
>
> Compaction may enter into this, but i don't really know what to make of
> compaction stats since they never change:
>
> [root@cassandra-37919c3a ~]# nodetool compactionstats
> pending tasks: 10
>    compaction type   keyspace              table    completed         total   unit  progress
>         Compaction      media   media_tracks_raw    271651482     563615497  bytes    48.20%
>         Compaction      media   media_tracks_raw     30308910   21676695677  bytes     0.14%
>         Compaction      media   media_tracks_raw   1198384080    1815603161  bytes    66.00%
> Active compaction remaining time :   0h22m24s
>
> 5 minutes later:
>
> [root@cassandra-37919c3a ~]# nodetool compactionstats
> pending tasks: 9
>    compaction type   keyspace              table    completed         total   unit  progress
>         Compaction      media   media_tracks_raw    271651482     563615497  bytes    48.20%
>         Compaction      media   media_tracks_raw     30308910   21676695677  bytes     0.14%
>         Compaction      media   media_tracks_raw   1198384080    1815603161  bytes    66.00%
> Active compaction remaining time :   0h22m24s
>
> Sure, the pending tasks went down by one, but the rest is identical.
> media_tracks_raw likely has a bunch of tombstones (I can't figure out how
> to get stats on that).
>
> Is this behavior something that indicates I need more heap or a larger
> new generation? Should I be manually running compaction on tables with
> lots of tombstones?
>
> Any suggestions or places to educate myself better on performance tuning
> would be appreciated.
>
> arne
>
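
On the manual-compaction question: you can trigger one with nodetool
compact, but note that a major compaction of a size-tiered table merges
everything into a single large SSTable, which can make future automatic
compactions less effective, so it's usually a last resort. A minimal
sketch, assuming the keyspace is "media":

```shell
# Force a (major) compaction of one table. On size-tiered compaction this
# produces one large SSTable per node, so treat it as a last resort.
# Keyspace/table names are assumed from the thread.
nodetool compact media media_tracks_raw
```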
