Hey Peter,

On 14 December 2010 20:19, Peter Schuller <peter.schul...@infidyne.com> wrote:
> So, now that I get that we have two different cases ;)

 Yup.  My problem is Java / environment based, occurring
 after several weeks on 0.6.6 instances.  Our thread-hijacking
 friend / learned colleague's problem is 0.7-based, occurs
 within a few minutes, and appears to be related to something
 within the JVM.  (And no .. in this case, it's not someone
 who works with me, but an understandable assumption given
 the context.)

 Flipping to standard IO mode might be too big a cost for
 us at the moment.  This particular system is in production,
 and it's probably too expensive to set up a parallel system
 and run it long enough to determine whether we'd see the
 same trends.  I'm increasingly inclined to hang out for a
 stable 0.7.x release before fiddling with our production
 system.
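
 (For anyone else following along: going from memory here - so
 treat this as a sketch and check your own config - I believe
 the 0.6 switch Peter is referring to is the DiskAccessMode
 element in conf/storage-conf.xml, something like:

<!-- 'auto' usually resolves to mmap on 64-bit JVMs, if I remember right -->
<DiskAccessMode>standard</DiskAccessMode>

 followed by a rolling restart - which is exactly the sort of
 disruption we're trying to avoid right now.)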


> Re-reading your part of this thread, it seems you're not even having
> OOM issues in Cassandra itself, right? You're just making (1) the
> observation that memory graphs show little left, and (2) that nodetool
> etc start failing on start with OOM?

 Pretty much, yes.

 I've got two new graphs that may be of casual interest.  The first
 shows memory usage over the past two months or so, including
 the recent Cassandra restart a week ago:

 http://www.imagebam.com/image/aed254111011473

 The other graph (see below) might be more insightful.


> (2) Enable GC logging with e.g. -Xloggc:/path/to/log and -XX:+PrintGC
> -XX:+PrintGCDetails -XX:+PrintGCTimestamps.

 Perhaps unrelated - though I don't suspect it was a GC problem
 (having suffered several weeks of GC pain) - Cassandra seemed
 quite happy to keep doing actual reads & writes while it was
 in this state; by "this state" I mean the one described further
 down, where nodetool and jmap refuse to start.
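
 For what it's worth, when we do get around to enabling that, my
 understanding is those flags just get appended to JVM_OPTS in
 bin/cassandra.in.sh on 0.6 (file and variable names from memory,
 so please verify against your own install):

# appended to JVM_OPTS in bin/cassandra.in.sh (location assumed, not verified)
# note it's PrintGCTimeStamps with a capital S, or the JVM refuses to start
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log \
          -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

 and then it's a matter of watching the log for long pauses or
 promotion failures.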


> (3) Observe top to confirm JVM memory use.

 I can confirm this was showing as 7 / 4.1 for VIRT / RES across
 all 16 boxes .. up until we did the rolling restart of Cassandra.
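
 If it's useful to anyone, something along these lines is what
 we'll probably use to track growth between restarts (a rough
 sketch only - the pgrep pattern and log path are placeholders):

# log virtual and resident size (kB) of the Cassandra JVM every 5 minutes
while true; do
    echo "$(date +%s) $(ps -o vsz=,rss= -p "$(pgrep -d, -f CassandraDaemon)")" \
        >> /var/tmp/cassandra-mem.log
    sleep 300
done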

> (4) Observe the +/- buffers in 'free' output. For example:

 Okay - here's the output of one of our 16 boxes right now:

r...@prawn-07:~# free
             total       used       free     shared    buffers     cached
Mem:       7864548    7813220      51328          0      77064    2683440
-/+ buffers/cache:    5052716    2811832
Swap:            0          0          0
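
 Reading that the way I understand 'free': of the 7,813,220 kB
 "used", 77,064 kB is buffers and 2,683,440 kB is page cache,
 which leaves 7,813,220 - 77,064 - 2,683,440 = 5,052,716 kB
 (roughly 4.8 GB) actually held by processes - consistent with
 the 4 GB heap plus JVM overhead - and about 2.7 GB that the
 kernel can reclaim on demand.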

 Apologies for the lack of granularity in this graph, but the
 numbers at the bottom should make it fairly easy to see what
 matches what.   Memory usage graph for the past hour:

 http://www.imagebam.com/image/b06921111011462

 This is more for your interest than anything else - I'm
 not sure there's a 'solution' for the problem as it stands,
 and I'm aware that I'm on a 2-rev old release that's about
 to be major-numbered into obscurity in the next month or
 two anyway.


> One hypothesis: Is your data set slowly increasing, and the growth in
> memory use over time is just a result of the data set  being larger,
> which would be reflected in the amount of data in page cache?

 Our data is increasing, certainly.  Much of that is overwrites,
 and we purge those after three days via GCGraceSeconds, but I
 don't have any stats on the ratio, or indeed the raw growth rate.
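
 (For context - again from memory, so a sketch rather than
 gospel - that three days is just the global setting in
 storage-conf.xml:

<!-- 3 days = 3 x 86400 = 259200 seconds; element name/location from memory -->
<GCGraceSeconds>259200</GCGraceSeconds>

 and the tombstones only actually get dropped at the first
 compaction after that window expires.)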

> If you
> have no *other* significant activity that fills page cache (including
> log files etc), page cache would be unused otherwise and then grow in
> occupancy as your data set increases.

 Just checked, and no - /var/log is a modest 14MB, and we run nothing
 but Cassandra on these boxes.  Well, a monitoring agent, but it doesn't
 do any local logging, just some network activity.

> And to be clear: What is the actual negative observation that you have
> made - do you see high latencies, saturation on disk I/O etc or is
> this purely about seeming memory growth on graphs and observations of
> memory usage by the JVM in top?

 Ah, okay, sorry.  To reiterate:
 o  nodetool ceases to run - it won't even let me drain that
     instance (see the invocations sketched after this list)
 o  jmap won't run either
 o  a consequence of nodetool not working is that we can't monitor
     the health of our cluster, which is a scary place to be for us
 o  all 16 boxes were in the same state last Friday, as they'd all
     been restarted around the same time (roughly 6 weeks earlier)
 o  performance on boxes approaching this state seems to be 'ok',
     but I can't give anything quantitative on that front (we don't
     make any performance-monitoring calls at the moment)
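
 For the record, the invocations I mean are just the usual ones
 (the JMX port is from memory - 0.6 defaults to 8080 on our
 boxes - so adjust to taste):

bin/nodetool -h <host> -p 8080 ring      # token ring / node liveness
bin/nodetool -h <host> -p 8080 tpstats   # thread pool backlogs
bin/nodetool -h <host> -p 8080 drain     # flush and stop accepting writes before a restart

 All of those die with the OOM mentioned above once a box gets
 into this state.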

 My big concern here is that because these are swapless boxes, and
 the actual resident size of the java process matched the physical
 memory (to within some tens of MB), I was very close to having all
 the boxes OOM'ing - at the OS level, I mean, via the kernel's OOM
 killer, not within the JVM.

 I thought (hoped?) that because we're on 7GB machines, using a
 4GB Xmx, and have been running 0.6.6 instances for several weeks,
 this would be a common enough scenario that others might have
 seen something similar.  Alas . . .    ;)

 j.
