I am using 1.0.5. The logs suggest that it was a single instance of
failure, and I'm unable to reproduce it.
From the logs, in a span of 30 seconds, heap usage went from 4.8 GB to
8.8 GB, with stop-the-world GC running 20 times. I believe that ParNew
was unable to clean up memory due to some problem.
Not commenting on the GC advice, but Cassandra memory usage has improved a lot
since that was written. I would take a look at what was happening and see if
tweaking the Cassandra config helps before modifying GC settings.
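As a starting point for "tweaking the config first", a sketch of the memory-pressure knobs in a 1.0-era cassandra.yaml (values shown are my recollection of the shipped defaults; verify against your own file, and note memtable_total_space_in_mb normally ships commented out, defaulting to one third of the heap):

```yaml
# Shrink key/row caches when heap usage passes this fraction after a full GC.
reduce_cache_sizes_at: 0.85
# ...and cut cache capacity to this fraction of its configured size.
reduce_cache_capacity_to: 0.6
# Total space allowed for all memtables; defaults to 1/3 of the heap if unset.
# memtable_total_space_in_mb: 4096
```

With a 10:1 write/read ratio, the memtable ceiling is usually the more interesting knob than the cache settings.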
> "GCInspector.java(line 88): Heap is .9934 full." Is this expected, or
> should I adjust my flush_largest_memtable_at variable?
Looking at http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html
and the server logs, I think my situation is this:
"The default Cassandra settings have the highest peak heap usage. The
problem with this is that it raises the possibility that during the
CMS cycle, a collection of the young generation..."
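For context, these are the CMS/ParNew flags the stock cassandra-env.sh sets in that era, which is where the blog's tuning applies. The usual lever for the failure mode described (young-gen collection failing during a CMS cycle) is starting CMS earlier by lowering CMSInitiatingOccupancyFraction; 75 is the shipped value, and any lower number is an assumption to test, not a recommendation:

```shell
# From a 1.0-era cassandra-env.sh -- verify against your own copy.
JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
# Start the CMS cycle at this old-gen occupancy instead of letting the JVM decide;
# lowering it gives CMS more headroom to finish before the heap fills.
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```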
To clarify things:
Our setup consists of 8 nodes with 32 GB RAM each,
a max heap size of 12 GB,
and a heap new size of 1.6 GB.
The load on our nodes is a write/read ratio of 10, with 6 main ColumnFamilies.
The flushes of column families occur every hour, with SSTable
sizes of around 50-100 MB.
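In cassandra-env.sh terms, the sizing described above would look like the sketch below (these are this cluster's values, not recommendations). Worth noting: 1.6 GB of new generation against a 12 GB heap means ParNew has to absorb a heavy write burst in a comparatively small young gen, and 12 GB is a large heap for CMS in this era.

```shell
# cassandra-env.sh overrides matching the setup described above.
# Uncommenting these disables the script's automatic heap calculation.
MAX_HEAP_SIZE="12G"
HEAP_NEWSIZE="1600M"
```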
Hi,
My Cassandra node ran out of heap memory with this message:
"GCInspector.java(line 88): Heap is .9934 full." Is this expected, or
should I adjust my flush_largest_memtable_at variable?
Also, one change I made to my cluster was adding 5 ColumnFamilies which are empty.
Should empty ColumnFamilies cause this?
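For reference, flush_largest_memtable_at lives in cassandra.yaml, and 0.75 is the 1.0 default. It is an emergency valve rather than a fix: a heap at .9934 means the valve either already fired or could not free memory fast enough, so lowering it only buys headroom.

```yaml
# cassandra.yaml: when heap usage exceeds this fraction after a full GC,
# flush the largest memtable to relieve pressure (1.0 default shown).
flush_largest_memtable_at: 0.75
```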