Hi Experts
Under massive write load, what would be the best value for Cassandra's
*flush_largest_memtables_at* setting? Yesterday I got an OOM exception on
one of our production Cassandra nodes under heavy write load, within a
five-minute window.
I changed the above setting to 0.45 and also changed t
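For context, this setting is an emergency "pressure valve" in cassandra.yaml (present in Cassandra 0.8/1.x): when heap usage after a full GC exceeds this fraction of the maximum heap, Cassandra flushes the largest memtables to free memory. A minimal sketch of the change described above (the 0.45 value is the one tried in this thread, not a recommendation):

```yaml
# cassandra.yaml (Cassandra 0.8/1.x) -- emergency pressure valve.
# When heap usage after a full GC exceeds this fraction of the max heap,
# Cassandra flushes the largest memtables. The default is 0.75; the value
# below is the lower threshold tried in this thread.
flush_largest_memtables_at: 0.45
```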
Thanks Peter for the replies.
That was a typo earlier; it should have been "getting". I checked
DC2 (which has a replication factor of 0) and noticed that no SSTables
were created.
I use a Java Hector sample program to insert data into the keyspace. After
I insert a data item, I
1) Log in to one of n
Thanks, Aaron, for the perfect explanation. I've decided to go with
automatic compaction. Thanks again.
On Wed, Jan 25, 2012 at 11:19 AM, aaron morton wrote:
> The issue with major / manual compaction is that it creates a single file.
> One big file.
>
> That one file will not be compacted unless there