[ https://issues.apache.org/jira/browse/CASSANDRA-157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-157:
-------------------------------------

    Attachment: 157.patch

Found the (a?) problem.

memtablesPendingFlush used NonBlockingHashSets to store the memtables-to-flush.
But NBHS uses a NonBlockingHashMap under the hood, which, when remove() is
called, assigns a tombstone value to the key instead of actually removing the
entry -- so the removed memtables stay referenced and cannot be garbage
collected.  (See
http://sourceforge.net/tracker/?func=detail&aid=2828100&group_id=194172&atid=948362.)
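The leak pattern can be sketched as follows. This is a minimal stand-in, not
the real NonBlockingHashMap internals: remove() overwrites the value with a
sentinel instead of deleting the map entry, so the key (here, the memtable)
stays strongly referenced after its "removal".

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the tombstone behavior described above.
public class TombstoneSketch {
    private static final Object TOMBSTONE = new Object();
    private final Map<Object, Object> backing = new HashMap<>();

    public void add(Object key)    { backing.put(key, Boolean.TRUE); }

    // "Removes" the key by tombstoning its value; the key itself is retained.
    public void remove(Object key) { backing.put(key, TOMBSTONE); }

    // Entries still held by the backing map, including tombstoned ones.
    public int retainedEntries()   { return backing.size(); }

    public static void main(String[] args) {
        TombstoneSketch set = new TombstoneSketch();
        for (int i = 0; i < 1000; i++) {
            Object memtable = new Object();
            set.add(memtable);
            set.remove(memtable);   // logically gone, but still referenced
        }
        System.out.println(set.retainedEntries()); // prints 1000
    }
}
```

Under sustained inserts every flushed memtable is retained this way, so heap
use grows without bound even though the set looks empty.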


> make cassandra not allow itself to run out of memory during sustained inserts
> -----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-157
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-157
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: daishi
>            Assignee: Jonathan Ellis
>         Attachments: 157.patch, Cassandra-157_Unregister_Memtable_MBean.diff
>
>
> Tv on IRC pointed out to me that the issue that I've been encountering
> is probably point 2. in this roadmap:
>     
> http://www.mail-archive.com/[email protected]/msg00160.html
> I was unable to find any existing issue for this topic, so I'm creating a new 
> one.
> Since this issue would block our use of Cassandra I'm happy to look into it,
> but if this is a known issue perhaps there's already a plan for addressing it
> that could be clarified?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.