Bring back RowWarningThresholdInMB and set it low
-------------------------------------------------

                 Key: CASSANDRA-1426
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1426
             Project: Cassandra
          Issue Type: New Feature
    Affects Versions: 0.7 beta 1
            Reporter: Edward Capriolo


The problem with big rows in 0.6 and 0.7 is that they tend to cause OOM with the 
row cache and other memory problems. CFStats shows us the MaximumSizedRow, but it 
does not show which row that is. Applications that have to scan all the data on 
a node to turn up a big row are I/O-intensive, and while they are running they 
significantly lower the cache hit rate.

Even though Cassandra 0.7 can accommodate larger rows than 0.6.x, most use cases 
would never have rows that go over 2 MB.

Please consider bringing this feature back and setting it low, e.g. 
<RowWarningThresholdInMB>10</RowWarningThresholdInMB>. With this, admins can 
monitor the logs and point out large rows before they get out of hand and cause 
mysterious crashes.
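To make the request concrete, here is a rough sketch (not Cassandra's actual 
compaction code) of the kind of check this asks for: compare each row's 
serialized size against the configured threshold and log the offending key. The 
class and parameter names (RowSizeWarner, rowWarningThresholdInMB) are made up 
for illustration; only the slf4j logging facade is something Cassandra already 
depends on.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public final class RowSizeWarner
    {
        private static final Logger logger = LoggerFactory.getLogger(RowSizeWarner.class);

        // Threshold corresponding to <RowWarningThresholdInMB>10</RowWarningThresholdInMB>
        private final long thresholdBytes;

        public RowSizeWarner(int rowWarningThresholdInMB)
        {
            this.thresholdBytes = rowWarningThresholdInMB * 1024L * 1024L;
        }

        // Log the offending key when a row's serialized size crosses the threshold,
        // so admins can find the row in the logs instead of scanning the whole node.
        public void check(String keyspace, String columnFamily, String key, long rowSizeBytes)
        {
            if (rowSizeBytes > thresholdBytes)
                logger.warn(String.format(
                    "Large row %s/%s/%s: %d bytes exceeds warning threshold of %d bytes",
                    keyspace, columnFamily, key, rowSizeBytes, thresholdBytes));
        }
    }

Something like this, called wherever a full row's size is known (for example 
during compaction), would be enough for an admin to grep the logs for the keys 
of oversized rows.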

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
