[ https://issues.apache.org/jira/browse/CASSANDRA-1426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-1426.
---------------------------------------

    Resolution: Not A Problem

As mentioned on the ML, there is a similar warning logged in 0.7 for rows that
exceed the in-memory compaction threshold:

    logger.info(String.format("Compacting large row %s (%d bytes) incrementally",
                              FBUtilities.bytesToHex(rows.get(0).getKey().key), rowSize));

(Definitely in favor of a smarter cache, too, in principle.)
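
For illustration only, that log line comes from a simple size check at compaction
time. The sketch below is not Cassandra source: the class name, the 64 MB limit,
the bytesToHex helper, and the use of java.util.logging are assumptions standing
in for the real in-memory compaction limit and FBUtilities code.

    import java.util.logging.Logger;

    public class LargeRowLogSketch
    {
        private static final Logger logger = Logger.getLogger(LargeRowLogSketch.class.getName());

        // Assumed stand-in for the 0.7 in-memory compaction limit (64 MB here).
        private static final long IN_MEMORY_COMPACTION_LIMIT_BYTES = 64L * 1024 * 1024;

        // Log the row key and size when a row is too large to compact in memory.
        static void maybeLogIncrementalCompaction(byte[] key, long rowSize)
        {
            if (rowSize > IN_MEMORY_COMPACTION_LIMIT_BYTES)
                logger.info(String.format("Compacting large row %s (%d bytes) incrementally",
                                          bytesToHex(key), rowSize));
        }

        // Simple hex encoding, analogous in spirit to FBUtilities.bytesToHex.
        static String bytesToHex(byte[] bytes)
        {
            StringBuilder sb = new StringBuilder(bytes.length * 2);
            for (byte b : bytes)
                sb.append(String.format("%02x", b));
            return sb.toString();
        }

        public static void main(String[] args)
        {
            maybeLogIncrementalCompaction("user:12345".getBytes(), 128L * 1024 * 1024);
        }
    }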

> Bring back RowWarningThresholdInMB and set it low
> -------------------------------------------------
>
>                 Key: CASSANDRA-1426
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1426
>             Project: Cassandra
>          Issue Type: New Feature
>    Affects Versions: 0.7 beta 1
>            Reporter: Edward Capriolo
>
> The problem with big rows in 0.6 and 0.7 is that they tend to cause OOM with the
> row cache and other memory problems. CFStats shows us the maximum row size, but
> it does not show which row it is. Applications that have to scan all the data on
> a node to turn up a big row are intensive, and while they run they significantly
> lower the cache hit rate.
> Even though Cassandra 0.7 can accommodate larger rows than 0.6.x, most use cases
> would never have rows that go over 2 MB.
> Please consider bringing this feature back and setting it low, e.g.
> <RowWarningThresholdInMB>10</RowWarningThresholdInMB>. With this, admins can
> monitor the logs and point out large rows before they get out of hand and cause
> mysterious crashes.
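
For reference, the kind of check the request above describes might look roughly
like the sketch below. This is not the old 0.6 implementation; the class and
method names, the threshold constant, the check site, and the use of
java.util.logging are assumptions made for illustration.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class RowWarningSketch
    {
        private static final Logger logger = Logger.getLogger(RowWarningSketch.class.getName());

        // Assumed stand-in for <RowWarningThresholdInMB>10</RowWarningThresholdInMB>.
        private static final long ROW_WARNING_THRESHOLD_BYTES = 10L * 1024 * 1024;

        // Warn with the offending key when a serialized row crosses the threshold;
        // the check site (e.g. memtable flush or compaction) is an assumption here.
        static void maybeWarnLargeRow(String keyspace, String columnFamily, String key, long rowSize)
        {
            if (rowSize > ROW_WARNING_THRESHOLD_BYTES)
                logger.log(Level.WARNING,
                           String.format("Row %s in %s/%s is %d bytes, over the %d-byte warning threshold",
                                         key, keyspace, columnFamily, rowSize, ROW_WARNING_THRESHOLD_BYTES));
        }

        public static void main(String[] args)
        {
            maybeWarnLargeRow("Keyspace1", "Standard1", "hot-row", 25L * 1024 * 1024);
        }
    }

With a warning like this in place, admins could grep the logs for the offending
key instead of scanning every row on the node to find it.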

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
