[ https://issues.apache.org/jira/browse/CASSANDRA-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13127403#comment-13127403 ]

Sylvain Lebresne commented on CASSANDRA-2524:
---------------------------------------------

This looks good to me, though one thing that goes away with this patch is that
for column families that use the BoundedScanner, we no longer clean up the row
cache. This is not a really big deal, but it's perfectly possible that,
following the cleanup, rows that are still on the node get evicted from the
cache before rows that are not in the node's ranges, so it's a slight
regression. Maybe we could simply add a cache cleanup that iterates through the
cache and keeps only the keys in range? Though one may argue this is on the
fringe of this ticket, I'd prefer we add it here so that we commit a pure
improvement, rather than an improvement with a slight regression to be fixed
later.
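
To make that concrete, here is a minimal, self-contained Java sketch of such a
cache cleanup, assuming the row cache can be modelled as a plain map keyed by
row key. RowCacheCleanupSketch, TokenRange and tokenOf are illustrative names
only and do not correspond to Cassandra's internal classes; the real cleanup
would go through the partitioner and the node's replicated ranges. The point is
just the shape of the loop: walk the cached keys once and evict anything whose
token is not covered by a local range.

    import java.util.Collection;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical standalone sketch: not Cassandra's actual classes or APIs.
    public class RowCacheCleanupSketch {

        // A non-wrapping token range (left, right]; real ring ranges can wrap.
        static final class TokenRange {
            final long left, right;
            TokenRange(long left, long right) { this.left = left; this.right = right; }
            boolean contains(long token) { return token > left && token <= right; }
        }

        // Placeholder for hashing a row key to its token on the ring.
        static long tokenOf(String key) {
            return key.hashCode();
        }

        // Walk the cache and drop every key whose token is outside all local ranges.
        static void cleanupRowCache(Map<String, byte[]> rowCache,
                                    Collection<TokenRange> localRanges) {
            rowCache.keySet().removeIf(key -> {
                long token = tokenOf(key);
                return localRanges.stream().noneMatch(r -> r.contains(token));
            });
        }

        public static void main(String[] args) {
            Map<String, byte[]> cache = new ConcurrentHashMap<>();
            cache.put("alice", new byte[0]);
            cache.put("bob", new byte[0]);

            // Pretend this node only owns tokens in (0, 1_000_000]; keys hashing
            // outside the owned range are dropped from the cache.
            cleanupRowCache(cache, List.of(new TokenRange(0L, 1_000_000L)));
            System.out.println("still cached: " + cache.keySet());
        }
    }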
                
> Use SSTableBoundedScanner for cleanup
> -------------------------------------
>
>                 Key: CASSANDRA-2524
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2524
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Stu Hood
>            Assignee: Stu Hood
>            Priority: Minor
>              Labels: lhf
>         Attachments: 
> 0001-Use-a-SSTableBoundedScanner-for-cleanup-and-improve-cl.txt, 
> 0002-Oops.-When-indexes-or-counters-are-in-use-must-continu.txt
>
>
> SSTableBoundedScanner seeks rather than scanning through rows, so it would be 
> significantly more efficient than the existing per-key filtering that cleanup 
> does.
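
For context on the quoted description, here is a hedged, standalone Java sketch
contrasting the two approaches, assuming a list of rows sorted by token stands
in for an sstable's on-disk order: perKeyFilter touches every row and filters,
while boundedScan seeks to the start of the range and stops at its end. The
names and data layout are illustrative only and do not mirror the actual
SSTableScanner / SSTableBoundedScanner implementations.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical standalone sketch: not Cassandra's scanner APIs.
    public class BoundedScanSketch {

        // Rows sorted by token, standing in for an sstable's on-disk order.
        record Row(long token, String key) {}

        // Existing approach: read every row, keep only those inside [start, end].
        static List<Row> perKeyFilter(List<Row> sstable, long start, long end) {
            List<Row> kept = new ArrayList<>();
            for (Row r : sstable) {                       // touches every row
                if (r.token() >= start && r.token() <= end) kept.add(r);
            }
            return kept;
        }

        // Bounded approach: seek (binary search) to the range start, stop at its end.
        static List<Row> boundedScan(List<Row> sstable, long start, long end) {
            int i = Collections.binarySearch(sstable, new Row(start, ""),
                                             Comparator.comparingLong(Row::token));
            if (i < 0) i = -i - 1;                        // first row at or after 'start'
            List<Row> kept = new ArrayList<>();
            for (; i < sstable.size() && sstable.get(i).token() <= end; i++) {
                kept.add(sstable.get(i));                 // reads only in-range rows
            }
            return kept;
        }

        public static void main(String[] args) {
            List<Row> sstable = List.of(new Row(1, "a"), new Row(5, "b"),
                                        new Row(9, "c"), new Row(12, "d"));
            System.out.println(perKeyFilter(sstable, 4, 10));  // rows b and c
            System.out.println(boundedScan(sstable, 4, 10));   // same result, fewer reads
        }
    }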

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira