[ https://issues.apache.org/jira/browse/CASSANDRA-13019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971135#comment-16971135 ]

maxwellguo commented on CASSANDRA-13019:
----------------------------------------

[~jjirsa] I have just reviewed the code and left some comments. Looking forward 
to your feedback. :)

It would also be useful to have a nodetool command so that the rate can be set 
dynamically. When we want to change the snapshot-creation or file-deletion rate, 
restarting the node is too expensive.
I think we can open a new issue for that, since it is a different kind of problem, 
and it should be fixed after this one. I see you have already added the two 
method interfaces in StorageServiceMBean. :)
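
To make the discussion concrete, here is a minimal sketch of what such a 
getter/setter pair could look like (the interface name, method names, and the 
links-per-second unit are my assumptions, not necessarily what the patch uses; 
in the patch the methods would live on StorageServiceMBean itself):

{code:java}
public interface SnapshotThrottleMBean
{
    /**
     * Hypothetical setter: change the snapshot rate limit (hard links created /
     * snapshot files deleted per second) at runtime; 0 means unthrottled.
     */
    void setSnapshotLinksPerSecond(long throttle);

    /** Hypothetical getter: return the currently configured rate limit. */
    long getSnapshotLinksPerSecond();
}
{code}

A nodetool subcommand would then only need to call the setter over JMX, so the 
rate could be changed without restarting the node.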

> Improve clearsnapshot to delete the snapshot files slowly 
> ----------------------------------------------------------
>
>                 Key: CASSANDRA-13019
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13019
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Legacy/Core
>            Reporter: Dikang Gu
>            Assignee: Jeff Jirsa
>            Priority: Normal
>              Labels: pull-request-available
>             Fix For: 4.x
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In our environment, we create snapshots for backups, and after the backup 
> finishes we run {{clearsnapshot}} to delete the snapshot files. At that point 
> we may have thousands of files to delete, which causes a sudden disk usage 
> spike. As a result, we experience a spike of dropped messages from Cassandra.
> I think we should implement something like {{slowrm}} to delete the snapshot 
> files slowly and avoid the sudden disk usage spike.
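
The "delete slowly" idea in the description above can be illustrated by putting 
a rate limiter in front of each per-file delete. A minimal sketch using Guava's 
RateLimiter (the class name, field names, and the 10 files/second figure are 
illustrative only):

{code:java}
import java.io.File;
import com.google.common.util.concurrent.RateLimiter;

public class ThrottledSnapshotRemover
{
    // Illustrative limit: delete at most 10 snapshot files per second.
    private final RateLimiter limiter = RateLimiter.create(10.0);

    public void removeSnapshotFiles(Iterable<File> snapshotFiles)
    {
        for (File f : snapshotFiles)
        {
            limiter.acquire(); // block until a permit is available
            f.delete();        // then unlink one file at the throttled rate
        }
    }
}
{code}

Spreading out the unlink calls this way avoids hitting the filesystem with 
thousands of deletes in one burst, which is what causes the disk usage spike and 
the dropped messages described above.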


