Nodetool clearsnapshot

2015-01-13 Thread Batranut Bogdan
I have read that snapshots are basically symlinks and they do not take that much
space. Why does running nodetool clearsnapshot then free a lot of space? I am
seeing GBs freed...

Re: Nodetool clearsnapshot

2015-01-13 Thread Jan Kesten

Hi,

I have read that snapshots are basically symlinks and they do not take
that much space.
Why does running nodetool clearsnapshot then free a lot of space? I am
seeing GBs freed...


Both together make sense. Creating a snapshot just creates links for all
files under the snapshot directory. This is very fast and takes no extra
space. But those links are hard links, not symbolic ones.


After a while your running cluster will compact some of its SSTables,
writing a new one and deleting the old ones. For example, if you had
SSTable1..4 and a snapshot with links to those four, then after compaction
you will have one active SSTable5, which is newly written and consumes
space. The snapshot-linked files are still there, still consuming their
space. Only when the snapshot is cleared do you get that disk space back.
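
If you want to see this on disk, something along these lines works (a rough
sketch, assuming a default /var/lib/cassandra/data layout; the keyspace, table
and tag names are placeholders for your own):

  # take a snapshot of one keyspace
  nodetool snapshot -t demo my_keyspace

  # the second column of ls -l is the hard-link count: each live SSTable now
  # shows 2 (one entry in the table dir, one under snapshots/demo), yet the
  # snapshot adds no real disk usage
  ls -l /var/lib/cassandra/data/my_keyspace/my_table-*/
  ls -l /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/demo/

  # drop the extra links again
  nodetool clearsnapshot -t demo my_keyspace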


HTH,
Jan




Re: Nodetool clearsnapshot

2015-01-13 Thread Batranut Bogdan
OK, thanks.
But I also read that repair will take a snapshot. Because I have replication
factor 3 for my keyspace, I run nodetool clearsnapshot to keep disk space
usage to a minimum. Will this impact my repair?


Re: Nodetool clearsnapshot

2015-01-13 Thread Yuki Morishita
The snapshot taken during repair is automatically cleared if the repair succeeds.
Unfortunately, you have to delete it manually if the repair failed or stalled.
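
If a failed or stalled repair does leave snapshots behind, the cleanup is just
(a sketch, assuming a version whose nodetool has listsnapshots; the tag is
whatever name the listing shows):

  nodetool listsnapshots
  nodetool clearsnapshot -t <snapshot-name>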


-- 
Yuki Morishita
 t:yukim (http://twitter.com/yukim)


Re: Nodetool clearsnapshot

2015-01-13 Thread Batranut Bogdan
Got it,
Thank you! 



Issue with nodetool clearsnapshot

2012-03-05 Thread B R
Version 0.8.9

We run a 2-node cluster with RF=2. We ran a scrub and after that ran
clearsnapshot to remove the backup snapshot created by the scrub. It seems that
instead of removing the snapshot, clearsnapshot moved the data files from
the snapshot directory to the parent directory, and the size of the data for
that keyspace has doubled. Many of the files look like duplicates.

in Keyspace1 directory
156987786084 Jan 21 03:18 Standard1-g-7317-Data.db
156987786084 Mar  4 01:33 Standard1-g-8850-Data.db
118211555728 Jan 31 12:50 Standard1-g-7968-Data.db
118211555728 Mar  3 22:58 Standard1-g-8840-Data.db
116902342895 Feb 25 02:04 Standard1-g-8832-Data.db
116902342895 Mar  3 22:10 Standard1-g-8836-Data.db
93788425710 Feb 21 04:20 Standard1-g-8791-Data.db
93788425710 Mar  4 00:29 Standard1-g-8845-Data.db
.

Even though the nodetool ring command shows the correct data size for the
node, du -sh on the keyspace directory gives double the size.

Can you guide us on how to proceed from this situation?

Thanks.


nodetool clearsnapshot -t 1537185517560-rmsharesducc

2018-09-27 Thread Lou DeGenaro
Hello,

I issue the subject command running Cassandra 2.2.12 and get this response:

Requested clearing snapshot(s) for [all keyspaces] with snapshot name
[1537185517560-rmsharesducc]

But the snapshot does not go away.

degenaro@myhost1:~> /users1/degenaro/svn/apache/ducc/workspace-trunk/cassandra-2.2.12/apache-cassandra-2.2.12/bin/nodetool listsnapshots
Snapshot Details:
Snapshot name                 Keyspace name   Column family name   True size   Size on disk
1537184548695-rmloadducc      rmload          0 bytes              13 bytes
1537184548657-rmsharesducc    rmshares        0 bytes              13 bytes
1537184548620-rmnodesducc     rmnodes         0 bytes              13 bytes
1537185517479-rmnodesducc     rmnodes         6.06 KB              20.36 KB
1537185517617-rmloadducc      rmload          4.79 KB              4.82 KB
1537174548695-rmloadducc      rmload          4.79 KB              4.82 KB
1537185517560-rmsharesducc    rmshares        4.9 KB               4.93 KB
1537115517479-rmnodesducc     rmnodes         6.06 KB              20.36 KB

I have tried other snapshots with the same disappointing result. What am I
doing wrong, please?

Thanks.

Lou.


Re: Issue with nodetool clearsnapshot

2012-03-05 Thread aaron morton
> It seems that instead of removing the snapshot, clearsnapshot moved the data 
> files from the snapshot directory to the parent directory and the size of the 
> data for that keyspace has doubled.
That is not possible; the only code there deletes the files in the
snapshot.

Note that in the snapshot are hard links to the files in the data dir. Deleting 
/ clearing the snapshot will not delete the files from the data dir if they are 
still in use. 

>  Many of the files are looking like duplicates.
> 
> in Keyspace1 directory
> 156987786084 Jan 21 03:18 Standard1-g-7317-Data.db
> 156987786084 Mar  4 01:33 Standard1-g-8850-Data.db
Under 0.8.x files are not immediately deleted. Did the data directory contain
zero-size -Compacted files with the same numbers?
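
For example, something like this would show them (a sketch, assuming the
default 0.8-era data directory and the Keyspace1 name from your listing):

  # zero-length marker files left next to SSTables that were compacted away
  find /var/lib/cassandra/data/Keyspace1 -name '*-Compacted' -size 0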
  
Cheers


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com



Re: Issue with nodetool clearsnapshot

2012-03-05 Thread B R
Hi Aaron,

1) Since you mentioned hard links, I would like to add that our data
directory itself is a symlink. Could that be causing an issue?

2) Yes, there are 0-byte files with the same numbers
in Keyspace1 directory
0 Mar  4 01:33 Standard1-g-7317-Compacted
0 Mar  3 22:58 Standard1-g-7968-Compacted
0 Mar  3 23:10 Standard1-g-8778-Compacted
0 Mar  3 23:47 Standard1-g-8782-Compacted
...

I restarted the node and it went about deleting the files, and the disk
space has been released. Can this be done using nodetool, without
restarting?

Thanks.


Re: Issue with nodetool clearsnapshot

2012-03-06 Thread aaron morton
> 1) Since you mentioned hard links, I would like to add that our data directory
> itself is a symlink. Could that be causing an issue?
Seems unlikely.

> I restarted the node and it went about deleting the files, and the disk space
> has been released. Can this be done using nodetool, without restarting?

Under 0.8.x they are deleted once the files are no longer in use and the JVM
GC has freed all references to them. You can provoke this by triggering a GC
in the JVM using JConsole or another JMX client.

If there is not enough free disk space, a GC is forced and the space is
reclaimed.

Under 1.x file handles are reference-counted and the files are deleted quickly.
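
If you want to trigger that collection without a restart, one option is to
invoke the JVM's Memory MBean over JMX, e.g. with jmxterm (a sketch, not
verified against 0.8 specifically; the jar path and JMX port are placeholders
for your setup):

  $ java -jar jmxterm-uber.jar -l localhost:7199
  # inside the jmxterm shell, call the JVM-wide gc() operation
  > run -b java.lang:type=Memory gc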

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com



Re: Issue with nodetool clearsnapshot

2012-03-06 Thread B R
Thanks a lot, Aaron. Our cluster is much more stable now. We'll look at
upgrading to 1.x in the coming weeks.



Nodetool clearsnapshot doesn't support Column Families

2016-05-17 Thread Anubhav Kale
Hello,

I noticed that clearsnapshot doesn't support removing snapshots per column
family, the way nodetool snapshot lets you take them per column family.

http://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsClearSnapShot.html

I couldn't find a JIRA to address this. Is this intentional? If so, I am
curious to understand the rationale.

In the absence of this, I would just rm -rf the folder to suit my
requirements. Are there any bad effects of doing so?

Thanks !


Re: Nodetool clearsnapshot doesn't support Column Families

2016-05-18 Thread Dominik Keil
Hi,

Here's Cassandra's JIRA: https://issues.apache.org/jira/browse/CASSANDRA

As for your question: Yes, you can just rm -r a snapshot folder...
nothing bad will happen, except the deletion of that snapshot, obviously :-)
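
For example (a sketch; the data path, keyspace, table and tag names below are
placeholders for your own):

  # remove only the snapshot tagged 'mytag' for a single table
  rm -r /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/mytag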

Regards




Nodetool clearsnapshot does not delete snapshot for dropped column_family

2020-04-30 Thread Sergio Bilello
Hi guys!
I am running Cassandra 3.11.4. I dropped a column family, but I can still see
the disk space occupied by that column family on disk. I understand that since
I have the auto_snapshot flag = true this behavior is expected.
However, I would like to avoid writing a dummy script that removes the column
family folder on each node.
I tried the nodetool clearsnapshot command but it didn't work, and when I run
nodetool listsnapshots I don't see anything. It is as if that occupied space
were hidden.

Any suggestion?

Thanks,

Sergio




Re: Nodetool clearsnapshot does not delete snapshot for dropped column_family

2020-04-30 Thread Erick Ramirez
Yes, you're right. It doesn't show up in listsnapshots, nor does
clearsnapshot remove the dropped snapshot, because the table is no longer
managed by C* (it got dropped). So you will need to manually remove the
dropped-* directories from the filesystem.
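
A quick way to find them (a sketch; adjust the data path to your
installation):

  # list leftover auto-snapshot directories created when tables were dropped
  find /var/lib/cassandra/data -type d -name 'dropped-*'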

Someone here will either correct me or hopefully provide a user-friendlier
solution. Cheers!


Re: Nodetool clearsnapshot does not delete snapshot for dropped column_family

2020-04-30 Thread Nitan Kainth
I don't think it works like that. clearsnapshot --all would remove all
snapshots. Here is an example:

$ ls -l
/ss/xx/cassandra/data/ww/a-5bf825428b3811eabe0c6b7631a60bb0/snapshots/

total 8

drwxr-xr-x 2 cassandra cassandra 4096 Apr 30 23:17 dropped-1588288650821-a

drwxr-xr-x 2 cassandra cassandra 4096 Apr 30 23:17 manual

$ nodetool clearsnapshot --all

Requested clearing snapshot(s) for [all keyspaces] with [all snapshots]

$ ls -l
/ss/xx/cassandra/data/ww/a-5bf825428b3811eabe0c6b7631a60bb0/snapshots/

ls: cannot access
/ss/xx/cassandra/data/ww/a-5bf825428b3811eabe0c6b7631a60bb0/snapshots/: No
such file or directory

$




Re: Nodetool clearsnapshot does not delete snapshot for dropped column_family

2020-04-30 Thread Sergio
The problem is that the folder is not under snapshots but directly under the
data path.
I tried with the --all switch too.
Thanks,
Sergio



Re: Nodetool clearsnapshot does not delete snapshot for dropped column_family

2020-04-30 Thread Sebastian Estevez
Perhaps you had a DDL collision and ended up with two data directories for
the table?

In that case, running DROP TABLE would only move the active table directory
to snapshots and, as Erick suggested, would leave the data in the duplicate
directory "orphaned".

I haven't tried to reproduce this yet, but given how DDL works in C* I think
it checks out as a possible scenario.
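
One way to check for that (a sketch; keyspace and table names are
placeholders) is to look for two directories for the same table name with
different UUID suffixes:

  # two entries for the same table but different UUIDs suggest an
  # orphaned/duplicate directory left over from a schema collision
  ls -d /var/lib/cassandra/data/my_keyspace/my_table-*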

Keep calm and Cassandra on folks,

Seb

