> Some months ago we deleted about 1.4PB net of CephFS data.  Approximately
> 600TB net of the space we expected to reclaim did not get reclaimed.

I always have to ask: any chance that there are `rados bench` orphans in there?
I've encountered this a few times from runs that didn't properly clean up.
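
If you want to rule that out, bench objects are easy to spot by name.  Something
along these lines should show whether any are lurking (the pool name is just a
placeholder, and on a pool with billions of objects the listing will take a while):

    rados -p cephfs_data ls | grep benchmark_data | head
    # if any show up, the cleanup subcommand should be able to remove them by prefix
    rados -p cephfs_data cleanup --prefix benchmark_data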

> They feel that it would be somewhat difficult and risky to try to find and
> delete these objects via rados.  However, obviously, 600+ *net* TB of
> all-NVMe storage is quite a lot of money to just let go to waste.   That's
> effectively over 1.1 PB, once you figure EC overhead and the need to not
> fill the cluster over about 70%.

Why 70%?
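
(Back-of-envelope, I assume the 1.1 PB figure comes from something like

    raw needed = net * (k+m)/k / fill target
    e.g. with an 8+3 profile: 600 TB * 11/8 / 0.70 ≈ 1.18 PB

where the 8+3 is just a guess at the profile.)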


> So to reclaim the pool I have created a plan to move everything off of the
> pool, and then delete the pool.  Which we are largely needing to do anyway,
> due to splitting to multiple CephFS instances.
> 
> My question is, am I likely to run into problems with this?  For example,
> will I be able to do 'ceph fs rm_data_pool'  once there are no longer any
> objects associated with the CephFS instance on the pool, or will the MDS
> have ghost object records that cause the command to balk?

There is design work toward a future ability to migrate a pool transparently, for
example to move to a new EC profile, but that won't be available anytime soon.
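
In the meantime, the copy-and-delete approach you describe is the usual path.  A
rough sketch of how I'd expect it to go (fs, pool, and directory names are
placeholders, and mon_allow_pool_delete has to be true for the final step):

    ceph fs add_data_pool <fs_name> new_data_pool
    # point a directory (or the root) at the new pool; new files land there
    setfattr -n ceph.dir.layout.pool -v new_data_pool /mnt/cephfs/some/dir
    # existing files keep their old layout, so their data has to be rewritten
    # (e.g. copy to a new file and rename), then:
    ceph fs rm_data_pool <fs_name> old_data_pool
    ceph osd pool rm old_data_pool old_data_pool --yes-i-really-really-mean-it

The copy/rewrite step is the part that actually moves the bytes; the layout
xattr only affects files created after it is set.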

Is the pool in question the first/default CephFS data pool, or one added?
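
(Asking because, as I recall, rm_data_pool will refuse to remove the
original/default data pool, since that pool holds the backtrace objects for
every file in the filesystem.  Something like

    ceph fs ls

will list the data pools; the first one shown should be the original/default.)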