On Thu, Oct 2, 2025 at 9:45 PM Anthony D'Atri <[email protected]> wrote:

> There is design work for a future ability to migrate a pool transparently, 
> for example to effect a new EC profile, but that won't be available anytime 
> soon.

This is, unfortunately, irrelevant in this case. Migrating a pool would
migrate all of the objects and their snapshots, including the unwanted
ones. What Trey has (as far as I understand it) is a set of RADOS-level
snapshots that do not correspond to any CephFS-level snapshot and are
therefore garbage that should not be migrated.

That's why the discussion is about file-level migration rather than
pool-level operations.
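For reference, a file-level migration usually looks something like the
rough sketch below. This is only an outline, assuming a kernel-client
mount at /mnt/cephfs and a destination pool called new_data_pool (both
names are made up for illustration); the point is that copying the files
rewrites their data into whatever pool the directory layout points at,
leaving the old objects (and their RADOS snapshots) behind in the old
pool, which can then be dropped from the filesystem.

$ ceph osd pool create new_data_pool 32
$ ceph fs add_data_pool cephfs new_data_pool
$ mkdir /mnt/cephfs/migrated
$ setfattr -n ceph.dir.layout.pool -v new_data_pool /mnt/cephfs/migrated
$ cp -a /mnt/cephfs/olddir/. /mnt/cephfs/migrated/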

Now to the original question:

> will I be able to do 'ceph fs rm_data_pool'  once there are no longer any
> objects associated with the CephFS instance on the pool, or will the MDS
> have ghost object records that cause the command to balk?

I just tested this on a test cluster: 'ceph fs rm_data_pool' won't balk
and won't demand a force flag, even if the pool you remove is still in
use by files. So beware.

$ ceph osd pool create badfs_evilpool 32 ssd-only
pool 'badfs_evilpool' created
$ ceph fs add_data_pool badfs badfs_evilpool
added data pool 38 to fsmap
$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_wrongpool cephfs_data_rightpool cephfs_data_hdd ]
name: badfs, metadata pool: badfs_metadata, data pools: [badfs_data badfs_evilpool ]
$ cephfs-shell -f badfs
CephFS:~/>>> ls
dir1/   dir2/
CephFS:~/>>> mkdir evil
CephFS:~/>>> setxattr evil ceph.dir.layout.pool badfs_evilpool
ceph.dir.layout.pool is successfully set to badfs_evilpool
CephFS:~/>>> put /usr/bin/ls /evil/ls
$ ceph fs rm_data_pool badfs badfs_evilpool
removed data pool 38 from fsmap
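
So on a real cluster, before running rm_data_pool, it is worth checking
yourself whether the pool still holds any objects, since the command
will not check for you. A couple of quick checks, using the pool name
from the session above:

$ rados -p badfs_evilpool ls | head   # any object names printed means data is still there
$ ceph df | grep badfs_evilpool       # the OBJECTS column should be 0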

-- 
Alexander Patrakov
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
