Hi,
On 14/06/18 06:13, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 9:35 PM Alessandro De Salvo
<alessandro.desa...@roma1.infn.it> wrote:
Hi,
On 13/06/18 14:40, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
<alessandro.desa...@roma1.infn.it> wrote:
Hi,
I'm trying to migrate a cephfs data pool to a different one in order to
reconfigure it with new pool parameters. I've found some hints but no
specific documentation on migrating pools.
I'm currently trying with rados export + import, but I get errors like
these:
Write #-9223372036854775808:00000000:::100001e1007.00000000:head#
omap_set_header failed: (95) Operation not supported
The command I'm using is the following:
rados export -p cephfs_data | rados import -p cephfs_data_new -
So, I have a few questions:
1) would it work to swap the cephfs data pools by renaming them while
the fs cluster is down?
2) how can I copy the old data pool into a new one without errors like
the ones above?
This won't work as you expected. Some of the cephfs metadata records the ID of the data pool.
This is what I was suspecting too, hence the question, so thanks for confirming it.
Basically, once a cephfs filesystem is created, its pools and structure
are immutable. This is not good, though.
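(For what it's worth, if I understand it correctly the data pool binding is also visible from the client side with something like the following, where the path is just a placeholder:

ceph fs ls
getfattr -n ceph.file.layout.pool /mnt/cephfs/somefile

the first lists the data pools attached to each filesystem, the second shows which pool a given file's layout points to.)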
3) a plain copy from one fs to another would also work, but I didn't
find a way to tell the ceph fuse clients how to mount different
filesystems in the same cluster; any documentation on it?
ceph-fuse /mnt/ceph --client_mds_namespace=cephfs_name
In the meantime I also found the same option for fuse and tried it. It
works with fuse, but it seems it's not possible to export multiple
filesystems via nfs-ganesha.
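Just for reference, what I tried was roughly something along these lines, with the two filesystems mounted side by side (mount points and fs names are placeholders, not my actual setup):

ceph-fuse /mnt/cephfs_old --client_mds_namespace=cephfs_old
ceph-fuse /mnt/cephfs_new --client_mds_namespace=cephfs_new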
Put the client_mds_namespace option in the client section of ceph.conf
(on the machine that runs ganesha).
Yes, that would work, but then I need a (set of) exporter(s) for every
cephfs filesystem. That sounds reasonable, though, as it's the same
situation as for the mds services.
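For anyone else following, I guess that boils down to something like this in ceph.conf on each ganesha host (the filesystem name here is just an example):

[client]
    client_mds_namespace = cephfs_new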
Thanks for the hint,
Alessandro
Has anyone tried it?
4) even if I found a way to mount different filesystems belonging to
the same cluster via fuse, is this feature stable enough or is it
still super-experimental?
very stable
Very good!
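(One more note in case it helps others: unless I'm mistaken, multiple filesystems per cluster still need to be enabled explicitly before the second one can be created, e.g.

ceph fs flag set enable_multiple true --yes-i-really-mean-it

so that step comes on top of the mount options above.)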
Thanks,
Alessandro
Thanks,
Alessandro
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com