[ceph-users] rbd deep copy in Luminous

2022-06-07 Thread Pardhiv Karri
Hi, We are currently on Ceph Luminous (12.2.11). I don't see the "rbd deep cp" command in this version. Is it in a different version or release? If so, which one? If it is only in a later release, Mimic or newer, is there a way to get it in Luminous? Thanks, Pardhiv
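For reference, on a release that ships the subcommand (Mimic or later, as the question suggests), the invocation looks roughly like this; pool, image and snapshot names are placeholders:

rbd deep cp rbd/source-image rbd/destination-image         # copies the image including its snapshots
rbd deep cp rbd/source-image@snap1 rbd/destination-image   # deep copy starting from a snapshot

Unlike a plain "rbd cp", a deep copy preserves the snapshot history of the source image.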

[ceph-users] 270.98 GB was requested for block_db_size, but only 270.98 GB can be fulfilled

2022-06-07 Thread Torkil Svensgaard
Hi, We are converting unmanaged OSDs from db/wal on SSD to managed OSDs with db/wal on NVMe. The boxes had 20 HDDs and 4 SSDs and will be changed to 22 HDDs, 2 SSDs and 2 NVMes, with 11 db/wal partitions on each NVMe for the HDDs. The old SSDs will be used for a flash pool. We calculated the …
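A minimal sketch of the kind of managed OSD service spec this conversion involves, assuming a generic host pattern; the db_slots value mirrors the 11 db/wal partitions per NVMe described above, and the block_db_size figure is a placeholder, not the poster's actual number:

cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: hdd_db_on_nvme
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  db_slots: 11                  # carve each NVMe into 11 db/wal slots
  # block_db_size: 290966999040 # alternatively, request an explicit per-OSD DB size in bytes
EOF
ceph orch apply osd -i osd_spec.yaml --dry-run   # preview what cephadm would create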

[ceph-users] Re: ceph orch: list of scheduled tasks

2022-06-07 Thread Adam King
For most of them there isn't, currently. Part of the issue is that the tasks don't ever necessarily end. If you apply a mgr spec, cephadm will periodically check the spec against what it sees (e.g. where mgr daemons are currently located vs. where the spec says they should be) and make corrections …
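There is no task queue to list as such, but the state of that reconciliation can be watched; the commands below only report on whatever specs and daemons your cluster actually has:

ceph orch ls        # each applied spec with running vs. expected daemon counts
ceph orch ps        # the daemons cephadm currently manages and their status
ceph -W cephadm     # follow the cephadm log channel while the module converges on a spec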

[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-07 Thread Christophe BAILLON
Hello, thanks for your reply. No, we did not stop the autoscaler: root@store-par2-node01:/home/user# ceph osd pool autoscale-status POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK .mgr 154 …
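If the autoscaler stays enabled, giving it a hint about the expected size of a pool lets it choose a sensible pg_num up front; the pool name and ratio below are placeholders:

ceph osd pool set <pool> target_size_ratio 0.2   # expected share of raw capacity for this pool
ceph osd pool set <pool> bulk true               # mark a pool expected to be large so it starts with more PGs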

[ceph-users] Re: not so empty bucket

2022-06-07 Thread J. Eric Ivancich
You’ve provided convincing evidence that the bucket index is not correctly reflecting the data objects. So the next step would be to remove the bucket index entries for these 39 objects. It looks like you’ve already mapped which entries go to which bucket index shards (or you could redo your co…
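A hedged sketch of removing one stale entry by hand, assuming the default index pool name and with the bucket marker, shard number and object key as placeholders; list the keys first to confirm what you are about to delete:

rados -p default.rgw.buckets.index listomapkeys .dir.<bucket-marker>.<shard>
rados -p default.rgw.buckets.index rmomapkey .dir.<bucket-marker>.<shard> "<object-key>"

A subsequent "radosgw-admin bucket check --bucket=<name> --fix" can then be used to bring the bucket stats back in line.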

[ceph-users] Re: unknown object

2022-06-07 Thread J. Eric Ivancich
There could be a couple of things going on here. When you copy an object to a new bucket, it creates what’s widely known as a “shallow” copy. The head object gets a true copy, but all tail objects are shared between the two copies. There could also be occasional bugs or somehow an object delete …
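The sharing shows up in the object's manifest; a hedged way to inspect it, with bucket and object names as placeholders:

radosgw-admin object stat --bucket=<bucket> --object=<key>   # dumps the manifest; for a copied object the tail entries reference the source's RADOS objects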

[ceph-users] ceph orch: list of scheduled tasks

2022-06-07 Thread Patrick Vranckx
Hi, When you change the configuration of your cluster with "ceph orch apply ..." or "ceph orch daemon ...", tasks are scheduled: [root@cephc003 ~]# ceph orch apply mgr --placement="cephc001 cephc002 cephc003" Scheduled mgr update... Is there a way to list all the pending tasks? Regards,

[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-07 Thread Eugen Block
Hi, please share the output of 'ceph osd pool autoscale-status'. You have very low (too low) PG numbers per OSD (between 0 and 6); did you stop the autoscaler at an early stage? If you don't want to use the autoscaler you should increase the pg_num, but you could also set the autoscaler to warn mode …
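Both suggestions map to single commands; the pool name is a placeholder and the pg_num value is only an example to be sized for your own OSD count:

ceph osd pool set <pool> pg_autoscale_mode warn   # keep the advice visible but stop automatic changes
ceph osd pool set <pool> pg_num 128               # example target; pick a power of two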

[ceph-users] Re: Convert existing folder on cephfs into subvolume

2022-06-07 Thread Milind Changire
You could set an xattr on the dir of your choice to convert it to a subvolume, e.g. # setfattr -n ceph.dir.subvolume -v 1 my/favorite/dir/is/now/a/subvol1. You can also disable the subvolume feature by setting the xattr value to 0 (zero). But there are constraints on a subvolume dir, namely: * you c…
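For completeness, the matching check and undo, with the mount point and directory as placeholders:

setfattr -n ceph.dir.subvolume -v 1 /mnt/cephfs/path/to/dir   # mark the directory as a subvolume
getfattr -n ceph.dir.subvolume /mnt/cephfs/path/to/dir        # verify the flag is set
setfattr -n ceph.dir.subvolume -v 0 /mnt/cephfs/path/to/dir   # revert if needed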

[ceph-users] Convert existing folder on cephfs into subvolume

2022-06-07 Thread Stolte, Felix
Hey guys, we have been using the ceph filesystem since Luminous and exporting subdirectories via samba as well as nfs. We upgraded to Pacific and want to use the subvolume feature. Is it possible to convert a subdirectory into a subvolume without moving the data? Best regards Felix