[ceph-users] Re: ceph on two public networks - not working

2021-12-17 Thread Anthony D'Atri
The terminology here can be subtle. The `public_network` value AIUI in part is an ACL of sorts. Comma-separated values are documented and permissible. The larger CIDR block approach also works. The addresses that mons bind / listen to are a different matter. > On 16.12.21 21:57, Andrei
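For illustration, a minimal ceph.conf sketch of the two things being distinguished here (the networks mirror the thread's example; the mon section and its address are hypothetical):

    [global]
    # comma-separated list of public networks (documented form)
    public_network = 192.168.168.0/24, 192.168.169.0/24

    [mon.a]
    # the address a mon actually binds / listens to is set separately
    public_addr = 192.168.168.10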

[ceph-users] Re: Luminous: export and migrate rocksdb to dedicated lvm/unit

2021-12-17 Thread Igor Fedotov
Hey Flavio, I think there are no options other than to either upgrade the cluster or backport the relevant bluefs migration code to Luminous and make a custom build. Thanks, Igor On 12/17/2021 4:43 PM, Flavio Piccioni wrote: Hi all, in a Luminous+Bluestore cluster, I would like to migrate
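For reference, the migration helper Igor refers to is available in Nautilus and later; a rough sketch of how it is used there (OSD id and LV path are hypothetical, and none of this works on Luminous):

    systemctl stop ceph-osd@0
    # attach a new dedicated DB device to the OSD
    ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/nvme_vg/osd0_db
    # optionally move the existing RocksDB data off the slow device
    ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 --devs-source /var/lib/ceph/osd/ceph-0/block --dev-target /var/lib/ceph/osd/ceph-0/block.db
    systemctl start ceph-osd@0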

[ceph-users] min_size ambiguity

2021-12-17 Thread Chad William Seys
Hi all, The documentation for "min_size" says "Sets the minimum number of replicas required for I/O". https://docs.ceph.com/en/latest/rados/operations/pools/ Can anyone confirm that a PG below "min_size" but still online can still be read? If someone says "the PG can be read" I will open
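For context, min_size is a per-pool setting that can be inspected and changed; a quick sketch (the pool name "rbd" is hypothetical). AIUI a PG whose acting set falls below min_size goes inactive (undersized+peered) and serves neither reads nor writes until recovery brings it back up:

    ceph osd pool get rbd min_size
    ceph osd pool set rbd min_size 1   # temporarily allow I/O with a single replica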

[ceph-users] Luminous: export and migrate rocksdb to dedicated lvm/unit

2021-12-17 Thread Flavio Piccioni
Hi all, in a Luminous+Bluestore cluster, I would like to migrate rocksdb (including the WAL) to NVMe (LVM). (Output comes from a test environment with minimum-sized HDDs, used to test the procedure.) ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0 inferring bluefs devices from bluestore path {

[ceph-users] Re: ceph on two public networks - not working

2021-12-17 Thread Robert Sander
On 16.12.21 21:57, Andrei Mikhailovsky wrote: public_network = 192.168.168.0/24,192.168.169.0/24 AFAIK there is only one public_network possible. In your case you could try with 192.168.168.0/23, as both networks are direct neighbors bitwise. Regards -- Robert Sander Heinlein Consulting
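A sketch of that suggestion; the single /23 spans 192.168.168.0 through 192.168.169.255, so it covers both of the original /24s:

    [global]
    public_network = 192.168.168.0/23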

[ceph-users] Re: Cephalocon 2022 deadline extended?

2021-12-17 Thread Dan van der Ster
Yes the Cephalocon CfP has been extended until Sunday the 19th! https://linuxfoundation.smapply.io/prog/cephalocon_2022/ On Fri, Dec 10, 2021 at 8:28 PM Bobby wrote: > > one typing mistake, I meant 19 December 2021 > > On Fri, Dec 10, 2021 at 8:21 PM Bobby wrote: > > > > > Hi all, > > > >

[ceph-users] Re: cephfs quota used

2021-12-17 Thread Jesper Lykkegaard Karlsen
Thanks Konstantin, Actually, I went a bit further and made the script more universal in usage: ceph_du_dir: # usage: ceph_du_dir $DIR1 ($DIR2 .) for i in $@; do if [[ -d $i && ! -L $i ]]; then echo "$(numfmt --to=iec --suffix=B --padding=7 $(getfattr --only-values -n ceph.dir.rbytes $i
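Reconstructed from the truncated snippet above, the helper presumably looks roughly like this (a sketch, not the original posting):

    ceph_du_dir() {
        # usage: ceph_du_dir DIR [DIR ...]
        # print the recursive size of each CephFS directory via the
        # ceph.dir.rbytes extended attribute
        for i in "$@"; do
            if [[ -d $i && ! -L $i ]]; then
                echo "$(numfmt --to=iec --suffix=B --padding=7 \
                    "$(getfattr --only-values -n ceph.dir.rbytes "$i" 2>/dev/null)") $i"
            fi
        done
    }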

[ceph-users] Re: bunch of " received unsolicited reservation grant from osd" messages in log

2021-12-17 Thread Kenneth Waegeman
Hi all, I'm also seeing these messages spamming the logs after upgrading from Octopus to Pacific 16.2.7. Any clue yet what this means? Thanks!! Kenneth On 29/10/2021 22:21, Alexander Y. Fomichev wrote: Hello. After upgrading to 'pacific' I found log spammed by messages like this: ...

[ceph-users] Re: airgap install

2021-12-17 Thread Sebastian Wagner
Hi Zoran, I'd like to have this properly documented in the Ceph documentation as well. I just created https://github.com/ceph/ceph/pull/44346 to add the monitoring images to that section. Feel free to review this one. Sebastian Am 17.12.21 um 11:06 schrieb Zoran Bošnjak: > Kai, thank you for

[ceph-users] Re: airgap install

2021-12-17 Thread Zoran Bošnjak
Kai, thank you for your answer. It looks like the "ceph config set mgr..." commands are the key part for specifying my local registry. However, I haven't got that far with the installation. I have tried various options, but I have problems already with the bootstrap step. I have documented the
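For illustration, the pieces usually involved in an air-gapped cephadm install, assuming images have been mirrored to a local registry (registry name, monitor IP and image tags are hypothetical):

    # bootstrap against a locally mirrored ceph image
    cephadm --image registry.local:5000/ceph/ceph:v16.2.7 bootstrap --mon-ip 192.168.1.10

    # point cephadm at locally mirrored monitoring images
    ceph config set mgr mgr/cephadm/container_image_prometheus    registry.local:5000/prometheus/prometheus:v2.18.1
    ceph config set mgr mgr/cephadm/container_image_grafana       registry.local:5000/ceph/ceph-grafana:6.7.4
    ceph config set mgr mgr/cephadm/container_image_alertmanager  registry.local:5000/prometheus/alertmanager:v0.20.0
    ceph config set mgr mgr/cephadm/container_image_node_exporter registry.local:5000/prometheus/node-exporter:v0.18.1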

[ceph-users] Re: cephfs quota used

2021-12-17 Thread Konstantin Shalygin
Or you can mount with the 'dirstat' option and use 'cat .' to determine CephFS stats: alias fsdf="cat . | grep rbytes | awk '{print \$2}' | numfmt --to=iec --suffix=B" [root@host catalog]# fsdf 245GB [root@host catalog]# Cheers, k > On 17 Dec 2021, at 00:25, Jesper Lykkegaard Karlsen wrote: >
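For anyone unfamiliar with the trick: dirstat is a kernel-client mount option that makes 'cat .' return a directory's recursive stats; a sketch (mon address and secret file are hypothetical):

    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,dirstat
    cd /mnt/cephfs/catalog && cat .   # prints entries, files, subdirs, rbytes, rctime, ...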

[ceph-users] crush rule for 4 copy over 3 failure domains?

2021-12-17 Thread Simon Oosthoek
Dear ceph users, Since recently we have 3 locations with Ceph OSD nodes. For 3-copy pools it is trivial to create a crush rule that uses all 3 datacenters for each object, but 4 copies is harder. Our current "replicated" rule is this: rule replicated_rule { id 0 type replicated
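Whatever rule shape ends up being chosen for 4 copies over 3 datacenters, it is worth verifying the resulting mappings offline with crushtool before applying it; a sketch (rule id and file names are hypothetical):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt      # decompile, edit the rule in crushmap.txt
    crushtool -c crushmap.txt -o crushmap-new.bin  # recompile
    crushtool -i crushmap-new.bin --test --rule 1 --num-rep 4 --show-mappings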