[ceph-users] Is autoscaler doing the right thing?

2023-02-08 Thread Kyriazis, George
Hello ceph community, I have some questions about the pg autoscaler. I have a cluster with several pools. One of them is a cephfs pool, which is behaving in an expected / sane way, and another is an RBD pool with an EC profile of k=2, m=2. The cluster has about 60 drives across about 10
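A quick way to see what the autoscaler intends to do, assuming a recent release with the pg_autoscaler module enabled (column names vary slightly by version, and <pool> is a placeholder):

  ceph osd pool autoscale-status               # SIZE, TARGET SIZE, RATE, PG_NUM, NEW PG_NUM per pool
  ceph osd pool get <pool> pg_autoscale_mode   # on | off | warn for a single pool

The RATE column reflects replication or EC overhead (k=2, m=2 gives a rate of 2, replicated size 3 gives 3), which often explains why an EC pool's suggested pg_num differs from a replicated cephfs pool of similar usage.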

[ceph-users] Re: Adding osds to each nodes

2023-02-08 Thread Szabo, Istvan (Agoda)
Ok, it seems better to add the disks host by host, waiting for the rebalance between each of them, thanks. Istvan Szabo Staff Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com
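A rough sketch of what the "wait for rebalance" step between hosts can look like, using standard Ceph status commands:

  # after adding the new disks on one host, watch recovery until the cluster is clean again
  ceph -s             # wait until PGs are active+clean and no backfill is pending
  ceph osd df tree    # confirm the new OSDs are filling as expected
  # then move on to the next host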

[ceph-users] Re: Exit yolo mode by increasing size/min_size does not (really) work

2023-02-08 Thread Eugen Block
Hi, I don't have an explanation yet, but some more information about your cluster would be useful, like 'ceph osd tree', 'ceph osd df', 'ceph status' etc. Thanks, Eugen Quoting Stefan Pinter: Hi! It would be very kind of you to help us with that! We have pools in our ceph cluster
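For reference, leaving "yolo mode" is a per-pool setting; a minimal sketch, with <pool> as a placeholder (raising size triggers backfill, so do it while the cluster is otherwise healthy):

  ceph osd pool set <pool> size 3
  ceph osd pool set <pool> min_size 2
  ceph -s    # backfill has to finish before all PGs carry the new replica count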

[ceph-users] Re: Adding osds to each nodes

2023-02-08 Thread Eugen Block
Hi, this is quite a common question and multiple threads exist on this topic, e.g. [1]. Regards, Eugen [1] https://www.mail-archive.com/ceph-users@lists.ceph.com/msg36475.html Quoting "Szabo, Istvan (Agoda)": Hi, What is the safest way to add disk(s) to each of the nodes in the

[ceph-users] Re: OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-02-08 Thread Eugen Block
Hi, Someone told me that we could just destroy the FileStore OSDs and recreate them as BlueStore, even though the cluster is partially upgraded. So I guess I’ll just do that. (Unless someone here tells me that that’s a terrible idea :)) I would agree, rebuilding seems a reasonable
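A hedged sketch of the per-OSD rebuild, assuming a non-cephadm deployment using ceph-volume; OSD id and device are placeholders, and the exact flags should be checked against the docs for the release in use:

  ceph osd out <id>                                # mark the FileStore OSD out and let data migrate off it
  # wait for backfill to finish, then stop the daemon
  systemctl stop ceph-osd@<id>
  ceph osd destroy <id> --yes-i-really-mean-it     # keeps the OSD id and CRUSH position
  ceph-volume lvm zap /dev/<device> --destroy
  ceph-volume lvm create --bluestore --data /dev/<device> --osd-id <id>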

[ceph-users] Re: Nautilus to Octopus when RGW already on Octopus

2023-02-08 Thread Eugen Block
Hi, I would also try to avoid a downgrade and agree with Richard. From a Ceph perspective the RGWs are just clients and not core services, so I wouldn't worry too much if they are already on a newer version. Although I didn't test this specific scenario either, there's one cluster we help
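One way to confirm which daemons are already on which release before continuing the upgrade:

  ceph versions    # per-daemon-type version summary (mon, mgr, osd, mds, rgw)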

[ceph-users] OSD logs missing from Centralised Logging

2023-02-08 Thread Peter van Heusden
Hi there, I am running Ceph version 17.2.5 and have deployed centralised logging as per this guide: https://ceph.io/en/news/blog/2022/centralized_logging/ The logs from the OSDs are not, however, showing up in the Grafana dashboard, as shown in the attached screenshot. The Promtail
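One thing worth checking in this situation (a sketch, assuming the cephadm-based setup from that blog post, where Promtail ships the file logs under /var/log/ceph) is whether the daemons actually log to files and the logging daemons are up:

  ceph config get osd log_to_file            # should be true for file-based log shipping
  ceph config set global log_to_file true    # enable file logging if it is off
  ceph orch ps | grep -E 'promtail|loki'     # confirm the promtail/loki daemons are running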

[ceph-users] Re: RGW archive zone lifecycle

2023-02-08 Thread Matt Benjamin
Hi Ondřej, Yes, we added an extension to allow writing lifecycle policies that only take effect in archive zone(s). It's currently present on ceph/main, and will be in Reef. Matt On Wed, Feb 8, 2023 at 2:10 AM Ondřej Kukla wrote: > Hi, > > I have two Ceph clusters in a multi-zone setup.
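As an illustration only (the exact element name and availability should be verified against the RGW archive-zone documentation for your release), the extension is expressed as a flag in the lifecycle rule filter and can be applied with any S3 lifecycle tool such as s3cmd; bucket name and days are placeholders:

  cat > archive-lc.xml <<'EOF'
  <LifecycleConfiguration>
    <Rule>
      <ID>expire-archived-noncurrent</ID>
      <Filter>
        <ArchiveZone />
      </Filter>
      <Status>Enabled</Status>
      <NoncurrentVersionExpiration>
        <NoncurrentDays>30</NoncurrentDays>
      </NoncurrentVersionExpiration>
    </Rule>
  </LifecycleConfiguration>
  EOF
  s3cmd setlifecycle archive-lc.xml s3://<bucket>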

[ceph-users] Adding osds to each nodes

2023-02-08 Thread Szabo, Istvan (Agoda)
Hi, What is the safest way to add disk(s) to each of the nodes in the cluster? Should it be done one by one, or can I add all of them at once and let it rebalance? My concern is that if I add them all at once, due to the host-based EC rule it will block all the hosts. On the other side, if I add them one by one, one node will
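If all disks do end up being added at once, the impact can at least be throttled; a minimal sketch, assuming a release with the centralized config database (older releases would use injectargs instead of 'ceph config set'):

  ceph osd set norebalance                  # hold data movement while the OSDs are being created
  ceph osd set nobackfill
  # ... add the new disks on all hosts ...
  ceph config set osd osd_max_backfills 1   # throttle recovery before letting it loose
  ceph osd unset nobackfill
  ceph osd unset norebalance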

[ceph-users] Re: Replacing OSD with containerized deployment

2023-02-08 Thread mailing-lists
Hey, no problem and thank you! This is the output of lsblk:

  sda 8:0    0  14.6T  0 disk
  └─ceph--937823b8--204b--4190--9bd1--f867e64621db-osd--block--a4bbaa5d--eb2d--41f3--8f4e--f8c5a2747012 253:24   0  14.6T  0 lvm
  sdb 8:16   0  14.6T  0 disk
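For a cephadm/containerized deployment, the replacement flow is usually driven through the orchestrator; a sketch with OSD id, host and device as placeholders:

  ceph orch osd rm <id> --replace                      # drains the OSD and marks it destroyed, keeping the id
  ceph orch osd rm status                              # watch the draining progress
  ceph orch device zap <host> /dev/<device> --force    # wipe the replacement disk so it is usable
  # with a matching 'osd' service spec in place, cephadm redeploys onto the freed id automatically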