[ceph-users] reduce mds_beacon_interval and mds_beacon_grace

2023-11-13 Thread Dmitry Melekhov
Hello! I guess the defaults of 4 and 15 seconds are too long for my configuration. If I want to reduce mds_beacon_grace to 5 seconds, is 1 the right value for mds_beacon_interval? Thank you!
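
[A minimal sketch of how those two values could be applied, assuming settings are managed via ceph config; the usual rule of thumb is to keep the grace period several beacon intervals long, so a single dropped beacon does not mark the MDS laggy:

  # beacon every second; mark the MDS laggy only after 5 seconds
  ceph config set mds mds_beacon_interval 1
  ceph config set global mds_beacon_grace 5

With a 1-second interval and a 5-second grace, up to four consecutive beacons can be missed before failover, which leaves some margin.]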

[ceph-users] Re: CephFS mirror very slow (maybe for small files?)

2023-11-13 Thread Jos Collin
Hi Stuart, I would highly recommend having this fix [1], so that the mirroring works as expected and uses the previous snapshot for syncing. Having multiple mirror daemons also improves the speed. [1] https://github.com/ceph/ceph/pull/54405 - Jos Collin On 13/11/23 21:31, Stuart Cornell
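
[For reference, a sketch of how extra mirror daemons can be deployed on a cephadm-managed cluster; the placement count of 3 is illustrative:

  # run three cephfs-mirror daemons; directories are distributed
  # across them, which parallelizes snapshot syncing
  ceph orch apply cephfs-mirror --placement=3]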

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Nizamudeen A
Dashboard changes are minimal and approved. And since the dashboard change is related to the monitoring stack (Prometheus, etc.), which is not covered by the dashboard test suites, I don't think running them is necessary. But maybe the cephadm suite has some monitoring-stack-related tests

[ceph-users] Re: CephFS mirror very slow (maybe for small files?)

2023-11-13 Thread Peter Grandi
> the speed of data transfer is varying a lot over time (200KB/s > – 120MB/s). [...] The FS in question, has a lot of small files > in it and I suspect this is the cause of the variability – ie, > the transfer of many small files will be more impacted by > greater site-site latency. 200KB/s on
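[A rough back-of-envelope illustrating the point, assuming strictly serial per-file handling and an assumed 50 ms site-to-site round trip: a 10 KB file that costs one round trip to negotiate and transfer tops out near 10 KB / 0.05 s = 200 KB/s, whereas a single large file amortizes that latency over many megabytes and can approach link bandwidth. That alone would explain a 200 KB/s - 120 MB/s spread.]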

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Yuri Weinstein
Ack Travis. Since it touches the dashboard, Nizam - please reply/approve. I assume that the rados/dashboard tests will be sufficient, but I'm expecting your recommendations. This addition will likely push back the final release. On Mon, Nov 13, 2023 at 11:30 AM Travis Nielsen wrote: > > I'd

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Laura Flores
Thanks Travis! @Yuri Weinstein In this case, I think we should rerun the full orch suite on the new build, + a subset of mgr tests from the rados suite. How does that sound? - Laura On Mon, Nov 13, 2023 at 1:30 PM Travis Nielsen wrote: > I'd like to see these changes for much improved
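
[For reference, a sketch of how such a rerun is typically scheduled with teuthology-suite; the branch name is hypothetical and the subset fraction is only an example:

  # full orch suite against the new build
  teuthology-suite -m smithi -c reef-release -s orch -p 100
  # a subset of the rados suite to pick up the mgr tests
  teuthology-suite -m smithi -c reef-release -s rados --subset 1/8 -p 100]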

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Travis Nielsen
I'd like to see these changes for much improved dashboard integration with Rook. The changes are to the rook mgr orchestrator module, plus supporting test changes, so this should be very low risk to the ceph release. I don't know the details of the teuthology suites, but I would think suites

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Yuri Weinstein
Redouane What would be a sufficient level of testing (teuthology suite(s)), assuming this PR is approved to be added? On Mon, Nov 13, 2023 at 9:13 AM Redouane Kachach wrote: > > Hi Yuri, > > I've just backported to reef several fixes that I introduced in the last > months for the rook

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Yuri Weinstein
Josh, Travis, Neha - I can't accept this change without your approval. Please reply. On Mon, Nov 13, 2023 at 9:13 AM Redouane Kachach wrote: > > Hi Yuri, > > I've just backported to reef several fixes that I introduced in the last > months for the rook orchestrator. Most of them are fixes for

[ceph-users] Join us for the User + Dev Monthly Meetup - November 16!

2023-11-13 Thread Laura Flores
Hi Ceph users and developers, You are invited to join us at the User + Dev meeting this week Thursday, November 16th at 10:00 AM EST! See below for more meeting details. The focus topic, "Operational Reliability and Flexibility in Ceph Upgrades", will be presented by Christian Theune. His

[ceph-users] shrink db size

2023-11-13 Thread Curt
Hello, As far as I can tell there is no way to shrink a db/wal after creation. I recently added a new server to my cluster with SSDs for the wal/db and just used the ceph dashboard for deployment. I did not specify a db size, which was my mistake; it seems by default it uses "block.db has no
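
[For anyone hitting the same thing: the db size cannot be changed in place, but new OSDs can be created with an explicit size. A sketch of a cephadm OSD service spec that pins it; the service_id, the 60G value, and the device filters are illustrative:

  service_type: osd
  service_id: hdd-with-ssd-db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    block_db_size: 60G

Applied with something like ceph orch apply -i osd-spec.yaml; it only affects OSDs created afterwards, so existing OSDs would need to be redeployed to pick it up.]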

[ceph-users] Re: Debian 12 support

2023-11-13 Thread Luke Hall
On 13/11/2023 16:28, Daniel Baumann wrote:
> On 11/13/23 17:14, Luke Hall wrote:
>> How is it that Proxmox were able to release Debian12 packages for Quincy quite some time ago?
> because you can, as always, just (re-)build the package yourself.
I guess I was just trying to point out that there

[ceph-users] Re: Debian 12 support

2023-11-13 Thread Daniel Baumann
On 11/13/23 17:14, Luke Hall wrote:
> How is it that Proxmox were able to release Debian12 packages for Quincy quite some time ago?
because you can, as always, just (re-)build the package yourself.
> My understanding is that they change almost nothing in their packages and just roll them to
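
[A sketch of the usual Debian rebuild flow, assuming deb-src entries for the Ceph repository are configured:

  apt-get source ceph
  sudo apt-get build-dep ceph
  cd ceph-*/
  # unsigned, binary-only rebuild
  dpkg-buildpackage -us -uc -b]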

[ceph-users] No SSL Dashboard working after installing mgr crt|key with RSA/4096 secp384r1

2023-11-13 Thread Ackermann, Christoph
Hello all, today I got a new certificate for our internal domain based on RSA/4096 secp384r1. After inserting the CRT and key I got both "...updated" messages. After checking the dashboard I got an empty page and this error: health: HEALTH_ERR Module 'dashboard' has failed: key type
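
[For context, the dashboard certificate and key are normally installed like this (file names illustrative); if the failure complains about the key type, one possibility worth testing is that the dashboard's SSL stack does not accept an EC (secp384r1) key, in which case re-issuing the certificate with a plain RSA key may help:

  ceph dashboard set-ssl-certificate -i dashboard.crt
  ceph dashboard set-ssl-certificate-key -i dashboard.key
  # restart the module so it picks up the new pair
  ceph mgr module disable dashboard
  ceph mgr module enable dashboard]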

[ceph-users] Re: Debian 12 support

2023-11-13 Thread Luke Hall
How is it that Proxmox were able to release Debian12 packages for Quincy quite some time ago? https://download.proxmox.com/debian/ceph-quincy/dists/ My understanding is that they change almost nothing in their packages and just roll them to fit with their naming schema etc. On 01/11/2023

[ceph-users] Re: RGW: user modify default_storage_class does not work

2023-11-13 Thread Casey Bodley
my understanding is that default placement is stored at the bucket level, so changes to the user's default placement only take effect for newly-created buckets On Sun, Nov 12, 2023 at 9:48 PM Huy Nguyen wrote: > > Hi community, > I'm using Ceph version 16.2.13. I tried to set
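
[A quick way to confirm which placement an existing bucket was created with (bucket name illustrative); the placement_rule field is fixed at bucket creation time:

  radosgw-admin bucket stats --bucket=mybucket | grep placement_rule]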

[ceph-users] Re: Debian 12 support

2023-11-13 Thread Matthew Vernon
Hi, On 13/11/2023 10:42, Chris Palmer wrote: And another big +1 for debian12 reef from us. We're unable to upgrade to either debian12 or reef. I've been keeping an eye on the debian12 bug, and it looks as though it might be fixed if you start from the latest repo release. My expectation is

[ceph-users] CephFS mirror very slow (maybe for small files?)

2023-11-13 Thread Stuart Cornell
Hi all. I have successfully configured an operational mirror between 2 sites for CephFS. The mirroring is running, but the speed of data transfer varies a lot over time (200KB/s – 120MB/s). The network infrastructure between the two Ceph clusters is reliable and should not be the cause of
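
[As a starting point for narrowing this down, the mirror daemon reports per-peer sync state; a sketch, noting that on some releases the command also takes the filesystem name:

  ceph fs snapshot mirror daemon status]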

[ceph-users] Re: CEPH Cluster mon is out of quorum

2023-11-13 Thread Eugen Block
Is this the same cluster as the one your reported down OSDs for? Can you share the logs from before the "probing" status? You may have to increase the log level to something like debug_mon = 20. But be cautious and monitor the used disk space, it can increase quite a lot. Did you have any
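
[A sketch of how the mon log level can be raised on the affected node; mon.ceph6 is taken from the thread, and the admin socket path may differ. The admin socket works even while the mon is out of quorum:

  ceph daemon mon.ceph6 config set debug_mon 20
  # revert to the default once the logs are captured
  ceph daemon mon.ceph6 config set debug_mon 1/5]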

[ceph-users] Re: OSD disk is active in node but ceph show osd down and out

2023-11-13 Thread Eugen Block
Hi, can you share the following output:

ceph -s
ceph health detail
ceph versions
ceph osd df tree
ceph osd dump

I see this line in the logs: check_osdmap_features require_osd_release unknown -> octopus, which makes me wonder if you really run a Nautilus cluster. Are your OSDs saturated?

[ceph-users] Re: Ceph Allocation - used space is unreasonably higher than stored space

2023-11-13 Thread Igor Fedotov
Hi Motahare,

On 13/11/2023 14:44, Motahare S wrote:
> Hello everyone, Recently we have noticed that the stored and used space in the "ceph df" results do not match; the amount of stored data * 1.5 (EC factor) is still about 5TB away from the used amount: POOL ID PGS
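
[To illustrate the allocation effect being discussed here, under the assumption of HDD OSDs built before Pacific (64 KB bluestore_min_alloc_size_hdd) and the EC 2+1 profile implied by the 1.5 factor: a 16 KB object is split into two 8 KB data chunks plus one 8 KB parity chunk, but each chunk allocates a full 64 KB extent, so USED grows by 3 x 64 KB = 192 KB while STORED grows by only 16 KB. Across many millions of small objects that gap can easily reach terabytes.]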

[ceph-users] Ceph Allocation - used space is unreasonably higher than stored space

2023-11-13 Thread Motahare S
Hello everyone, Recently we have noticed that the stored and used space in the "ceph df" results do not match; the amount of stored data * 1.5 (EC factor) is still about 5TB away from the used amount: POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL

[ceph-users] Re: Debian 12 support

2023-11-13 Thread Chris Palmer
And another big +1 for debian12 reef from us. We're unable to upgrade to either debian12 or reef. I've been keeping an eye on the debian12 bug, and it looks as though it might be fixed if you start from the latest repo release. Thanks, Chris On 13/11/2023 07:43, Berger Wolfgang wrote: +1 for

[ceph-users] Re: CEPH Cluster performance review

2023-11-13 Thread Alexander E. Patrakov
Hello Mosharaf, There is an automated service available that will criticize your cluster: https://analyzer.clyso.com/#/analyzer On Sun, Nov 12, 2023 at 12:03 PM Mosharaf Hossain < mosharaf.hoss...@bol-online.com> wrote: > Hello Community > > Currently, I operate a CEPH Cluster utilizing Ceph

[ceph-users] CEPH Cluster mon is out of quorum

2023-11-13 Thread Mosharaf Hossain
Dear Concern, I am observing that a mon is out of quorum. The current running version of Ceph is Octopus. Total nodes in the cluster: 13. Mon: 3/3. Network: 20G (10G x 2) bonded link. Each node capacity: 512GB RAM + 72-core CPU. root@ceph6:/var/run/ceph# systemctl status ceph-mon@ceph6.service ●
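
[The usual first checks for a mon that has fallen out of quorum, run from any node that still has quorum:

  # which mons are in quorum, and which ranks are missing
  ceph quorum_status --format json-pretty
  ceph mon stat]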