[ceph-users] ceph status intermittently outputs "0 slow ops"

2021-12-22 Thread 大神祐真
Hi, "ceph status" intermittently shows "0 slow ops". Could you tell me how I should handle this problem, and what "0 slow ops" means? I investigated by referring to the following documents, but had no luck.
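
In recent Ceph releases the slow-ops counter surfaces as a SLOW_OPS health check in the machine-readable status. A minimal sketch of pulling that message out, assuming the `health.checks` layout of `ceph status --format json` (the field names and the inline sample are illustrative assumptions, not output captured from a real cluster; verify against your release):

```python
import json

def slow_ops_summary(status_json):
    """Return the SLOW_OPS health-check message, or None if absent."""
    status = json.loads(status_json)
    checks = status.get("health", {}).get("checks", {})
    check = checks.get("SLOW_OPS")
    if check is None:
        return None
    return check["summary"]["message"]

# Inline sample shaped like `ceph status --format json` output -- an
# assumption for illustration, mirroring the odd "0 slow ops" wording
# reported in this thread.
sample = json.dumps({
    "health": {
        "status": "HEALTH_WARN",
        "checks": {
            "SLOW_OPS": {
                "severity": "HEALTH_WARN",
                "summary": {
                    "message": "0 slow ops, oldest one blocked for 32 sec, osd.3 has slow ops"
                },
            },
        },
    },
})

print(slow_ops_summary(sample))
```

Parsing the JSON form rather than scraping the human-readable output keeps such a check stable across minor formatting changes.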

[ceph-users] Re: min_size ambiguity

2021-12-22 Thread norman.kern
Chad, As the document notes, min_size means "Minimum number of replicas to serve the request", so you can't read when the number of replicas in a PG is below min_size. Norman Best regards On 12/17/21 10:59 PM, Chad William Seys wrote: ill open an issue to h
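
The rule Norman describes reduces to a one-line predicate: a PG serves client I/O only while its acting set still holds at least min_size replicas. A toy sketch (the function and parameter names are illustrative, not a Ceph API):

```python
def pg_serves_io(acting_replicas, min_size):
    """A PG accepts client I/O only while the number of live
    replicas in its acting set is at least min_size."""
    return acting_replicas >= min_size

# With the common size=3, min_size=2 pool settings:
# losing one replica is tolerated, losing two blocks I/O.
print(pg_serves_io(2, 2))  # True
print(pg_serves_io(1, 2))  # False
```

This is why dropping min_size to 1 trades availability during failures against the risk of acknowledging writes held by a single copy.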

[ceph-users] Re: Where do I find information on the release timeline for quincy?

2021-12-22 Thread norman.kern
Joshua, Quincy should release in March 2022. You can find the release cycle and standards at https://docs.ceph.com/en/latest/releases/general/ Norman Best regards On 12/22/21 9:37 PM, Joshua West wrote: Where do I find information on the release timeline for quincy? I learned a lesson

[ceph-users] Re: Local NTP servers on monitor nodes.

2021-12-22 Thread mhnx
Hi Robert! Quote: "Just my 2¢: Do not use systemd-timesyncd." I've been using systemd-timesyncd on Arch Linux since 2015. I built up 7 clusters and had only 1 NTP failure on 1 cluster, and I can't blame systemd-timesyncd for it. The right way is to have NTP servers on the monitor nodes. I was
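
One common way to realize "NTP servers on the monitor nodes" is chrony: each mon syncs upstream, peers with the other mons, and serves time to the rest of the cluster. A sketch of `/etc/chrony.conf` on one mon (hostnames, pool, and subnet are placeholders for your environment, not values from this thread):

```
# /etc/chrony.conf on a monitor node (illustrative sketch)

# Upstream time source
server 0.pool.ntp.org iburst

# Keep the mons agreeing with each other
peer mon2.example.com
peer mon3.example.com

# Let OSD and client nodes in the cluster network sync from this mon
allow 10.0.0.0/24

# Keep serving (degraded) time to the cluster if upstream is unreachable
local stratum 10
```

The `local stratum 10` fallback is what keeps the mons mutually consistent during a WAN outage, which matters more to Ceph than absolute wall-clock accuracy.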

[ceph-users] Re: RBD bug #50787

2021-12-22 Thread Peter Lieven
On 22.12.21 at 16:39, J-P Methot wrote: > So, from what I understand from this, neither the latest Nautilus client nor > the latest Octopus client has the fix? Only the latest Pacific? Are you sure that this issue is reproducible in Nautilus? I tried it with a Nautilus 14.2.22 client and it

[ceph-users] Re: 50% IOPS performance drop after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

2021-12-22 Thread Mark Nelson
On 12/22/21 4:23 AM, Marc wrote: I guess what caused the issue was high latencies on our “big” SSDs (7TB drives), which got really high after the upgrade to Octopus. We split them into 4 OSDs some days ago and since then the high commit latencies on the OSDs and on BlueStore are gone Hmm,

[ceph-users] Where do I find information on the release timeline for quincy?

2021-12-22 Thread Joshua West
Where do I find information on the release timeline for quincy? I learned a lesson some time ago with regard to building from source and accidentally upgrading my cluster to the dev branch. Whoops. Just wondering if there is a published timeline for the next major release, so I can figure out my

[ceph-users] Re: airgap install

2021-12-22 Thread Zoran Bošnjak
Kai, yes, it looks so. Thanks for the suggestion. I am experimenting in an environment with an /etc/hosts file on each server, without DNS. The /etc/hosts file is correct and complete. I can resolve the hostname correctly, but only on the host server(s). I was not aware of the fact that docker

[ceph-users] Re: ceph-volume inventory should consider free PVs

2021-12-22 Thread Konstantin Shalygin
Hi, > On 22 Dec 2021, at 13:10, Robert Sander wrote: > > ceph-volume inventory (and thus the orchestrator) only considers a block > device free when there is literally nothing on it. > > Would it make sense to add physical volumes from LVM here, too? > > I see use cases like a large NVMe

[ceph-users] Re: RBD bug #50787

2021-12-22 Thread Konstantin Shalygin
I mean definitely this! Of course, if your client machines do not serve ceph-mon, ceph-mds, or ceph-osd processes, just upgrade the Ceph packages. k > On 22 Dec 2021, at 12:32, huxia...@horebdata.cn wrote: > > Can i only upgrade clients from Nautilus to Pacific, but still keep Nautilus > version in
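
A client-only upgrade like this leaves the librbd clients and the cluster on different releases. When scripting checks around that skew, the release part of a Ceph version string compares naturally as an integer tuple. An illustrative helper (not part of any Ceph tooling; the version strings are just the releases named in this thread):

```python
def release_tuple(version):
    """Parse a 'major.minor.patch' release string, e.g. '14.2.22',
    into a tuple of ints so versions compare correctly."""
    return tuple(int(part) for part in version.split("."))

nautilus_cluster = release_tuple("14.2.22")  # cluster stays on Nautilus
pacific_client = release_tuple("16.2.7")     # client upgraded to Pacific

# Tuple comparison gets '16.2.7' > '14.2.22' right, where plain
# string comparison of version text can mis-order components.
print(pacific_client > nautilus_cluster)  # True
```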

[ceph-users] Re: Large latency for single thread

2021-12-22 Thread Marc
> > Persistent client-side cache potentially may help in this case if you > are ok with the trade-offs. It's been a while since I've seen any > benchmarks with it so you may need to do some testing yourself. I would be interested in seeing these test results also.

[ceph-users] Re: 50% IOPS performance drop after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

2021-12-22 Thread Marc
> I guess what caused the issue was high latencies on our “big” SSDs (7TB > drives), which got really high after the upgrade to Octopus. We split them > into 4 OSDs some days ago and since then the high commit latencies on the > OSDs and on BlueStore are gone Hmm, but this is sort of a work

[ceph-users] ceph-volume inventory should consider free PVs

2021-12-22 Thread Robert Sander
Hi, ceph-volume inventory (and thus the orchestrator) only considers a block device free when there is literally nothing on it. Would it make sense to add physical volumes from LVM here, too? I see use cases like a large NVMe that should hold two OSDs, a RAID1 of two SSDs for RocksDB

[ceph-users] Re: RBD bug #50787

2021-12-22 Thread huxia...@horebdata.cn
Hi Konstantin, Can I upgrade only the clients from Nautilus to Pacific, but still keep the Nautilus version in the cluster, just to avoid this librbd issue? Cheers, Samuel huxia...@horebdata.cn From: Konstantin Shalygin Date: 2021-12-22 07:47 To: J-P Methot CC: ceph-users Subject: