[ceph-users] Re: Ceph Pacific mon is not starting after host reboot

2021-08-09 Thread Adam King
Wanted to respond to the original thread I saw archived on this topic, but I wasn't subscribed to the mailing list yet, so I don't have the thread in my inbox to reply to. Hopefully those involved in that thread still see this. This issue looks the same as https://tracker.ceph.com/issues/51027 which

[ceph-users] very low RBD and Cephfs performance

2021-08-09 Thread Prokopis Kitros
Hello, I have a 4-node Ceph cluster on Azure. Each node is an E32s_v4 VM, which has 32 vCPUs and 256 GB memory. The network between nodes is 15 Gbit/sec, measured with iperf. The OS is CentOS 8.2. The Ceph version is Pacific and was deployed with ceph-ansible. Three nodes have the OSDs and the fourth
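
A first step in narrowing this down is usually to benchmark each layer separately, so the RADOS, RBD and CephFS results can be compared. A minimal sketch, assuming a scratch pool named testbench and an existing RBD image vm/test (both placeholder names):

  # raw RADOS write throughput: 30 seconds of 4 MB objects, 16 concurrent ops
  rados bench -p testbench 30 write -b 4M -t 16 --no-cleanup
  rados bench -p testbench 30 seq -t 16     # sequential reads of the objects just written
  rados -p testbench cleanup
  # RBD layer: 1 GiB of 4 MB writes against the image
  rbd bench --io-type write --io-size 4M --io-total 1G vm/test

If rados bench is already slow, the problem sits below RBD/CephFS (network, OSDs, or the Azure disks) rather than in the client layers.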

[ceph-users] Re: Ceph Pacific mon is not starting after host reboot

2021-08-09 Thread David Orman
Hi, We are seeing very similar behavior on 16.2.5, and also have noticed that an undeploy/deploy cycle fixes things. Before we go rummaging through the source code trying to determine the root cause, has anybody else figured this out? It seems odd that a repeatable issue (I've seen other mailing
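
For reference, the undeploy/deploy cycle mentioned above can be driven through the orchestrator. A rough sketch, assuming a cephadm-managed cluster; mon.host1 and the placement list are placeholders:

  # re-provision the daemon in place
  ceph orch daemon redeploy mon.host1
  # or remove it and let the mon service spec place it again
  ceph orch daemon rm mon.host1 --force
  ceph orch apply mon --placement="host1,host2,host3"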

[ceph-users] Re: rbd object mapping

2021-08-09 Thread Tony Liu
Thank you Konstantin! Tony From: Konstantin Shalygin Sent: August 9, 2021 01:20 AM To: Tony Liu Cc: ceph-users; d...@ceph.io Subject: Re: [ceph-users] rbd object mapping On 8 Aug 2021, at 20:10, Tony Liu <tonyliu0...@hotmail.com> wrote: That's

[ceph-users] Multiple cephfs MDS crashes with same assert_condition: state == LOCK_XLOCK || state == LOCK_XLOCKDONE

2021-08-09 Thread Thomas Hukkelberg
Hi, today we suddenly experienced multiple MDS crashes with an error we have not seen before. We run Octopus 15.2.13 with 4 ranks, 4 standby-replay MDSes and 1 passive standby. Any input on how to troubleshoot or resolve this would be most welcome. --- root@hk-cephnode-54:~#
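
As a starting point, the crash module plus raised MDS debug logging usually capture the assert and its backtrace; a minimal sketch (the crash ID is a placeholder):

  ceph crash ls                      # list recent daemon crashes
  ceph crash info <crash-id>         # full backtrace and assert message
  ceph fs status                     # which ranks and standbys are currently active
  ceph config set mds debug_mds 10   # raise MDS logging before the next crash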

[ceph-users] Re: BUG #51821 - client is using insecure global_id reclaim

2021-08-09 Thread Ilya Dryomov
On Mon, Aug 9, 2021 at 5:14 PM Robert W. Eckert wrote: > > I have had the same issue with the windows client. > I had to issue > ceph config set mon auth_expose_insecure_global_id_reclaim false > Which allows the other clients to connect. > I think you need to restart the monitors as

[ceph-users] Re: BUG #51821 - client is using insecure global_id reclaim

2021-08-09 Thread Robert W. Eckert
I have had the same issue with the windows client. I had to issue ceph config set mon auth_expose_insecure_global_id_reclaim false Which allows the other clients to connect. I think you need to restart the monitors as well, because the first few times I tried this, I still couldn't
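
A condensed sketch of that sequence, assuming a cephadm deployment (with package installs, restart the ceph-mon systemd units instead):

  ceph config set mon auth_expose_insecure_global_id_reclaim false
  ceph orch restart mon    # restart all monitors so the setting takes effect
  ceph health detail       # check whether clients still trigger AUTH_INSECURE_GLOBAL_ID_RECLAIM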

[ceph-users] Balanced use of HDD and SSD

2021-08-09 Thread E Taka
Hello all, a year ago we started with a 3-node cluster for Ceph with 21 HDDs and 3 SSDs, which we installed with cephadm, configuring the disks with `ceph orch apply osd --all-available-devices`. Over time the usage grew quite significantly: now we have another 5 nodes with 8-12 HDDs and 1-2 SSDs
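
One common way to keep HDD and SSD usage separate is to pin pools to a device class with CRUSH rules, so fast pools only land on SSD OSDs. A minimal sketch; rbd-fast is a placeholder pool name, and note that changing a pool's rule will move data:

  ceph osd crush rule create-replicated replicated_hdd default host hdd
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  ceph osd pool set rbd-fast crush_rule replicated_ssd   # this pool now uses only SSD OSDs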

[ceph-users] Re: Size of cluster

2021-08-09 Thread Jorge JP
Hello, this is my osd tree:
ID  CLASS  WEIGHT     TYPE NAME
-1         312.14557  root default
-3          68.97755      host pveceph01
 3    hdd   10.91409          osd.3
14    hdd   16.37109          osd.14
15    hdd   16.37109          osd.15
20    hdd   10.91409          osd.20
23

[ceph-users] Re: Size of cluster

2021-08-09 Thread Robert Sander
Hi, on 09.08.21 at 12:56, Jorge JP wrote: > 15 x 12TB = 180TB > 8 x 18TB = 144TB How are these distributed across your nodes and what is the failure domain? I.e. how will Ceph distribute data among them? > The raw size of this cluster (HDD) should be 295TB after format but the size > of my

[ceph-users] Re: "ceph orch ls", "ceph orch daemon rm" fail with exception "'KeyError: 'not'" on 15.2.10

2021-08-09 Thread Erkki Seppala
Hi, might anyone have any insight into this issue? I have been unable to resolve it so far, and it prevents many "ceph orch" commands and breaks many aspects of the web user interface.
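
To narrow down where the KeyError is raised, the cephadm mgr logs usually contain the Python traceback; a rough sketch of the usual checks (the mgr name is a placeholder):

  ceph orch ls --format json-pretty   # see whether the JSON output fails the same way
  ceph log last cephadm               # recent cephadm log entries, including tracebacks
  ceph mgr fail <active-mgr>          # fail over to a standby mgr (name from 'ceph mgr stat')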

[ceph-users] Size of cluster

2021-08-09 Thread Jorge JP
Hello, I have a Ceph cluster with 5 nodes. I have 23 OSDs distributed among them, all with hdd class. The disk sizes are: 15 x 12TB = 180TB, 8 x 18TB = 144TB. Result of executing the "ceph df" command:
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED  RAW USED  %RAW USED
hdd    295 TiB  163 TiB  131
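
The 295 TiB raw size is consistent with those drive sizes once decimal terabytes are converted to binary tebibytes; a quick check:

  # 15*12 TB + 8*18 TB = 324 TB (decimal) = ~294.7 TiB, which ceph df reports as 295 TiB
  echo "scale=2; (15*12 + 8*18) * 10^12 / 2^40" | bc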

[ceph-users] Re: BUG #51821 - client is using insecure global_id reclaim

2021-08-09 Thread Daniel Persson
Hi Tobias and Richard. Thank you for answering my questions. I got the link suggested by Tobias on the issue report, which led me to further investigation. It was hard to see which kernel version the system was using, but looking at the result of "ceph health detail" and ldd

[ceph-users] Re: BUG #51821 - client is using insecure global_id reclaim

2021-08-09 Thread Tobias Urdin
Hello, Did you follow the fix/recommendation when applying patches as per the documentation in the CVE security post [1] ? Best regards [1] https://docs.ceph.com/en/latest/security/CVE-2021-20288/ > On 9 Aug 2021, at 02:26, Richard Bade wrote: > > Hi Daniel, > I had a similar issue last week
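
For reference, the sequence recommended there is roughly: patch mons and clients first, confirm nothing is still reclaiming its global_id insecurely, and only then disallow it. A condensed sketch:

  ceph health detail   # lists clients still flagged with AUTH_INSECURE_GLOBAL_ID_RECLAIM
  ceph config set mon auth_allow_insecure_global_id_reclaim false   # only after all clients are patched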

[ceph-users] Re: rbd object mapping

2021-08-09 Thread Konstantin Shalygin
> On 8 Aug 2021, at 20:10, Tony Liu wrote: > > That's what I thought. I am confused by this. > > # ceph osd map vm fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk > osdmap e18381 pool 'vm' (4) object > 'fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk' -> pg 4.c7a78d40 (4.0) -> up > ([4,17,6], p4)
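
On the mapping itself: the hex after the pool id (4.c7a78d40) is the full hash of the object name, and the PG in parentheses (4.0) is that hash folded down by the pool's pg_num, which is why the two differ. A couple of quick checks:

  ceph osd pool get vm pg_num   # how many PGs the object-name hash is folded into
  ceph pg map 4.0               # up/acting OSD set for that PG; should match [4,17,6] above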