[ceph-users] Re: Setup Ceph over RDMA

2024-04-26 Thread Vahideh Alinouri
ent - name: ms_async_rdma_cm, type: bool, level: advanced, default: false, with_legacy: true - name: ms_async_rdma_type, type: str, level: advanced, default: ib, with_legacy: true. This causes confusion, and the RDMA setup needs more detail in the documentation. Regards On Mon, Apr 8, 2024 at 10:06 AM Vahideh Alinouri wro
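For reference, a minimal sketch of setting these two options cluster-wide with ceph config; the values are only illustrative (whether RDMA-CM is actually needed depends on the NIC and fabric), not a confirmed working recipe:

    # use RDMA-CM for connection setup instead of raw verbs (default: false)
    ceph config set global ms_async_rdma_cm true
    # transport type, ib by default
    ceph config set global ms_async_rdma_type ib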

[ceph-users] Add node-exporter using ceph orch

2024-04-26 Thread Vahideh Alinouri
Hi guys, I have tried to add node-exporter to a new host in the Ceph cluster using the command mentioned in the documentation: ceph orch apply node-exporter hostname. I think there is a functionality issue, because the cephadm log prints that node-exporter was applied successfully, but it didn't work! I tried the
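For comparison, a sketch of the two usual ways to place node-exporter, plus a way to check whether the daemon really came up; the hostname is a placeholder:

    # deploy node-exporter on every host managed by the orchestrator
    ceph orch apply node-exporter '*'
    # or restrict it to one host (placeholder hostname)
    ceph orch apply node-exporter --placement="hostname"
    # verify that the daemon was actually created and is running
    ceph orch ps --daemon_type node-exporter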

[ceph-users] Setup Ceph over RDMA

2024-04-08 Thread Vahideh Alinouri
Hi guys, I need to set up Ceph over RDMA, but I have faced many issues! Info about my cluster: the Ceph version is Reef and the network cards are Broadcom RDMA NICs. The RDMA connection between the OSD nodes is OK. I only found the ms_type = async+rdma config in the documentation and applied it using ceph config set global ms_type
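A hedged sketch of what a ceph.conf fragment for RDMA might look like in this situation; the device name bnxt_re0 is a placeholder for whatever ibv_devices reports on the Broadcom NICs, and using ms_cluster_type instead of ms_type is one way to limit RDMA to the cluster network while the public network stays on plain TCP:

    [global]
    # RDMA messenger only on the cluster (replication) network
    ms_cluster_type = async+rdma
    # verbs device name as reported by ibv_devices (placeholder)
    ms_async_rdma_device_name = bnxt_re0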

[ceph-users] cephadm purge cluster does not work

2024-02-23 Thread Vahideh Alinouri
Hi guys, I faced an issue: the cluster was not purged when I used the commands below: ceph mgr module disable cephadm cephadm rm-cluster --force --zap-osds --fsid The OSDs remain. There should be some cleanup method for the whole cluster, not just the MON nodes. Is there
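As a point of reference, a sketch of a full teardown under the assumption that cephadm rm-cluster only cleans up the host it is run on, so it has to be repeated on every node (the fsid is a placeholder):

    # on the active MGR host: stop cephadm from redeploying daemons
    ceph mgr module disable cephadm
    # then on EVERY host in the cluster, not only the MON nodes:
    cephadm rm-cluster --force --zap-osds --fsid <fsid>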

[ceph-users] Add nats_adapter

2023-10-30 Thread Vahideh Alinouri
Best regards, Vahideh Alinouri

[ceph-users] header_limit in AsioFrontend class

2023-06-17 Thread Vahideh Alinouri
a configurable option introduced to set the header_limit value, with a default of 16384. I would greatly appreciate it if someone from the Ceph development team could backport this change to the older versions. Best regards, Vahideh Alinouri
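If the change is exposed as a beast (AsioFrontend) frontend parameter, raising the limit might look roughly like the fragment below; the parameter name max_header_size and the RGW instance section name are assumptions, so check the release notes of your version:

    [client.rgw.myhost]
    # placeholder RGW instance section; parameter name is an assumption
    rgw_frontends = beast port=8080 max_header_size=65536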

[ceph-users] Re: Recover pgs from failed osds

2020-09-05 Thread Vahideh Alinouri
ceph.io/thread/EDL7U5EWFHSFK5IIBRBNAIXX7IFWR5QK/ > [2] > > https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/F5MOI47FIVSFHULNNPWEJAY6LLDOVUJQ/ > > > Quoting Vahideh Alinouri: > > > Is there any solution or advice? > > > > On Tue, Sep 1, 20

[ceph-users] Re: Recover pgs from failed osds

2020-09-01 Thread Vahideh Alinouri
Is there any solution or advice? On Tue, Sep 1, 2020, 11:53 AM Vahideh Alinouri wrote: > One of the failed OSDs with a 3G memory_target started, and dump_mempools shows total RAM > usage of 18G, with buffer_anon using 17G of RAM! > > On Mon, Aug 31, 2020 at 6:24 PM Vahideh Alinouri < > vahideh.alino..

[ceph-users] Re: Recover pgs from failed osds

2020-09-01 Thread Vahideh Alinouri
One of the failed OSDs with a 3G memory_target started, and dump_mempools shows total RAM usage of 18G, with buffer_anon using 17G of RAM! On Mon, Aug 31, 2020 at 6:24 PM Vahideh Alinouri wrote: > The osd_memory_target of the failed OSD on one ceph-osd node was changed to 6G, but > the other osd_memory_targets are 3G; starting the fail
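For reference, a sketch of how that mempool breakdown can be pulled from an OSD; osd.12 is a placeholder id, and the first form assumes you are on the node that hosts the OSD's admin socket:

    # on the OSD node, via the admin socket (placeholder id)
    ceph daemon osd.12 dump_mempools
    # or remotely, if the OSD is up enough to respond
    ceph tell osd.12 dump_mempools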

[ceph-users] Re: Recover pgs from failed osds

2020-08-31 Thread Vahideh Alinouri
the opposite and turn up the memory_target and only try to > start a single OSD? > > > Quoting Vahideh Alinouri: > > > osd_memory_target was changed to 3G; starting the failed OSD causes the ceph-osd > > nodes to crash, and the failed OSD is still "down" >

[ceph-users] Re: Recover pgs from failed osds

2020-08-31 Thread Vahideh Alinouri
osd_memory_target was changed to 3G; starting the failed OSD causes the ceph-osd nodes to crash, and the failed OSD is still "down". On Fri, Aug 28, 2020 at 1:13 PM Vahideh Alinouri wrote: > Yes, each OSD node has 7 OSDs with a 4 GB memory_target. > > > On Fri, Aug 28, 2020, 12:48 PM Eugen B
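A hedged sketch of the per-OSD variant of this tuning, i.e. raising the target for just the failed OSD and starting only that daemon; the id and size are placeholders, and note that osd_memory_target is a best-effort target for cache trimming rather than a hard limit, which is consistent with dump_mempools reporting usage far above it:

    # raise the target for a single OSD only (placeholder id and size)
    ceph config set osd.12 osd_memory_target 6442450944   # 6 GiB
    # start only that OSD on its node (non-cephadm / systemd deployment)
    systemctl start ceph-osd@12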

[ceph-users] Re: Recover pgs from failed osds

2020-08-28 Thread Vahideh Alinouri
to reduce the memory_target to 3 > GB and see if they start successfully. > > > Quoting Vahideh Alinouri: > > > osd_memory_target is 4294967296. > > Cluster setup: > > 3 MONs, 3 MGRs, 21 OSDs on 3 ceph-osd nodes in the lvm scenario. ceph-osd > nodes > > resourc

[ceph-users] Re: Recover pgs from failed osds

2020-08-28 Thread Vahideh Alinouri
f that helps bring the OSDs back up. Splitting the PGs is a > very heavy operation. > > > Quoting Vahideh Alinouri: > > > The Ceph cluster was updated from Nautilus to Octopus. On the ceph-osd nodes we > have > > high I/O wait. > > > > After increasing one pool's

[ceph-users] Recover pgs from failed osds

2020-08-27 Thread Vahideh Alinouri
vahideh.alino...@gmail.com

[ceph-users] Recover pgs from failed osds

2020-08-27 Thread Vahideh Alinouri
The Ceph cluster was updated from Nautilus to Octopus. On the ceph-osd nodes we have high I/O wait. After increasing one pool's pg_num from 64 to 128 according to the warning message (more objects per PG), this led to high CPU load and RAM usage on the ceph-osd nodes and finally crashed the whole cluster.
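For context, a sketch of the pg_num change being described; the pool name is a placeholder, and on Octopus the pgp_num increase proceeds gradually in the background, so the extra load can continue long after the command returns:

    # raise the placement group count for one pool (placeholder name)
    ceph osd pool set mypool pg_num 128
    # watch the split and backfill progress
    ceph -s
    ceph osd pool get mypool pg_num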