- name: ms_async_rdma_cm
type: bool
level: advanced
default: false
with_legacy: true
- name: ms_async_rdma_type
type: str
level: advanced
default: ib
with_legacy: true
This causes confusion, and the RDMA setup needs more detail in the documentation.
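For reference, a minimal sketch of applying the two options listed above at runtime (this assumes a cluster whose RDMA links already work; whether rdma_cm is appropriate depends on the NIC, and the values shown are illustrative, not a recommendation):

```shell
# Enable the RDMA-CM connection manager and pick the transport type.
# Option names are taken from the options.yaml snippet above.
ceph config set global ms_async_rdma_cm true
ceph config set global ms_async_rdma_type ib
# Messenger options are not applied live; daemons must be restarted.
```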
Regards
On Mon, Apr 8, 2024 at 10:06 AM Vahideh Alinouri wrote:
Hi guys,
I have tried to add node-exporter to a new host in the ceph cluster using
the command mentioned in the document:
ceph orch apply node-exporter hostname
I think there is a functionality issue, because the cephadm log prints that
node-exporter was applied successfully, but it didn't work!
I tried the
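As a hedged sketch, these cephadm commands can help verify whether the node-exporter service was actually scheduled and its daemons started (the hostname newhost is an assumption for illustration):

```shell
# What the orchestrator believes it deployed for this service
ceph orch ls node-exporter
# List node-exporter daemons and their current state per host
ceph orch ps --daemon_type node-exporter
# Re-apply with an explicit placement on the new host (hostname assumed)
ceph orch apply node-exporter --placement="newhost"
```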
Hi guys,
I need to set up Ceph over RDMA, but I have faced many issues!
The info regarding my cluster:
Ceph version is Reef.
Network cards are Broadcom RDMA NICs.
The RDMA connection between the OSD nodes is OK.
I just found the ms_type = async+rdma config in the documentation and
applied it using:
ceph config set global ms_type async+rdma
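A hedged sketch of what is usually needed beyond ms_type on RoCE-capable NICs such as Broadcom's (the device name bnxt_re0 and the RoCE version value are assumptions for this particular cluster, not verified settings):

```shell
# Confirm the messenger type actually took effect
ceph config get global ms_type
# The RDMA messenger also needs the verbs device name (device is assumed)
ceph config set global ms_async_rdma_device_name bnxt_re0
# For RoCE NICs the RoCE version may need to be set explicitly (value assumed)
ceph config set global ms_async_rdma_roce_ver 2
# All daemons must be restarted before the new messenger is used.
```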
Hi Guys,
I faced an issue: when I wanted to purge the cluster, it was not purged
using the commands below:
ceph mgr module disable cephadm
cephadm rm-cluster --force --zap-osds --fsid
The OSDs remain. There should be a cleanup method for the whole cluster,
not just the MON nodes. Is there one?
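As a sketch of a fuller teardown (the assumption here is that rm-cluster only removes daemons and OSDs on the host it runs on, so it has to be repeated on every host; the fsid placeholder stands for the real cluster fsid):

```shell
# Disable cephadm first so the mgr does not redeploy daemons mid-teardown
ceph mgr module disable cephadm
# Then, on EVERY host in the cluster (not only the MON host),
# remove that host's daemons and zap its OSD devices:
cephadm rm-cluster --force --zap-osds --fsid <fsid>
```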
Best regards,
Vahideh Alinouri
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
a configurable option introduced to set the header_limit value, with a
default of 16384.
I would greatly appreciate it if someone from the Ceph development
team could backport this change to older versions.
Best regards,
Vahideh Alinouri
ceph.io/thread/EDL7U5EWFHSFK5IIBRBNAIXX7IFWR5QK/
> [2]
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/F5MOI47FIVSFHULNNPWEJAY6LLDOVUJQ/
Is there any solution or advice?
On Tue, Sep 1, 2020, 11:53 AM Vahideh Alinouri
wrote:
One of the failed OSDs with a 3G osd_memory_target started, and
dump_mempools shows total RAM usage of 18G, with buffer_anon using 17G!
On Mon, Aug 31, 2020 at 6:24 PM Vahideh Alinouri
wrote:
> The osd_memory_target of the failed OSD on one ceph-osd node was changed
> to 6G, but the other OSDs' osd_memory_target is 3G; starting still fails.
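The steps being discussed can be sketched as follows (the OSD id 12 is an assumption for illustration; dump_mempools is the admin-socket command mentioned above):

```shell
# Raise the memory target for one OSD only (value in bytes, 6 GiB here)
ceph config set osd.12 osd_memory_target 6442450944
# After starting it, inspect the mempool usage via the admin socket
ceph daemon osd.12 dump_mempools
```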
the opposite and turn up the memory_target and only try to
> start a single OSD?
>
>
osd_memory_target was changed to 3G; starting the failed OSD causes the
ceph-osd nodes to crash, and the failed OSD is still "down".
On Fri, Aug 28, 2020 at 1:13 PM Vahideh Alinouri
wrote:
> Yes, each osd node has 7 osds with 4 GB memory_target.
>
>
> On Fri, Aug 28, 2020, 12:48 PM Eugen B
to reduce the memory_target to 3
> GB and see if they start successfully.
>
>
> Zitat von Vahideh Alinouri :
>
> > osd_memory_target is 4294967296.
> > Cluster setup:
> > 3 mon, 3 mgr, 21 osds on 3 ceph-osd nodes in lvm scenario. ceph-osd
> > nodes resourc
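A quick sanity check on those numbers (assuming the 21 OSDs are spread evenly, 7 per node, each with the 4294967296-byte target quoted above): the memory targets alone reserve 28 GiB per node, before any BlueStore overshoot of the kind reported here.

```shell
# 7 OSDs per node, osd_memory_target = 4294967296 bytes (4 GiB each)
echo $(( 7 * 4294967296 / 1024 / 1024 / 1024 ))   # GiB claimed by the targets per node
```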
f that helps bring the OSDs back up. Splitting the PGs is a
> very heavy operation.
>
>
The Ceph cluster was updated from Nautilus to Octopus. On the ceph-osd
nodes we have high I/O wait.
After increasing one of the pools' pg_num from 64 to 128 according to the
warning message (more objects per pg), this led to high CPU load and RAM
usage on the ceph-osd nodes and finally crashed the whole cluster.
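For reference, a hedged sketch of the pg_num change being described (the pool name mypool is an assumption; since Nautilus the mgr applies pg_num increases gradually, so progress should be watched rather than assumed to be instantaneous):

```shell
# Raise pg_num on one pool; pgp_num is adjusted automatically since Nautilus
ceph osd pool set mypool pg_num 128
# Watch split/backfill progress before making further changes
ceph -s
ceph osd pool get mypool pg_num
```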