[ceph-users] Re: Network performance checks

2020-01-30 Thread Stefan Kooman
Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com):
> After having upgraded my ceph cluster from Luminous to Nautilus 14.2.6,
> from time to time "ceph health detail" claims about some "Long heartbeat
> ping times on front/back interface seen".
>
> As far as I can understand (after having r

[ceph-users] Re: Network performance checks

2020-01-30 Thread Massimo Sgaravatto
Thanks for your answer

MON-MGR hosts have a mgmt network and a public network. OSD nodes have instead a mgmt network, a public network, and a cluster network.

This is what I have in ceph.conf:

public network = 192.168.61.0/24
cluster network = 192.168.222.0/24

public and cluster networks are 10
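For reference, a minimal ceph.conf fragment matching the setup described above might look like the following (a sketch only: the section placement and comments are assumptions, and the mgmt network deliberately does not appear, since Ceph itself only uses the public and cluster networks):

```ini
[global]
# Client/MON traffic; also the OSD "front" heartbeat interface
public network = 192.168.61.0/24
# OSD replication/recovery traffic; also the "back" heartbeat interface
cluster network = 192.168.222.0/24
```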

[ceph-users] Re: Network performance checks

2020-01-30 Thread Stefan Kooman
Hi,

Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com):
> Thanks for your answer
>
> MON-MGR hosts have a mgmt network and a public network.
> OSD nodes have instead a mgmt network, a public network, and a cluster
> network
> This is what I have in ceph.conf:
>
> public network = 192.168

[ceph-users] Re: Network performance checks

2020-01-31 Thread Massimo Sgaravatto
I am seeing very few such error messages in the mon logs (~ a couple per day). If I issue on every OSD the command "ceph daemon osd.$id dump_osd_network" with the default 1000 ms threshold, I can't see any entries. I guess this is because that command considers only the last (15?) minutes. Am I sup
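The dump_osd_network command returns JSON, so the entries can be post-processed to find which peer/interface pairs exceeded the threshold. The sketch below is a hypothetical example: the sample payload is mock data, and the field names ("from osd", "to osd", "interface", per-window "average") are an assumption about the Nautilus output schema, which may differ between releases.

```python
import json

# Mock, simplified sample of "ceph daemon osd.<id> dump_osd_network" output.
# The field names here are an assumption; check your release's actual schema.
SAMPLE = json.loads("""
{
  "threshold": 1000,
  "entries": [
    {"from osd": 3, "to osd": 7, "interface": "front",
     "average": {"1min": 1250.4, "5min": 980.1, "15min": 610.0}},
    {"from osd": 3, "to osd": 9, "interface": "back",
     "average": {"1min": 420.7, "5min": 390.2, "15min": 401.3}}
  ]
}
""")

def slow_peers(report, threshold_ms=None):
    """Return (from_osd, to_osd, interface) tuples whose 1-minute average
    heartbeat ping exceeds the threshold in milliseconds."""
    limit = threshold_ms if threshold_ms is not None else report["threshold"]
    return [(e["from osd"], e["to osd"], e["interface"])
            for e in report["entries"]
            if e["average"]["1min"] > limit]

print(slow_peers(SAMPLE))       # only the front-interface pair exceeds 1000 ms
print(slow_peers(SAMPLE, 400))  # a lower cutoff surfaces both peers
```

Note that the admin-socket command also accepts an optional threshold argument (e.g. a low value like 0 to dump all recorded entries, not just the slow ones), which can help confirm whether the window is simply empty.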