[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-06 Thread Paul Giralt (pgiralt)
Thank you Ernesto.

Yes - so I see that the eno1, eno2, and docker0 interfaces all show up with an 
MTU of 1500, which is correct, but since these interfaces are not being used at 
all, they shouldn’t be flagged as a problem. I’ll just ignore the errors for 
now, but it would be good to have a way to indicate that these interfaces are 
not being used.

-Paul


On Aug 6, 2021, at 12:45 PM, Ernesto Puerta <epuer...@redhat.com> wrote:

Hi Paul,

The Prometheus web UI is available at port 9095. It doesn't need any 
credentials to log in: simply type the name of the metric 
("node_network_mtu_bytes") in the text box and you'll get the latest values:



As suggested, if you want to mute those alerts you can do that from the Cluster 
> Monitoring menu:



Kind Regards,
Ernesto



On Wed, Aug 4, 2021 at 10:07 PM Paul Giralt (pgiralt) <pgir...@cisco.com> wrote:
I’m seeing the same issue. I’m not familiar with where to access the 
“Prometheus UI”. Can you point me to some instructions on how to do this and 
I’ll gladly collect the output of that command.

FWIW, here are the interfaces on my machine:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp6s0:  mtu 9000 qdisc mq master 
bond0 state UP group default qlen 1000
link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
3: enp7s0:  mtu 9000 qdisc mq master 
bond0 state UP group default qlen 1000
link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
4: enp17s0:  mtu 9000 qdisc mq master 
bond1 state UP group default qlen 1000
link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
5: enp18s0:  mtu 9000 qdisc mq master 
bond1 state UP group default qlen 1000
link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
6: eno1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
link/ether ec:bd:1d:08:87:8e brd ff:ff:ff:ff:ff:ff
7: eno2:  mtu 1500 qdisc mq state DOWN group 
default qlen 1000
link/ether ec:bd:1d:08:87:8f brd ff:ff:ff:ff:ff:ff
8: bond1:  mtu 9000 qdisc noqueue state 
UP group default qlen 1000
link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
inet 10.9.192.196/24 brd 10.9.192.255 scope global 
noprefixroute bond1
   valid_lft forever preferred_lft forever
inet6 fe80::5e83:8fff:fe80:13a4/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
9: bond0:  mtu 9000 qdisc noqueue state 
UP group default qlen 1000
link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
inet 10.122.242.196/24 brd 10.122.242.255 scope 
global noprefixroute bond0
   valid_lft forever preferred_lft forever
inet6 fe80::5a97:bdff:fe8f:76d8/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
10: docker0:  mtu 1500 qdisc noqueue state 
DOWN group default
link/ether 02:42:d2:19:1a:28 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global 
docker0
   valid_lft forever preferred_lft forever

I did notice that docker0 has an MTU of 1500 as do the eno1 and eno2 interfaces 
which I’m not using. I’m not sure if that’s related to the error. I’ve been 
meaning to try changing the MTU on the eno interfaces just to see if that makes 
a difference but haven’t gotten around to it.
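
In case it's useful, a rough sketch of making that runtime MTU change from
Python with the pyroute2 library (an extra dependency, needs root, and the
change doesn't survive a reboot - persisting it belongs in the host's network
configuration):

from pyroute2 import IPRoute

def set_mtu(ifname, mtu):
    # Look up the interface index and apply the new MTU (runtime change only).
    with IPRoute() as ipr:
        idx = ipr.link_lookup(ifname=ifname)
        if not idx:
            raise ValueError(f"no such interface: {ifname}")
        ipr.link("set", index=idx[0], mtu=mtu)

for nic in ("eno1", "eno2"):
    set_mtu(nic, 9000)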

-Paul


> On Aug 4, 2021, at 2:31 PM, Ernesto Puerta <epuer...@redhat.com> wrote:
>
> Hi J-P,
>
> Could you please go to the Prometheus UI and share the output of the
> following query "node_network_mtu_bytes"? That'd be useful to understand
> the issue. If you can open a tracker issue here:
> https://tracker.ceph.com/projects/dashboard/issues/new ?
>
> In the meantime you should be able to mute the alert (Cluster > Monitoring
>> Silences).
>
> Kind Regards,
> Ernesto
>
>
> On Wed, Aug 4, 2021 at 5:49 PM J-P Methot <jp.met...@planethoster.info>
> wrote:
>
>> Hi,
>>
>> We're running Ceph 16.2.5 Pacific and, in the ceph dashboard, we keep
>> getting a MTU mismatch alert. However, all our hosts have the same
>> network configuration:
>>
>> => bond0:  mtu 9000 qdisc
>> noqueue state UP group default qlen 1000 => vlan.24@bond0:
>>  mtu 9000 qdisc noqueue state UP group
>> default qlen 1000
>>
>>
>> Physical interfaces, bond and VLANs are all set to 9000.
>>
>> The alert's message looks like this:
>>
>> Node node20 has a different MTU size (9000) than the median value on
>> device vlan.24.
>>
>> Is this a known Pacific bug? None of our other Ceph clusters does this
>> (they are running on Octopus/Nautilus).
>>
>> --
>>
>> Jean-Philippe Méthot
>> Senior Openstack system administrator
>> Administrateur système Openstack sénior
>> PlanetHoster inc.
>>
>> ___
>> ceph-users mailing list -- 

[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-06 Thread Ernesto Puerta
Thanks, Kai! We moved the Dashboard tickets to a separate subproject, so I
just moved that tracker. It should be easy to at least exclude the NICs that are down.

Kind Regards,
Ernesto


On Fri, Aug 6, 2021 at 9:13 AM Kai Stian Olstad 
wrote:

> On 04.08.2021 20:31, Ernesto Puerta wrote:
> > Could you please go to the Prometheus UI and share the output of the
> > following query "node_network_mtu_bytes"? That'd be useful to
> > understand
> > the issue. If you can open a tracker issue here:
> > https://tracker.ceph.com/projects/dashboard/issues/new ?
>
> Found an issue reported under MGR
> https://tracker.ceph.com/issues/52028 - mgr/dashboard: Incorrect MTU
> mismatch warning
>
> --
> Kai Stian Olstad
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-06 Thread Kai Stian Olstad

On 04.08.2021 20:31, Ernesto Puerta wrote:

Could you please go to the Prometheus UI and share the output of the
following query "node_network_mtu_bytes"? That'd be useful to understand
the issue. If you can open a tracker issue here:
https://tracker.ceph.com/projects/dashboard/issues/new ?


Found an issue reported under MGR
https://tracker.ceph.com/issues/52028 - mgr/dashboard: Incorrect MTU 
mismatch warning


--
Kai Stian Olstad
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-04 Thread Paul Giralt (pgiralt)
Actually, it’s complaining about docker0 as well. Not sure how to change 
the MTU on that one, though. It’s not even up. 

-Paul


> On Aug 4, 2021, at 5:24 PM, Paul Giralt  wrote:
> 
> Yes - you’re right. It’s complaining about eno1 and eno2 which I’m not using. 
> I’ll change those and it will probably make the error go away. I’m guessing 
> something changed between 16.2.4 and 16.2.5 because I didn’t start seeing 
> this error until after the upgrade. 
> 
> -Paul
> 
> 
>> On Aug 4, 2021, at 5:09 PM, Kai Stian Olstad  wrote:
>> 
>> On 04.08.2021 22:06, Paul Giralt (pgiralt) wrote:
>>> I did notice that docker0 has an MTU of 1500 as do the eno1 and eno2
>>> interfaces which I’m not using. I’m not sure if that’s related to the
>>> error. I’ve been meaning to try changing the MTU on the eno interfaces
>>> just to see if that makes a difference but haven’t gotten around to
>>> it.
>> 
>> If you look at the message it says which interface it is.
>> 
>> It does check and report on all the interfaces, even those that are in the DOWN 
>> state, which it shouldn't.
>> 
>> 
>> -- 
>> Kai Stian Olstad
> 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-04 Thread Paul Giralt (pgiralt)
Yes - you’re right. It’s complaining about eno1 and eno2 which I’m not using. 
I’ll change those and it will probably make the error go away. I’m guessing 
something changed between 16.2.4 and 16.2.5 because I didn’t start seeing this 
error until after the upgrade. 

-Paul


> On Aug 4, 2021, at 5:09 PM, Kai Stian Olstad  wrote:
> 
> On 04.08.2021 22:06, Paul Giralt (pgiralt) wrote:
>> I did notice that docker0 has an MTU of 1500 as do the eno1 and eno2
>> interfaces which I’m not using. I’m not sure if that’s related to the
>> error. I’ve been meaning to try changing the MTU on the eno interfaces
>> just to see if that makes a difference but haven’t gotten around to
>> it.
> 
> If you look at the message it says which interface it is.
> 
> It does check and report on all the interfaces, even those that are in the DOWN 
> state, which it shouldn't.
> 
> 
> -- 
> Kai Stian Olstad

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-04 Thread Kai Stian Olstad

On 04.08.2021 22:06, Paul Giralt (pgiralt) wrote:


I did notice that docker0 has an MTU of 1500 as do the eno1 and eno2
interfaces which I’m not using. I’m not sure if that’s related to the
error. I’ve been meaning to try changing the MTU on the eno interfaces
just to see if that makes a difference but haven’t gotten around to
it.


If you look at the message it says which interface it is.

It does check and report on all the interfaces, even those that are in the 
DOWN state, which it shouldn't.
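
To illustrate the effect, here is a rough Python sketch of a median-style MTU
comparison like the one the alert text describes (not the dashboard's actual
code). With every interface counted, the unused 1500-MTU NICs pull the median
down and the in-use 9000-MTU devices get flagged instead:

from statistics import median

# Hypothetical per-interface data mirroring the hosts in this thread: (device, mtu, state).
interfaces = [
    ("bond0", 9000, "up"),
    ("bond1", 9000, "up"),
    ("eno1", 1500, "up"),      # cabled but unused
    ("eno2", 1500, "down"),
    ("docker0", 1500, "down"),
]

def mtu_mismatches(ifaces, skip_down):
    # Compare every considered interface against the median MTU of that set.
    considered = [(dev, mtu) for dev, mtu, state in ifaces if state == "up" or not skip_down]
    med = median(mtu for _, mtu in considered)
    return [dev for dev, mtu in considered if mtu != med]

print(mtu_mismatches(interfaces, skip_down=False))  # ['bond0', 'bond1'] - the in-use NICs get flagged
print(mtu_mismatches(interfaces, skip_down=True))   # ['eno1'] - still noisy while eno1 stays at 1500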



--
Kai Stian Olstad
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-04 Thread Paul Giralt (pgiralt)
I’m seeing the same issue. I’m not familiar with where to access the 
“Prometheus UI”. Can you point me to some instructions on how to do this and 
I’ll gladly collect the output of that command. 

FWIW, here are the interfaces on my machine: 

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp6s0:  mtu 9000 qdisc mq master 
bond0 state UP group default qlen 1000
link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
3: enp7s0:  mtu 9000 qdisc mq master 
bond0 state UP group default qlen 1000
link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
4: enp17s0:  mtu 9000 qdisc mq master 
bond1 state UP group default qlen 1000
link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
5: enp18s0:  mtu 9000 qdisc mq master 
bond1 state UP group default qlen 1000
link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
6: eno1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
link/ether ec:bd:1d:08:87:8e brd ff:ff:ff:ff:ff:ff
7: eno2:  mtu 1500 qdisc mq state DOWN group 
default qlen 1000
link/ether ec:bd:1d:08:87:8f brd ff:ff:ff:ff:ff:ff
8: bond1:  mtu 9000 qdisc noqueue state 
UP group default qlen 1000
link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
inet 10.9.192.196/24 brd 10.9.192.255 scope global noprefixroute bond1
   valid_lft forever preferred_lft forever
inet6 fe80::5e83:8fff:fe80:13a4/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
9: bond0:  mtu 9000 qdisc noqueue state 
UP group default qlen 1000
link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
inet 10.122.242.196/24 brd 10.122.242.255 scope global noprefixroute bond0
   valid_lft forever preferred_lft forever
inet6 fe80::5a97:bdff:fe8f:76d8/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
10: docker0:  mtu 1500 qdisc noqueue state 
DOWN group default
link/ether 02:42:d2:19:1a:28 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
   valid_lft forever preferred_lft forever

I did notice that docker0 has an MTU of 1500 as do the eno1 and eno2 interfaces 
which I’m not using. I’m not sure if that’s related to the error. I’ve been 
meaning to try changing the MTU on the eno interfaces just to see if that makes 
a difference but haven’t gotten around to it. 

-Paul


> On Aug 4, 2021, at 2:31 PM, Ernesto Puerta  wrote:
> 
> Hi J-P,
> 
> Could you please go to the Prometheus UI and share the output of the
> following query "node_network_mtu_bytes"? That'd be useful to understand
> the issue. If you can open a tracker issue here:
> https://tracker.ceph.com/projects/dashboard/issues/new ?
> 
> In the meantime you should be able to mute the alert (Cluster > Monitoring
>> Silences).
> 
> Kind Regards,
> Ernesto
> 
> 
> On Wed, Aug 4, 2021 at 5:49 PM J-P Methot 
> wrote:
> 
>> Hi,
>> 
>> We're running Ceph 16.2.5 Pacific and, in the ceph dashboard, we keep
>> getting a MTU mismatch alert. However, all our hosts have the same
>> network configuration:
>> 
>> => bond0:  mtu 9000 qdisc
>> noqueue state UP group default qlen 1000 => vlan.24@bond0:
>>  mtu 9000 qdisc noqueue state UP group
>> default qlen 1000
>> 
>> 
>> Physical interfaces, bond and VLANs are all set to 9000.
>> 
>> The alert's message looks like this:
>> 
>> Node node20 has a different MTU size (9000) than the median value on
>> device vlan.24.
>> 
>> Is this a known Pacific bug? None of our other Ceph clusters does this
>> (they are running on Octopus/Nautilus).
>> 
>> --
>> 
>> Jean-Philippe Méthot
>> Senior Openstack system administrator
>> Administrateur système Openstack sénior
>> PlanetHoster inc.
>> 
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MTU mismatch error in Ceph dashboard

2021-08-04 Thread Ernesto Puerta
Hi J-P,

Could you please go to the Prometheus UI and share the output of the
following query "node_network_mtu_bytes"? That'd be useful to understand
the issue. If you can open a tracker issue here:
https://tracker.ceph.com/projects/dashboard/issues/new ?

In the meantime you should be able to mute the alert (Cluster > Monitoring
> Silences).
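
For anyone who prefers to script it, the same silence can be created against
the Alertmanager v2 API - a sketch below, assuming Alertmanager listens on its
default port 9093; the "MTUMismatch" alert name is a placeholder for whatever
name your dashboard actually shows:

import json
import urllib.request
from datetime import datetime, timedelta, timezone

ALERTMANAGER_URL = "http://localhost:9093"  # assumption - adjust for your deployment

now = datetime.now(timezone.utc)
silence = {
    # Match the alert by name; replace "MTUMismatch" with the real alert name.
    "matchers": [{"name": "alertname", "value": "MTUMismatch", "isRegex": False}],
    "startsAt": now.isoformat(),
    "endsAt": (now + timedelta(days=7)).isoformat(),
    "createdBy": "ceph-admin",
    "comment": "Unused NICs report MTU 1500 and trigger a false mismatch",
}

req = urllib.request.Request(
    f"{ALERTMANAGER_URL}/api/v2/silences",
    data=json.dumps(silence).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # Alertmanager returns the new silence ID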

Kind Regards,
Ernesto


On Wed, Aug 4, 2021 at 5:49 PM J-P Methot 
wrote:

> Hi,
>
> We're running Ceph 16.2.5 Pacific and, in the ceph dashboard, we keep
> getting a MTU mismatch alert. However, all our hosts have the same
> network configuration:
>
> => bond0:  mtu 9000 qdisc
> noqueue state UP group default qlen 1000 => vlan.24@bond0:
>  mtu 9000 qdisc noqueue state UP group
> default qlen 1000
>
>
> Physical interfaces, bond and VLANs are all set to 9000.
>
> The alert's message looks like this:
>
> Node node20 has a different MTU size (9000) than the median value on
> device vlan.24.
>
> Is this a known Pacific bug? None of our other Ceph clusters does this
> (they are running on Octopus/Nautilus).
>
> --
>
> Jean-Philippe Méthot
> Senior Openstack system administrator
> Administrateur système Openstack sénior
> PlanetHoster inc.
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io