Thank you, Ernesto.

Yes - so I see that the eno1, eno2, and docker0 interfaces all show up with an 
MTU of 1500, which is correct, but since these interfaces are not being used at 
all, they shouldn't be flagged as a problem. I'll just ignore the errors for 
now, but it would be good to have a way to indicate that these interfaces are 
not in use.
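
In the meantime, a silence scoped to just those devices might be a way to 
approximate that, e.g. via Alertmanager matchers (a sketch; it assumes the 
alert carries a "device" label, which is worth verifying against the actual 
alert first):

    # silence only the unused interfaces for 30 days; the device label
    # and the Alertmanager host are assumptions, adjust to your setup
    amtool silence add 'device=~"eno1|eno2|docker0"' --comment 'NICs not in use' \
        --duration 720h --alertmanager.url http://<alertmanager-host>:9093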

-Paul


On Aug 6, 2021, at 12:45 PM, Ernesto Puerta <epuer...@redhat.com> wrote:

Hi Paul,

The Prometheus web UI is available at port 9095. It doesn't require any 
credentials to log in: simply type the name of the metric 
("node_network_mtu_bytes") in the text box and you'll get the latest values:

<image.png>
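
If you'd rather use the command line, the same query can be issued against 
Prometheus's HTTP API (a sketch; replace <prometheus-host> with whichever host 
runs the Prometheus instance):

    # query the current value of the metric via the HTTP API
    curl -s 'http://<prometheus-host>:9095/api/v1/query?query=node_network_mtu_bytes'

The JSON response contains one result per device/instance pair with its 
current MTU value.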

As suggested, if you want to mute those alerts, you can do that from the 
Cluster > Monitoring menu:

<image.png>
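
If you manage silences outside the dashboard, Alertmanager's amtool can create 
the same silence from a shell (a sketch; the alert name and Alertmanager host 
below are assumptions, so check what your rule is actually called first):

    # silence the MTU alert for 24h; adjust the alertname matcher to
    # whatever your Prometheus alerting rule actually uses
    amtool silence add 'alertname="MTU Mismatch"' --comment 'unused NICs' \
        --duration 24h --alertmanager.url http://<alertmanager-host>:9093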

Kind Regards,
Ernesto



On Wed, Aug 4, 2021 at 10:07 PM Paul Giralt (pgiralt) <pgir...@cisco.com> wrote:
I’m seeing the same issue. I’m not familiar with where to access the 
“Prometheus UI”. Can you point me to some instructions on how to do this and 
I’ll gladly collect the output of that command.

FWIW, here are the interfaces on my machine:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
3: enp7s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
4: enp17s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
5: enp18s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
6: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ec:bd:1d:08:87:8e brd ff:ff:ff:ff:ff:ff
7: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ec:bd:1d:08:87:8f brd ff:ff:ff:ff:ff:ff
8: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 5c:83:8f:80:13:a4 brd ff:ff:ff:ff:ff:ff
    inet 10.9.192.196/24 brd 10.9.192.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::5e83:8fff:fe80:13a4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
9: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 58:97:bd:8f:76:d8 brd ff:ff:ff:ff:ff:ff
    inet 10.122.242.196/24 brd 10.122.242.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::5a97:bdff:fe8f:76d8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d2:19:1a:28 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

I did notice that docker0 has an MTU of 1500, as do the eno1 and eno2 
interfaces, which I'm not using. I'm not sure if that's related to the error. 
I've been meaning to try changing the MTU on the eno interfaces just to see if 
that makes a difference, but I haven't gotten around to it.
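
For the record, the change itself should just be a one-liner (assuming 
iproute2; this is not persistent across reboots):

    # bump the MTU on one of the unused on-board NICs
    sudo ip link set dev eno1 mtu 9000

    # quick way to eyeball every interface's MTU afterwards
    ip -o link | awk '{print $2, $5}'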

-Paul


> On Aug 4, 2021, at 2:31 PM, Ernesto Puerta <epuer...@redhat.com> wrote:
>
> Hi J-P,
>
> Could you please go to the Prometheus UI and share the output of the
> following query: "node_network_mtu_bytes"? That'd be useful for
> understanding the issue. Could you also open a tracker issue here:
> https://tracker.ceph.com/projects/dashboard/issues/new ?
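>
> (For context, the dashboard flags an interface whose MTU differs from the
> cluster-wide median, so a rough PromQL sketch of that check, assuming the
> stock node_exporter metric, would be:
>
>     node_network_mtu_bytes != scalar(quantile(0.5, node_network_mtu_bytes))
>
> Any series it returns is an interface whose MTU deviates from the median.)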
>
> In the meantime you should be able to mute the alert (Cluster > Monitoring > Silences).
>
> Kind Regards,
> Ernesto
>
>
> On Wed, Aug 4, 2021 at 5:49 PM J-P Methot <jp.met...@planethoster.info>
> wrote:
>
>> Hi,
>>
>> We're running Ceph 16.2.5 Pacific and, in the Ceph dashboard, we keep
>> getting an MTU mismatch alert. However, all our hosts have the same
>> network configuration:
>>
>> => bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
>> => vlan.24@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
>>
>>
>> Physical interfaces, bonds, and VLANs are all set to 9000.
>>
>> The alert's message looks like this:
>>
>> Node node20 has a different MTU size (9000) than the median value on
>> device vlan.24.
>>
>> Is this a known Pacific bug? None of our other Ceph clusters does this
>> (they are running on Octopus/Nautilus).
>>
>> --
>>
>> Jean-Philippe Méthot
>> Senior OpenStack system administrator
>> PlanetHoster inc.
>>


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
