Public bug reported:

Hello!

I would like to report a possible bug.
We are currently running Rocky on Ubuntu 18.04.
We deploy with custom Ansible playbooks.

We have a setup where the upstream core Cisco Nexus DC switches answer
the router solicitations (i.e. they send the RAs). This has worked fine
for years on a network we upgraded from Kilo.

Now we built a new region, with new network nodes, etc., and IPv6 does
not work there the way it does in the old region.

In the new region, we have this subnet:

[PROD][root(cc1:0)] <~> openstack subnet show Flat1-subnet-v6
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | 2001:738:0:527::2-2001:738:0:527:ffff:ffff:ffff:ffff |
| cidr              | 2001:738:0:527::/64                                  |
| created_at        | 2020-07-01T22:59:53Z                                 |
| description       |                                                      |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 2001:738:0:527::1                                    |
| host_routes       |                                                      |
| id                | a5a9991c-62f3-4f46-b1ef-e293dc0fb781                 |
| ip_version        | 6                                                    |
| ipv6_address_mode | slaac                                                |
| ipv6_ra_mode      | None                                                 |
| name              | Flat1-subnet-v6                                      |
| network_id        | fa55bfc7-ab42-4d97-987e-645cca7a0601                 |
| project_id        | b48a9319a66e45f3b04cc8bb70e3113c                     |
| revision_number   | 0                                                    |
| segment_id        | None                                                 |
| service_types     |                                                      |
| subnetpool_id     | None                                                 |
| tags              |                                                      |
| updated_at        | 2020-07-01T22:59:53Z                                 |
+-------------------+------------------------------------------------------+

As you can see, the address mode is SLAAC and the RA mode is None.
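For reference, a subnet like this would typically be created along these lines (a sketch based on the `subnet show` output above; the network name `Flat1` is my assumption, inferred from the subnet name):

```
# ipv6-ra-mode is deliberately left unset: RAs are expected to come
# from the upstream Cisco Nexus switches, not from Neutron.
openstack subnet create Flat1-subnet-v6 \
  --network Flat1 \
  --ip-version 6 \
  --subnet-range 2001:738:0:527::/64 \
  --gateway 2001:738:0:527::1 \
  --ipv6-address-mode slaac
```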

Checking from the network node, we see the qrouter namespace:

[PROD][root(net1:0)] </home/ocadmin> ip netns exec 
qrouter-4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
35: ha-5dfb8647-f7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:1c:4d:8d brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.3/18 brd 169.254.255.255 scope global ha-5dfb8647-f7
       valid_lft forever preferred_lft forever
    inet 169.254.0.162/24 scope global ha-5dfb8647-f7
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe1c:4d8d/64 scope link
       valid_lft forever preferred_lft forever
36: qr-a6d7ceab-80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:a1:7e:69 brd ff:ff:ff:ff:ff:ff
    inet 193.224.218.251/24 scope global qr-a6d7ceab-80
       valid_lft forever preferred_lft forever
    inet6 2001:738:0:527:f816:3eff:fea1:7e69/64 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fea1:7e69/64 scope link nodad
       valid_lft forever preferred_lft forever

If I check the running processes on our net1 node, I get this:

[PROD][root(net1:0)] </home/ocadmin> ps aux |grep radvd |grep 
4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1
neutron  32540  0.0  0.0  19604  2372 ?        Ss   júl02   0:05 radvd -C 
/var/lib/neutron/ra/4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1.radvd.conf -p 
/var/lib/neutron/external/pids/4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1.pid.radvd 
-m syslog -u neutron


The radvd config for this router:
[PROD][root(net1:0)] </home/ocadmin> cat 
/var/lib/neutron/ra/4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1.radvd.conf
interface qr-a6d7ceab-80
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1500;
};
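Note that even without a prefix stanza, `AdvSendAdvert on` makes radvd send RAs with a non-zero router lifetime, which is enough for a guest to install a default route via the advertising interface. For comparison, when ipv6_ra_mode is also set to slaac, I would expect Neutron to render a prefix stanza as well, roughly like this (a hand-written sketch of radvd.conf syntax, not taken from a live node):

```
interface qr-a6d7ceab-80
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1500;
   prefix 2001:738:0:527::/64
   {
      AdvOnLink on;
      AdvAutonomous on;
   };
};
```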

If I spin up an instance, I see this:

debian@test:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP 
group default qlen 1000
    link/ether fa:16:3e:71:ca:8d brd ff:ff:ff:ff:ff:ff
    inet 193.224.218.9/24 brd 193.224.218.255 scope global dynamic eth0
       valid_lft 86353sec preferred_lft 86353sec
    inet6 2001:738:0:527:f816:3eff:fe71:ca8d/64 scope global dynamic mngtmpaddr
       valid_lft 2591994sec preferred_lft 604794sec
    inet6 fe80::f816:3eff:fe71:ca8d/64 scope link
       valid_lft forever preferred_lft forever
debian@test:~$ ip -6 route
::1 dev lo proto kernel metric 256 pref medium
2001:738:0:527::/64 dev eth0 proto kernel metric 256 expires 2591990sec pref 
medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::f816:3eff:fea1:7e69 dev eth0 proto ra metric 1024 expires 
251sec hoplimit 64 pref medium
default via fe80::5:73ff:fea0:2cf dev eth0 proto ra metric 1024 expires 1790sec 
hoplimit 64 pref medium


As you can see, I've got two default routes, and the first one (via the
qrouter's link-local address, fe80::f816:3eff:fea1:7e69) is not meant
to be there.
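A quick way to confirm the duplicate default route from inside a guest is to parse the `ip -6 route` output; a minimal sketch (the helper name is mine, not part of any Neutron tooling):

```python
import re

def default_route_gateways(ip6_route_output):
    """Return the gateway of every IPv6 default route found in the
    output of `ip -6 route` (one route per line)."""
    gateways = []
    for line in ip6_route_output.splitlines():
        match = re.match(r"default via (\S+)", line.strip())
        if match:
            gateways.append(match.group(1))
    return gateways

# Route table copied from the instance above: one default route from
# the Neutron qrouter, one from the upstream switch.
routes = """\
default via fe80::f816:3eff:fea1:7e69 dev eth0 proto ra metric 1024
default via fe80::5:73ff:fea0:2cf dev eth0 proto ra metric 1024
"""
print(default_route_gateways(routes))
# → ['fe80::f816:3eff:fea1:7e69', 'fe80::5:73ff:fea0:2cf']
```

Anything more than one entry here means two routers are advertising themselves on the link.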

Could you point out something I missed, or is there some kind of bug
that causes this?

Thanks:
 Peter ERDOSI (Fazy)

** Affects: neutron
     Importance: Undecided
         Status: New


** Tags: ipv6 ra-mode

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888256

Title:
  Neutron starts radvd and messes up the routing table when
  ipv6_ra_mode is not set and ipv6_address_mode=slaac

Status in neutron:
  New

