[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2020-08-03 Thread James Page
** Changed in: charm-neutron-openvswitch
Milestone: 20.08 => None

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833713

Title:
  Metadata is broken with dpdk bonding, jumbo frames and metadata from
  qdhcp

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2020-07-22 Thread James Page
I think this is related to bug 1831935, where the recommendation is to
disable the use of veth in DPDK deployments to avoid the checksumming
issues - see the 'ovs-use-veth' configuration option. This is a breaking
change, but the charm should stop you from doing anything that will
break things.

That said, there are in-flight fixes to disable checksumming via Neutron
when veth is in use, to ensure that either configuration works.
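
For a charm deployment, disabling veth amounts to something like the
following sketch (the application name 'neutron-openvswitch' is an
assumption - check 'juju status' for the name used in your model):

  juju config neutron-openvswitch ovs-use-veth=false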

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2020-07-22 Thread James Page
bug 1832021 looks very similar, and some changes landed across releases
this month to disable checksum calculations for veth interfaces when in
use with DPDK (stein/train/ussuri/master branches).
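
For illustration, the manual equivalent of that mitigation is to turn
off transmit checksum offload on the device inside the namespace - a
sketch only, with placeholder names, not the actual patch:

  ip netns exec qdhcp-<network-uuid> ethtool -K <tap-or-veth-device> tx off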

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2020-06-04 Thread Andrea Ieri
Marking as field-high as this now affects a live cloud, and the
workaround (lowering the MTU within the qdhcp namespaces) isn't fully
persistent.
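
For reference, the workaround amounts to running something like the
following in each affected namespace (placeholder names; the setting is
lost whenever the namespace or its device is recreated, hence not fully
persistent):

  ip netns exec qdhcp-<network-uuid> ip link set dev <tap-device> mtu 1500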

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2020-05-21 Thread David Ames
** Changed in: charm-neutron-openvswitch
Milestone: 20.05 => 20.08

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2020-03-03 Thread Andreas Hasenack
Does anybody know if upstream responded elsewhere?
https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html
shows no thread reply.

Wouldn't it be best to open a bug instead?

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2020-03-02 Thread James Page
** Changed in: charm-neutron-openvswitch
Milestone: 20.01 => 20.05

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-10-24 Thread David Ames
** Changed in: charm-neutron-openvswitch
Milestone: 19.10 => 20.01

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-25 Thread Andre Ruiz
I'm hitting this bug in a client installation (Bionic / Queens). I just
spent a few hours debugging and in the end came to exactly the same
tests and conclusion.

We are using DPDK, a bond for DPDK, and isolated metadata (these are
provider-only networks). I *can* send data into the netns at sizes up to
9000 bytes and it arrives intact. Only packets going out are truncated,
to a little more than 1500 bytes (IIRC about 1504; ping works up to
-s 1476).
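
As a sketch of probing that boundary (placeholder names; -M do forbids
fragmentation so oversized packets fail immediately):

  ip netns exec <qdhcp-namespace> ping -M do -s 1476 <remote-host>   # works
  ip netns exec <qdhcp-namespace> ping -M do -s 1477 <remote-host>   # fails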

I checked every MTU on every port in the path, from the netns to OVS to
DPDK to the external switch, and back through the other node's
DPDK/OVS/virtual machine, and none of them seems wrong. Tcpdump on both
the netns and the target VM shows the packet leaving intact in the netns
but arriving truncated at the destination.

I manually set all MTUs in the qdhcp namespaces to 1500 and the problem
is gone. Not sure about any consequences of this, though.

The funny thing is that this problem did not appear with charm
neutron-openvswitch-next-359 but appeared after an upgrade to
neutron-openvswitch-next-367 *and* a compute host reboot. The reason we
are using these charms is related to other bugs that were fixed there,
and the upgrade was to finally fix one last bug about TCP checksum
corruption inside the netns (all this is explained in
https://bugs.launchpad.net/neutron/+bug/1832021/ ).

Well, at least the client deployed more than 70 VMs without problems. I
upgraded the charm about two weeks ago (things apparently stayed OK),
and a few days ago I rebooted some compute nodes because of an unrelated
problem and this behavior appeared.

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-18 Thread James Page
I'm marking this bug as triaged - we can reproduce the issue outside of
OpenStack.

** Changed in: charm-neutron-openvswitch
   Status: New => Triaged

** Changed in: dpdk (Ubuntu)
   Status: New => Triaged

** Changed in: charm-neutron-openvswitch
   Importance: Undecided => High

** Changed in: dpdk (Ubuntu)
   Importance: Undecided => High

** Changed in: charm-neutron-openvswitch
Milestone: None => 19.10

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-17 Thread Andreas Hasenack
No reply yet in the ML.

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-15 Thread Liam Young
Hi Christian,
Thanks for your comments. I'm sure you spotted it, but just to make it
clear: the issue occurs with both bonded and unbonded DPDK interfaces.
I've emailed upstream; see *1.

Thanks
Liam


*1 https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-10 Thread Christian Ehrhardt 
Hi Liam,
all I can say about this area is that I have heard from several others
that MTU+bond was very broken in the past - I haven't heard an update on
that for a while. Thanks for trying 18.11.2 in that regard.

I've read through the case above, but nothing obvious came up for me to
try or fix.

I think your intention (from your mail) to write to upstream about this
is absolutely the right approach. Please let me know here if you get
anything back that would be worth considering for inclusion.

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-09 Thread Liam Young
** Changed in: dpdk (Ubuntu)
   Status: Invalid => New

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic

If a server has an OVS bridge with a DPDK device for external network
access and a network namespace attached, then sending data out of the
namespace fails if jumbo frames are enabled.
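
For context, the state shown under 'Setup:' below can be reproduced with
something like the following (a sketch based on standard OVS-DPDK usage,
not the exact commands from this report; the dpdk-init, datapath_type
and mtu_request settings are assumptions):

  ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
  ovs-vsctl add-br br-test -- set bridge br-test datapath_type=netdev
  ovs-vsctl add-port br-test dpdk-nic1 -- set Interface dpdk-nic1 \
      type=dpdk options:dpdk-devargs=0000:03:00.0 mtu_request=9000
  ovs-vsctl add-port br-test tap1 -- set Interface tap1 type=internal
  ip netns add ns1
  ip link set tap1 netns ns1
  ip netns exec ns1 ip link set dev tap1 mtu 9000
  ip netns exec ns1 ip addr add 10.246.112.101/21 dev tap1
  ip netns exec ns1 ip link set dev tap1 up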

Setup:

root@node-licetus:~# uname -r
5.0.0-20-generic

root@node-licetus:~# ovs-vsctl show
523eab62-8d03-4445-a7ba-7570f5027ff6
Bridge br-test
Port "tap1"
Interface "tap1"
type: internal
Port br-test
Interface br-test
type: internal
Port "dpdk-nic1"
Interface "dpdk-nic1"
type: dpdk
options: {dpdk-devargs="0000:03:00.0"}
ovs_version: "2.11.0"

root@node-licetus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000

root@node-licetus:~# ip netns exec ns1 ip addr show tap1
12: tap1:  mtu 9000 qdisc fq_codel 
state UNKNOWN group default qlen 1000
link/ether 0a:dd:76:38:52:54 brd ff:ff:ff:ff:ff:ff
inet 10.246.112.101/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::8dd:76ff:fe38:5254/64 scope link 
   valid_lft forever preferred_lft forever


* Using iperf to send data out of the netns fails:

root@node-licetus:~# ip netns exec ns1 iperf -c 10.246.114.29

Client connecting to 10.246.114.29, TCP port 5001
TCP window size:  325 KByte (default)

[  3] local 10.246.112.101 port 51590 connected with 10.246.114.29 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.3 sec   323 KBytes   257 Kbits/sec

root@node-hippalus:~# iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

root@node-hippalus:~# 

* Switching the direction of flow and sending data into the namespace
works:

root@node-licetus:~# ip netns exec ns1 iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

[  4] local 10.246.112.101 port 5001 connected with 10.246.114.29 port 59454
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.0 sec  6.06 GBytes  5.20 Gbits/sec
[  4] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)

root@node-hippalus:~# iperf -c 10.246.112.101

Client connecting to 10.246.112.101, TCP port 5001
TCP window size:  942 KByte (default)

[  3] local 10.246.114.29 port 59454 connected with 10.246.112.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  6.06 GBytes  5.20 Gbits/sec

* Using iperf to send data out of the netns after dropping tap mtu
works:


root@node-licetus:~# ip netns exec ns1 ip link set dev tap1 mtu 1500
root@node-licetus:~# ip netns exec ns1 iperf -c 10.246.114.29

Client connecting to 10.246.114.29, TCP port 5001
TCP window size:  845 KByte (default)

[  3] local 10.246.112.101 port 51594 connected with 10.246.114.29 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   508 MBytes   426 Mbits/sec

root@node-hippalus:~# iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

[  4] local 10.246.114.29 port 5001 connected with 10.246.112.101 port 51594
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.1 sec   508 MBytes   424 Mbits/sec
[  4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
Ubuntu: eoan
DPDK pkg: 18.11.1-3 
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic

If two servers each have an OVS bridge with a DPDK device for external
network access and a network namespace attached, then communication
between taps in the namespaces fails if jumbo frames are enabled. If on
one of the servers the external NIC is switched so that it is no longer
managed by DPDK, then service is restored.

Server 1:

root@node-licetus:~# ovs-vsctl show
1fed66c2-b7af-477d-b035-0e1d78451f6e
Bridge br-test
Port br-test
Interface br-test
type: internal
Port "tap1"
Interface "tap1"
type: internal
Port "dpdk-nic1"
Interface "dpdk-nic1"
type: dpdk
options: {dpdk-devargs="0000:03:00.0"}
ovs_version: "2.11.0"

root@node-licetus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000

root@node-licetus:~# ip netns exec ns1 ip addr show tap1
11: tap1:  mtu 9000 qdisc fq_codel 
state UNKNOWN group default qlen 1000
link/ether 56:b1:9c:a3:de:81 brd ff:ff:ff:ff:ff:ff
inet 10.246.112.101/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::54b1:9cff:fea3:de81/64 scope link 
   valid_lft forever preferred_lft forever

Server 2:

root@node-hippalus:~# ovs-vsctl show
cd383272-d341-4be8-b2ab-17ea8cb63ae6
Bridge br-test
Port "dpdk-nic1"
Interface "dpdk-nic1"
type: dpdk
options: {dpdk-devargs="0000:03:00.0"}
Port br-test
Interface br-test
type: internal
Port "tap1"
Interface "tap1"
type: internal
ovs_version: "2.11.0"

root@node-hippalus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000

root@node-hippalus:~# ip netns exec ns1 ip addr show tap1
11: tap1:  mtu 9000 qdisc fq_codel 
state UNKNOWN group default qlen 1000
link/ether a6:f2:d8:59:d5:7d brd ff:ff:ff:ff:ff:ff
inet 10.246.112.102/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::a4f2:d8ff:fe59:d57d/64 scope link 
   valid_lft forever preferred_lft forever


Test:

root@node-licetus:~# ip netns exec ns1 iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)


root@node-hippalus:~# ip netns exec ns1 iperf -c 10.246.112.101

Client connecting to 10.246.112.101, TCP port 5001
TCP window size:  325 KByte (default)

[  3] local 10.246.112.102 port 52848 connected with 10.246.112.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.4 sec   323 KBytes   256 Kbits/sec


* If the mtu of either tap device is dropped to 1500 then the tests pass:

root@node-licetus:~# ip netns exec ns1 ip link set dev tap1 mtu 9000
root@node-licetus:~# ip netns exec ns1 iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

[  4] local 10.246.112.101 port 5001 connected with 10.246.112.102 port 52850
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.1 sec   502 MBytes   418 Mbits/sec
[  4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

root@node-hippalus:~# ip netns exec ns1 ip link set dev tap1 mtu 1500
root@node-hippalus:~# ip netns exec ns1 iperf -c 10.246.112.101

Client connecting to 10.246.112.101, TCP port 5001
TCP window size:  748 KByte (default)

[  3] local 10.246.112.102 port 52850 connected with 10.246.112.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   502 MBytes   420 Mbits/sec


* If on server 2 the dpdk device is replaced with the same physical
  device, but no longer managed by DPDK (jumbo frames still enabled),
  then the tests pass:

root@node-hippalus:~# ls -dl 
/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/net/enp3s0f0
drwxr-xr-x 6 root root 0 Jul  8 14:04 
/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/net/enp3s0f0

root@node-hippalus:~# ovs-vsctl show
cd383272-d341-4be8-b2ab-17ea8cb63ae6
Bridge br-test
Port "tap1"
Interface "tap1"
type: internal
Port br-test
Interface br-test
type: internal
Port "enp3s0f0"
Interface "enp3s0f0"
ovs_version: "2.11.0"

root@node-hippalus:~# ip netns exec ns1 ip addr show tap1
10: tap1:  mtu 9000 qdisc noqueue state 
UNKNOWN group default qlen 1000
link/ether ba:39:55:e2:b8:81 brd ff:ff:ff:ff:ff:ff
inet 10.246.112.102/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::b839:55ff:fee2:b881/64 scope link 
   valid_lft forever preferred_lft forever

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
At some point, when I was attempting to simplify the test case, I
dropped setting the MTU on the dpdk devices via OVS, so the above test
is invalid. I've marked the bug against dpdk as invalid while I redo the
tests.
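
(For reference: the MTU of a DPDK port has to be set through OVS rather
than 'ip link', e.g. with something like the following - 'mtu_request'
is the standard OVS column for this, though whether these exact commands
were used here is an assumption:)

  ovs-vsctl set Interface dpdk-nic1 mtu_request=9000
  ovs-vsctl get Interface dpdk-nic1 mtu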

** Changed in: dpdk (Ubuntu)
   Status: New => Invalid

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
Given the above, I am going to mark this as affecting the dpdk package
rather than the charm.

** Also affects: dpdk (Ubuntu)
   Importance: Undecided
   Status: New
