apport information
** Attachment added: "ProcCpuinfoMinimal.txt"
https://bugs.launchpad.net/bugs/1891259/+attachment/5402960/+files/ProcCpuinfoMinimal.txt
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
apport information
** Tags added: apport-collected bionic uec-images
** Description changed:
Simpler reproducer:
$ snap version
snap    2.42.1+18.04
snapd   2.42.1+18.04
series  16
ubuntu  18.04
kernel  4.15.0-91-generic
$ snap info --verbose etcd | grep base:
base:
snap ack vs --dangerous doesn't make a difference in this case I
believe.
$ snap download etcd
$ snap download core18
$ sudo iptables -A OUTPUT -p tcp --destination-port 80 -j REJECT --reject-with tcp-reset
$ sudo iptables -A OUTPUT -p tcp --destination-port 443 -j REJECT --reject-with tcp-reset
Unsubscribing ~field-high for the time being. I've been told that the air-gapped
mode of the Snap Store Proxy, which is currently in a closed internal beta, might
be a better solution in the long term, so I will try it out.
https://docs.ubuntu.com/snap-store-proxy/en/airgap
--
In terms of the etcd snap, the bump of the base image happened around April
according to the git history:
https://github.com/juju-solutions/etcd-snaps/commit/3fa8df6aaacb32add9fb40ac297894765f6b0746#diff-5fde7a6d86053f0e1d88c0a2a238941f
It's not a regression strictly speaking, but the bump seems
Subscribing ~field-high.
This is breaking a documented behavior around local resource upload
through the charm, and it affects both OpenStack and Kubernetes
deployments with the air-gapped requirement because of Vault's
dependency.
A quick and dirty workaround is to copy the core snap into the
It looks like the snap requires *both* core and core18 preinstalled to
skip the prerequisite checks for the etcd snap. However, it doesn't make
sense since the snap explicitly states it depends on core18.
$ snap info --verbose etcd | grep base:
base: core18
Also, in terms of the etcd charm, it's
** Summary changed:
- charm fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments
+ snap installation fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments
** Description changed:
+ Simpler reproducer:
+
+ $ snap version
+ snap
The current version in the archive is 2.3~rc2-1. It would be nice to have
an officially tagged release instead of a release candidate.
--
https://bugs.launchpad.net/bugs/1871283
Title:
Sync
Public bug reported:
Please sync resource-agents-paf 2.3.0-1 (universe) from Debian unstable
(main)
Changelog entries since current focal version 2.3~rc2-1:
resource-agents-paf (2.3.0-1) unstable; urgency=low
* 2.3.0 major release
-- Jehan-Guillaume (ioguix) de Rorthais Mon, 09 Mar
2020
Here you are:
$ grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/itlb_multihit:KVM: Mitigation: Split
huge pages
/sys/devices/system/cpu/vulnerabilities/l1tf:Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable
> @Nobuto - has your system any of the above kernel parameters set
manually?
No, we don't have any flags in kernel parameters related to tsx or
similar.
FWIW, I haven't tested any older kernel to check if those flags are
available. But we are using Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz.
--
The general protection fault is reproducible with the current 5.3 kernel
as follows by creating 10 SR-IOV enabled VMs with OpenStack and deleting
5 VMs sequentially. After updating it to the -proposed one,
linux-image-5.3.0-25-generic 5.3.0-25.27, no such general protection
fault happened with the same operations.
I'm not sure I'm following the discussion here, but I see hle and rtm
flags with the latest security update for the bionic GA kernel. So it
looks like the issue is no longer reproducible. Am I missing something?
$ dpkg -l | grep linux-image
ii linux-image-4.15.0-72-generic 4.15.0-72.81
Hi Christian,
Thank you for the detailed response. Just to clarify, I'm not actually
pursuing nested KVM here, but a consistent flag set across multiple
hosts so that live migration of first-level KVM VMs won't fail with:
> [instance: afd27b8f-30df-4eab-b18a-5c269ce97d06] Live
Public bug reported:
Ubuntu bionic
qemu-system-x86: 1:2.11+dfsg-1ubuntu7.20
When installing qemu-system-x86, nested KVM will be enabled by default
thanks to the file offered by the package:
/etc/modprobe.d/qemu-system-x86.conf:options kvm_intel nested=1
and postinst
[eoan]
w/o -proposed
multipass@eoan:~$ dpkg -L linux-modules-5.3.0-23-generic | egrep 'i40e|iavf'
/lib/modules/5.3.0-23-generic/kernel/drivers/net/ethernet/intel/i40e
/lib/modules/5.3.0-23-generic/kernel/drivers/net/ethernet/intel/i40e/i40e.ko
multipass@eoan:~$ modinfo iavf
modinfo: ERROR:
[disco]
w/o -proposed
multipass@disco:~$ dpkg -L linux-modules-5.0.0-36-generic | egrep 'i40e|iavf'
/lib/modules/5.0.0-36-generic/kernel/drivers/net/ethernet/intel/i40e
/lib/modules/5.0.0-36-generic/kernel/drivers/net/ethernet/intel/i40e/i40e.ko
multipass@disco:~$ modinfo iavf
modinfo: ERROR:
[bionic]
w/o -proposed
ubuntu@bionic:~$ dpkg -L linux-modules-4.15.0-70-generic | grep i40e
/lib/modules/4.15.0-70-generic/kernel/drivers/net/ethernet/intel/i40e
/lib/modules/4.15.0-70-generic/kernel/drivers/net/ethernet/intel/i40e/i40e.ko
ubuntu@bionic:~$ modinfo i40evf
modinfo: ERROR: Module
Marking it back to Fix Committed as I got a clarification.
https://lists.ubuntu.com/archives/kernel-team/2019-November/105480.html
** Changed in: linux (Ubuntu Focal)
Status: In Progress => Fix Committed
** Changed in: livecd-rootfs (Ubuntu Bionic)
Status: New => Invalid
**
I'm just not sure how it can be merged to focal, so I raised a question:
https://lists.ubuntu.com/archives/kernel-team/2019-November/105393.html
--
https://bugs.launchpad.net/bugs/1848481
** Changed in: linux (Ubuntu Disco)
Assignee: gerald.yang (gerald-yang-tw) => Nobuto Murata (nobuto)
** Changed in: linux (Ubuntu Eoan)
Assignee: gerald.yang (gerald-yang-tw) => Nobuto Murata (nobuto)
** Changed in: linux (Ubuntu Focal)
Status: Fix Committed => In
When can we expect this to be SRUed?
** Also affects: curtin (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1844543
Title:
timeout removing
I've filed a follow-up bug of neutron-openvswitch on kernel upgrade:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1851764
--
https://bugs.launchpad.net/bugs/1834213
Title:
SRU request to D/E/F:
https://lists.ubuntu.com/archives/kernel-team/2019-November/105221.html
--
** No longer affects: livecd-rootfs (Ubuntu)
--
https://bugs.launchpad.net/bugs/1848481
Title:
cloudimg: no iavf/i40evf module so no network available with SR-IOV
enabled cloud
Posted again with a proper subscription instead of guest post:
https://lists.ubuntu.com/archives/kernel-team/2019-October/104898.html
--
Have sent to kernel-t...@lists.ubuntu.com
** Patch added: "0001-UBUNTU-Packaging-include-iavf-i40evf-in-generic.patch"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1848481/+attachment/5298618/+files/0001-UBUNTU-Packaging-include-iavf-i40evf-in-generic.patch
--
** Changed in: linux (Ubuntu)
Status: Confirmed => In Progress
** Changed in: linux (Ubuntu)
Assignee: (unassigned) => Nobuto Murata (nobuto)
--
Hmm, ixgbevf is in linux-modules-*-generic on bionic instead of modules-
extra. So I'd like i40evf in the same modules-generic set.
linux-modules-4.15.0-65-generic:
/lib/modules/4.15.0-65-generic/kernel/drivers/net/ethernet/intel/ixgbevf/ixgbevf.ko
--
** Summary changed:
- cloudimg: no i40evf module is available so no network with SR-IOV enabled cloud
+ cloudimg: no i40evf module is available so no network available with SR-IOV enabled cloud
** Summary changed:
- cloudimg: no i40evf module is available so no network available with SR-IOV
*** This bug is a duplicate of bug 1848481 ***
https://bugs.launchpad.net/bugs/1848481
** Package changed: compiz-plugins-main (Ubuntu) => linux (Ubuntu)
** This bug has been marked a duplicate of bug 1848481
cloudimg: no i40evf module is available so no network with SR-IOV enabled
cloud
Public bug reported:
For example with bionic, the i40evf module is in
linux-modules-extra-4.15*-generic. However, cloudimg doesn't have
linux-modules-extra seeded:
$ curl -s http://cloud-images.ubuntu.com/releases/bionic/release/ubuntu-18.04-server-cloudimg-amd64.manifest | grep linux-
linux-base
** Changed in: deja-dup
Status: Triaged => Invalid
--
https://bugs.launchpad.net/bugs/1847706
Title:
Deja Dup still tells python-gi (python 2) is required while duplicity
is now
FWIW, "python-gi" is probably from:
debian/rules
override_dh_auto_configure:
dh_auto_configure -- \
--libexecdir=/usr/lib \
-Dgvfs_pkgs=gvfs-backends,python-gi,gir1.2-glib-2.0 \
-Dboto_pkgs=python-boto \
** Attachment added: "Screenshot from 2019-10-11 12-35-40.png"
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1847706/+attachment/5296436/+files/Screenshot%20from%202019-10-11%2012-35-40.png
** Also affects: deja-dup
Importance: Undecided
Status: New
--
Public bug reported:
Please see the attached screenshot. When running a backup operation
against a remote server over SSH, Deja Dup stops, saying the "python-gi"
package is required.
However, duplicity is now updated to Python 3, so it doesn't make sense
to install a Python 2 based package. The
It looks like images.maas.io has an older image with cloud-init
19.2-24-ge7881d5c-0ubuntu1~18.04.1 (the version before the regression
was introduced) as 20191004 (the latest as of right now).
$ curl -s https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20191003/squashfs.manifest | grep -w
** Also affects: cloud-init (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1846535
Title:
cloud-init 19.2.36 fails with python exception "Not all
** Patch added: "openscap_1.2.16-1_1.2.16-2.debdiff"
https://bugs.launchpad.net/ubuntu/+source/openscap/+bug/1845216/+attachment/5292704/+files/openscap_1.2.16-1_1.2.16-2.debdiff
--
The dkms module, 7906-0ubuntu1 in the PPA, seems incompatible with Ubuntu
bionic. The build fails with two unrecognized flags, which had been added here:
https://code.launchpad.net/~canonical-hwe-team/+git/backport-iwlwifi-dkms/+merge/370163#diff-line-368
cc -Wall -Wmissing-prototypes
Public bug reported:
/usr/share/openscap/cpe/openscap-cpe-dict.xml is included in later versions
such as 1.2.16-2:
https://packages.debian.org/buster/amd64/libopenscap8/filelist
How to reproduce with Ubuntu 18.04 LTS:
$ sudo apt install libopenscap8 ssg-debderived
$ oscap info
FWIW, I was able to use the WPS push button with 19.10 as of today. There was a
message, "Alternatively you can connect by pushing the “WPS” button on your
router", and it just worked for me. The UI patch seems to have been merged
upstream 7 months ago.
Indeed, the upstream of the Takao fonts, the IPA fonts, has a new
release, so we need to release a new one too.
https://ipafont.ipa.go.jp/
> 2019-04-26
>
> IPAex Fonts (Ver.004.01) were released. Detail information is shown in
> release notes.
>
> Note: SQUARE ERA NAME REIWA(U+32FF) has been added.
Subscribing ~field-high because:
- it's blocking one of the user acceptance testing
- upgrading Queens to Rocky using charms is a reasonable expectation
--
Public bug reported:
might be related:
https://bugs.launchpad.net/cloud-archive/+bug/1681758
https://bugs.launchpad.net/nova-lxd/+bug/1808388
When nova-lxd is deployed with firewall-driver=openvswitch, launching
instance fails with:
2019-02-14 15:28:20.907 21979 INFO os_vif
bundle with bionic-queens
One instance succeeded with the bundle, then
$ juju config neutron-openvswitch firewall-driver=openvswitch
has been set.
After Juju settles down, the next instance was launched, but failed with
the error.
** Attachment added: "nova-lxd-minimal-bundle_dvr.yaml"
** Attachment added:
"juju-crashdump-85ddd689-1901-411f-a54c-5c06de84bfa3.tar.xz"
https://bugs.launchpad.net/ubuntu/+source/nova-lxd/+bug/1815922/+attachment/5238638/+files/juju-crashdump-85ddd689-1901-411f-a54c-5c06de84bfa3.tar.xz
--
Yes, that totally depends on the definition of the option "--supported
list of all supported stable versions". I was confused initially, but if
that's by design, we don't need to change the behavior. Sorry for the
noise.
--
I think we need a condition "date >= x.release" like the attached diff
(it's only for python, though).
** Patch added: "date_x_release.diff"
https://bugs.launchpad.net/ubuntu/+source/distro-info/+bug/1727751/+attachment/5157008/+files/date_x_release.diff
--
** Description changed:
The current development series (cosmic as of today) should not be listed
up as "supported stable versions".
+
+ $ ubuntu-distro-info --date 2018-06-27 --devel -f
+ Ubuntu 18.10 "Cosmic Cuttlefish"
+
+ $ ubuntu-distro-info --date 2018-06-27 --supported -f
+ Ubuntu
I didn't mean it that way. Let me set it back to the distro-info package
and update the title and description.
** Package changed: distro-info-data (Ubuntu) => distro-info (Ubuntu)
** Changed in: distro-info (Ubuntu)
Status: Fix Released => New
** Summary changed:
- ubuntu-distro-info shows
The patch has been backported to stable/18.02 so marking it as Fix
Released.
** Changed in: charm-neutron-gateway
Status: Fix Committed => Fix Released
--
So, /var/log/neutron/neutron-lbaasv2-agent.log had:
"WARNING neutron_lbaas.drivers.haproxy.namespace_driver [-] Error while
connecting to stats socket: [Errno 13] EACCES: error: [Errno 13] EACCES"
with aa-profile-mode=complain.
After setting aa-profile-mode=disabled (juju config --reset), it
I may be completely wrong, but one possible reason to cause 503 from
haproxy is AppArmor.
@Xav, what happens if you disable apparmor, i.e. aa-disable
/usr/bin/neutron-lbaasv2-agent?
As you see in an unrelated bug[1], the apparmor profile installed by
neutron-gateway charm blocks lbaasv2 if it's
#146~lp1753662ThreeCommits is better to some degree (the failure rate
drops from around 40% to 20%).
Failure rate: 187/470 (39.8%), 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2
16:08:24 UTC 2018
Failure rate: 87/222 (39.2%), 4.4.0-120-generic #144-Ubuntu SMP Thu Apr 5
14:11:49 UTC 2018
Failure rate: 138/712
Just for the record, up-to-date numbers after the weekend.
Failure rate: 167/422 (39.6%), 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2
16:08:24 UTC 2018
Failure rate: 87/222 (39.2%), 4.4.0-120-generic #144-Ubuntu SMP Thu Apr 5
14:11:49 UTC 2018
Failure rate: 117/726 (16.1%),
Ok, we have some numbers with the new host.
Failure rate: 45/112 (40.2%), 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2
16:08:24 UTC 2018
Failure rate: 87/222 (39.2%), 4.4.0-120-generic #144-Ubuntu SMP Thu Apr 5
14:11:49 UTC 2018
Failure rate: 117/726 (16.1%), 4.4.0-040400-generic
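As a side note, the percentages in these failure-rate lines are simply failed boots over total boots; a throwaway shell helper (the function name `rate` is mine, not part of the test scripts attached to the bug) reproduces the formatting:

```shell
# Hypothetical helper (the name "rate" is mine) reproducing the
# "Failure rate: N/M (P%)" lines from raw counts of failed boots
# over total boots.
rate() {
    awk -v f="$1" -v t="$2" \
        'BEGIN { printf "Failure rate: %d/%d (%.1f%%)\n", f, t, 100 * f / t }'
}
rate 45 112   # prints "Failure rate: 45/112 (40.2%)"
```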
Just for the record, I'm using the attached rc.local for testing.
** Attachment added: "rc.local"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1753662/+attachment/5116629/+files/rc.local
--
I ran HWE 4.13 just to make sure the result is the same as the previous
host. And as we confirmed before, the issue is not reproducible with HWE
4.13.
[stock xenial]
Failure rate: 117/726 (16.1%), 4.4.0-040400-generic #201803261439 SMP Mon Mar
26 14:43:35 UTC 2018
[HWE 4.13]
Failure rate: 0/407 (0.0%), 4.13.0-38-generic #43~16.04.1-Ubuntu SMP Wed Mar 14
17:48:43 UTC 2018
> We should first confirm that
FWIW, I tried PCI hot-plugging as another way to get faster iterations
without rebooting.
https://paste.ubuntu.com/p/qDVkMcTYPQ/
However, the issue wasn't reproducible with hot-plugging. Rebooting is
the easiest reproduction so far.
--
4.4 kernel using the Artful configs didn't make much difference.
Failure rate: 117/726 (16.1%), 4.4.0-040400-generic #201803261439 SMP
Mon Mar 26 14:43:35 UTC 2018
I will let stock 4.4 and 4.13 HWE run just to confirm the occurrence
rate with this host.
--
FWIW, kernel trace happens with the kernel in:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1753662/comments/77
But I will let it run anyway since I'm not sure whether it affects the
testing or not.
[5.999557] rtc_cmos 00:00: setting system clock to 2018-04-11 15:52:23 UTC
Finally got a machine up and running. Will resume testing shortly.
--
https://bugs.launchpad.net/bugs/1753662
Title:
[i40e] LACP bonding start up race conditions
The new environment is not fully up yet to test. ETA would be by the end
of this week.
--
4.12-rc4 kernel with Xenial configs looks bad.
[xenial configs]
v4.12-rc3 #201803161156 - relatively bad (36 of 249 - 14.5%)
v4.12.0-041200rc3 #201803191316 - relatively bad (21 of 150 - 14.0%)
ff5a20169b98d84ad8d7f99f27c5ebbb008204d6
v4.12.0-041200rc3 #20180324 - relatively bad (60 of 499 -
BTW, have we set the baseline of "good" in this bisection with xenial
configs?
> 4.13.0-36(xenial HW) - good (0 of 119 - 0%)
Does HWE kernel man with xenial configs? Or was it built with the source
release config i.e. artful?
--
Correction: Does HWE kernel mean it's with xenial configs? Or was it
built with the source release config i.e. artful?
--
55cbdaf6399de16b61d40d49b6c8bb739a877dea looks bad.
[xenial configs]
v4.12-rc3 #201803161156 - relatively bad (36 of 249 - 14.5%)
v4.12.0-041200rc3 #201803191316 - relatively bad (21 of 150 - 14.0%)
ff5a20169b98d84ad8d7f99f27c5ebbb008204d6
v4.12.0-041200rc3 #20180324 - relatively bad (60 of
I was pretty occupied today, so I'm going to test
55cbdaf6399de16b61d40d49b6c8bb739a877dea now and report back tomorrow
morning, my time.
[xenial configs]
v4.12-rc3 #201803161156 - relatively bad (36 of 249 - 14.5%)
v4.12.0-041200rc3 #201803191316 relatively bad (21 of 150 - 14.0%)
ea094f3c830a67f252677aacba5d04ebcf55c4d9 looks bad.
[xenial configs]
v4.12-rc3 #201803161156 - relatively bad (36 of 249 - 14.5%)
v4.12.0-041200rc3 #201803191316 relatively bad (21 of 150 - 14.0%)
ff5a20169b98d84ad8d7f99f27c5ebbb008204d6
v4.12.0-041200rc3 #20180324 relatively bad (6 of 56 -
ff5a20169b98d84ad8d7f99f27c5ebbb008204d6 looks bad.
[xenial configs]
v4.12-rc3 #201803161156 - relatively bad (36 of 249 - 14.5%)
v4.12.0-041200rc3 #201803191316 relatively bad (21 of 150 - 14.0%)
ff5a20169b98d84ad8d7f99f27c5ebbb008204d6
--
Ok, we see some differences with the three kernels. How do we want to
proceed from here?
v4.12-rc3 - bad (24 of 90 - 26.6%)
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.12-rc3/
v4.12-rc3 #201803151851 - relatively good (2 of 151 - 1.3%)
** Attachment added:
"bond_check_xenial_4.12.0-041200rc3-generic_201803151851.log"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1753662/+attachment/5081137/+files/bond_check_xenial_4.12.0-041200rc3-generic_201803151851.log
--
The new build of v4.12-rc3 is a good build (2 of 151).
4.12.0-041200rc3-generic #201803151851
v4.12-rc3 - bad (24 of 90 - 26.6%)
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.13-rc3/
v4.12-rc3 - relatively good (2 of 151 - 1.3%)
http://kernel.ubuntu.com/~jsalisbury/lp1753662/v4.12-rc3/
So
v4.12-rc4 is good, 1 of 146. Going to test v4.12-rc3.
--
> We may have went wrong somewhere in the bisect. However, just to be sure, I
> built a v4.12-rc4 test kernel. This kernel should be bad and contain the bug.
> If it does not, it may be due to the configs I'm using to build the test
> kernels.
I'm not following since I thought we tested that
** Attachment added: "bond_check_xenial_4.12.0-041200rc1_201803141835.log"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1753662/+attachment/5080093/+files/bond_check_xenial_4.12.0-041200rc1_201803141835.log
--
4681ee21d62cfed4364e09ec50ee8e88185dd628 looks good.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12.0-041200rc1 #201803141835 - relatively good (1 of 113 - 0.9%)
4681ee21d62cfed4364e09ec50ee8e88185dd628
171d8b9363725e122b164e6b9ef2acf2f751e387 looks good. The next test is
with 4681ee21d62cfed4364e09ec50ee8e88185dd628.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12.0-041200rc1 #201803141333 - relatively good (1 of 217 -
The test is still in progress, but so far
171d8b9363725e122b164e6b9ef2acf2f751e387 looks good (0 of 21). Since I
already downloaded the kernel locally, please go ahead and build the
next one. Thanks,
--
d38162e4b5c643733792f32be4ea107c831827b4 looks good.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12.0-041200rc1 #201803131457 - relatively good (1 of 93 - 1.1%)
d38162e4b5c643733792f32be4ea107c831827b4
v4.12.0-041200rc3
The test is still in progress, but so far
d38162e4b5c643733792f32be4ea107c831827b4 looks good (1 of 37). Since I
already downloaded the kernel locally, please go ahead and build the
next one. Thanks,
--
Oh wait,
> I built the next test kernel, up to the following commit:
> d38162e4b5c643733792f32be4ea107c831827b4
>
> The test kernel can be downloaded from:
> http://kernel.ubuntu.com/~jsalisbury/lp1753662
d38162e4b5c643733792f32be4ea107c831827b4 looks in-between v4.12-rc3 and rc4
which is
@Joseph,
Will do. Just as a possibility, I could build a kernel on the host if
that's helpful. Because the host is already reserved for this testing
and has hundreds of GBs of memory and many CPU cores. If you have a
pointer how to replicate your build process, that would be great.
--
25f480e89a022d382ddc5badc23b49426e89eabc looks good.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12.0-041200rc3 #201803121355 - relatively good (1 of 252 - 0.4%)
25f480e89a022d382ddc5badc23b49426e89eabc
400129f0a3ae989c30b37104bbc23b35c9d7a9a4 looks good.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12.0-041200rc3 #201803090724 - relatively good (2 of 77 - 2.6%)
400129f0a3ae989c30b37104bbc23b35c9d7a9a4
0bb230399fd337cc9a838d47a0c9ec3433aa612e seems good. I'm ready for the
next test.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12.0-041200rc3 #201803081620 - relatively good (1 of 36 - 2.8%)
** Attachment added: "bond_check_xenial_hwe_4.13_full.log"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1753662/+attachment/5073884/+files/bond_check_xenial_hwe_4.13_full.log
--
I ran xenial HWE overnight while sleeping; the result was 0/119. The next
test is with:
v4.12.0-041200rc3 #201803081620 - ? 0bb230399fd337cc9a838d47a0c9ec3433aa612e
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
4.12.0-041200rc3-generic #201803080803 looks good. Please proceed to the
next one. I will test it tomorrow my time, which would be 12 hours from
now.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12.0-041200rc3 - good (0
So far 0 of 6 with 4.12.0-041200rc3-generic #201803080803. But I will
keep it running for a while to see if it becomes close to 30% or 0%.
--
Ok, 25% - 30% seems to be the baseline. I'd like to make sure v4.13 is
really 0% with a longer-running test, but will do the bisection of
v4.12-rc3 and v4.12-rc4 first.
4.4.0-116(xenial) - bad (9 of 31 - 29.0%)
v4.12-rc2 - bad (15 of 53 - 28.3%)
v4.12-rc3 - bad (24 of 90 - 26.6%)
v4.12-rc4 - relatively
With the rc2 result, it looks like there is a noticeable difference
between v4.12-rc3 and v4.12-rc4.
@Joseph, can you please start looking into the diffs? I'm keeping one
dedicated node just for this testing, so I can run the same script one
by one for more bisections.
v4.12-rc1 - bad (3 of 3)
v4.12-rc2
Done with rc3; will test rc2 next.
v4.12-rc1 - bad (3 out of 3)
v4.12-rc3 - mixture result (24 out of 90)
v4.12-rc4 - relatively good (1 out of 70)
v4.12 - relatively good (5 out of 68)
v4.13 - good (0 out of 41)
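For reference, the back-and-forth above is essentially a manual `git bisect` between a release that shows the race and one that doesn't. A self-contained sketch (a throwaway repo stands in for the kernel tree, so the tags here are illustrative; since we are hunting for the commit that *fixed* the race, the usual good/bad terms are inverted via `--term-old`/`--term-new`):

```shell
# Throwaway repo standing in for the kernel tree; tag names mirror the
# ones quoted in this thread but are purely illustrative here.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5 6 7 8; do
    git commit -q --allow-empty -m "commit $i"
done
git tag v4.12-rc3 HEAD~7   # still shows the ~26% failure rate
git tag v4.12-rc4 HEAD     # failure rate drops to ~1%
# Hunting for the commit that *fixed* the race, so invert the terms:
git bisect start --term-old=broken --term-new=fixed
git bisect broken v4.12-rc3
git bisect fixed v4.12-rc4
# git now checks out a midpoint commit; after each reboot-loop test run,
# mark it "git bisect broken" (~26% again) or "git bisect fixed" (~0%)
# until a single commit remains.
```

The thread's actual workflow used prebuilt test kernels from kernel.ubuntu.com rather than local builds, but the marking logic is the same.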
** Attachment added: "bond_check_xenial_mainline_4.12-rc3_full.log"