[Kernel-packages] [Bug 1226855] Re: Cannot use open-iscsi inside LXC container
Adding Bootstack to watch this bug, as we are taking ownership of
charm-iscsi-connector, which would be ideal to test within lxd
confinement but currently requires a VM or metal for functional tests.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1226855

Title:
  Cannot use open-iscsi inside LXC container

Status in linux package in Ubuntu:
  Confirmed
Status in lxc package in Ubuntu:
  Confirmed

Bug description:
  Trying to use open-iscsi from within an LXC container, but the iscsi
  netlink socket does not support multiple namespaces, causing an
  "iscsid: sendmsg: bug? ctrl_fd 6" error and failure.

  Command attempted:
    iscsiadm -m node -p $ip:$port -T $target --login

  Results in:
    Exit code: 18
    Stdout: 'Logging in to [iface: default, target: $target, portal: $ip,$port] (multiple)'
    Stderr: 'iscsiadm: got read error (0/0), daemon died?
    iscsiadm: Could not login to [iface: default, target: $target, portal: $ip,$port].
    iscsiadm: initiator reported error (18 - could not communicate to iscsid)
    iscsiadm: Could not log into all portals'

  ProblemType: Bug
  DistroRelease: Ubuntu 13.04
  Package: lxc 0.9.0-0ubuntu3.4
  ProcVersionSignature: Ubuntu 3.8.0-30.44-generic 3.8.13.6
  Uname: Linux 3.8.0-30-generic x86_64
  ApportVersion: 2.9.2-0ubuntu8.3
  Architecture: amd64
  Date: Tue Sep 17 14:38:08 2013
  InstallationDate: Installed on 2013-01-15 (245 days ago)
  InstallationMedia: Xubuntu 12.10 "Quantal Quetzal" - Release amd64 (20121017.1)
  MarkForUpload: True
  SourcePackage: lxc
  UpgradeStatus: Upgraded to raring on 2013-05-16 (124 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1226855/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
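Since the iscsi netlink control socket only works in the initial network
namespace, one stopgap sketch (our assumption for illustration, not a fix
confirmed in this bug) is to perform the login on the host and hand the
resulting block device to the container; "mycontainer" and /dev/sdb are
hypothetical names:

```shell
# Assumption: host-side login is acceptable; $ip, $port and $target are
# the placeholders from the report above.
iscsiadm -m node -p "$ip:$port" -T "$target" --login   # run on the host

# The session's block device (say /dev/sdb) can then be passed into a
# running container, e.g. with lxc-device:
lxc-device -n mycontainer add /dev/sdb
```

This keeps all netlink traffic with iscsid in the host namespace; the
container only ever sees an ordinary block device.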
[Kernel-packages] [Bug 1834213] Re: After kernel upgrade, nf_conntrack_ipv4 module unloaded, no IP traffic to instances
Oddly, this did not happen on all hosts with this kernel version; it was
pseudo-random, affecting roughly 30-40% of them. There must be another
variable at play.

https://bugs.launchpad.net/bugs/1834213

Title:
  After kernel upgrade, nf_conntrack_ipv4 module unloaded, no IP
  traffic to instances

Status in OpenStack neutron-openvswitch charm:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  With an environment running Xenial-Queens, and having just upgraded
  the linux-image-generic kernel for MDS patching, a few of our
  hypervisor hosts that were rebooted (3 out of 100) ended up dropping
  IP (tcp/udp) ingress traffic.

  It turns out that the nf_conntrack module was loaded, but
  nf_conntrack_ipv4 was not, and the traffic was being dropped by this
  rule:

    table=72, n_packets=214989, priority=50,ct_state=+inv+trk actions=resubmit(,93)

  The ct_state "inv" means invalid conntrack state, which the manpage
  describes as:

    The state is invalid, meaning that the connection tracker couldn’t
    identify the connection. This flag is a catch-all for problems in
    the connection or the connection tracker, such as:
      • L3/L4 protocol handler is not loaded/unavailable. With the
        Linux kernel datapath, this may mean that the nf_conntrack_ipv4
        or nf_conntrack_ipv6 modules are not loaded.
      • L3/L4 protocol handler determines that the packet is malformed.
      • Packets are unexpected length for protocol.

  It appears that patching the OS of a hypervisor that is not running
  instances may fail to update the initrd to load nf_conntrack_ipv4
  (and/or _ipv6). I couldn't find anywhere in the charm code that this
  module would be loaded unless the charm's "harden" option is used on
  the nova-compute charm (see charmhelpers contrib/host templates). It
  is unset in our environment, so we are not using any special module
  probing.

  Did nf_conntrack_ipv4 get split out from nf_conntrack in recent
  kernel upgrades, or is it possible that the charm should define a
  modprobe file if we have the OVS firewall driver configured?

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1834213/+subscriptions
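A possible mitigation (our assumption, not something the charm does
today) is to load the L3 conntrack helpers explicitly and persist that
via a systemd modules-load fragment; the file name conntrack.conf is
arbitrary, and the module names are the 4.4/Xenial-era ones (newer
kernels fold this functionality into nf_conntrack itself):

```shell
# Load the conntrack L3 helpers now (-a loads multiple modules).
modprobe -a nf_conntrack_ipv4 nf_conntrack_ipv6

# Persist across reboots via systemd-modules-load(8).
printf 'nf_conntrack_ipv4\nnf_conntrack_ipv6\n' \
  > /etc/modules-load.d/conntrack.conf
```

With this in place, the OVS `ct_state=+inv+trk` drop rule should no
longer catch otherwise-valid IPv4/IPv6 flows after a reboot.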
[Kernel-packages] [Bug 1772671] Re: Kernel produces empty lines in /proc/PID/status
Never mind, I see the patch is a kernel fix; I will upgrade my host.

https://bugs.launchpad.net/bugs/1772671

Title:
  Kernel produces empty lines in /proc/PID/status

Status in iotop package in Ubuntu:
  Invalid
Status in linux package in Ubuntu:
  Invalid
Status in iotop source package in Xenial:
  Invalid
Status in linux source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  The CVE-2018-3639 fix for Xenial introduced a double newline sequence
  in the /proc/PID/status files. This breaks some userspace tools, such
  as iotop, that parse those files.

  [Test Case]
  Incorrect output in 4.4.0-127.153-generic:
    $ cat /proc/self/status
    ...
    Seccomp:	0

    Speculation_Store_Bypass:	thread vulnerable
    ...

  Expected output:
    $ cat /proc/self/status
    ...
    Seccomp:	0
    Speculation_Store_Bypass:	thread vulnerable
    ...

  [Regression Potential]
  None

  [Original Report]
  Hello, after running updates today to
  linux-image-4.4.0-127-generic_4.4.0-127.153 and rebooting, I noticed
  that iotop is not working any more. The reason is empty lines in
  /proc/PID/status, which confuse iotop (and me). In the new output
  there is an empty line between Seccomp and Speculation_Store_Bypass:

    Seccomp:	0

    Speculation_Store_Bypass:	vulnerable

  Speculation_Store_Bypass seems to be new in /proc/PID/status,
  possibly related to the spectre/meltdown patches. iotop is the first
  application failing here, but I am afraid there are more. Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iotop/+bug/1772671/+subscriptions
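Until the fixed kernel lands, parsers can defend themselves by skipping
empty lines. A minimal sketch, using sample text that reproduces the
broken 4.4.0-127 output rather than reading a live /proc file:

```shell
# Reproduce the broken output (spurious blank line between the two
# fields) and filter it: awk 'NF' drops lines with zero fields, i.e.
# empty lines, leaving only the key/value records.
printf 'Seccomp:\t0\n\nSpeculation_Store_Bypass:\tthread vulnerable\n' \
  | awk 'NF'
```

The same `awk 'NF'` filter can be applied directly to
/proc/PID/status before line-by-line parsing.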
[Kernel-packages] [Bug 1772671] Re: Kernel produces empty lines in /proc/PID/status
This needs to be backported to trusty for users of
linux-image-generic-lts-xenial.

https://bugs.launchpad.net/bugs/1772671

Title:
  Kernel produces empty lines in /proc/PID/status

Status in iotop package in Ubuntu:
  Invalid
Status in linux package in Ubuntu:
  Invalid
Status in iotop source package in Xenial:
  Invalid
Status in linux source package in Xenial:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iotop/+bug/1772671/+subscriptions
[Kernel-packages] [Bug 1759787] Re: Very inaccurate TSC clocksource with kernel 4.13 on selected CPUs
** Tags added: canonical-bootstack

-- 
https://bugs.launchpad.net/bugs/1759787

Title:
  Very inaccurate TSC clocksource with kernel 4.13 on selected CPUs

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Artful:
  Won't Fix

Bug description:
  On kernel 4.13.0-37-generic, HP ProLiant DL380 Gen10 systems have
  been observed with very large clock offsets, as measured by NTP.
  Over the past few days on one of our production systems, we've used
  3 different kernels: https://pastebin.ubuntu.com/p/nDkkgRqdtv/

  All of these kernels default to the TSC clocksource, which is
  supposed to be very reliable on Skylake-X CPUs. On 4.4
  (linux-image-generic-lts-xenial) it works as expected; on 3.13
  (trusty default kernel) it works a little worse; and on 4.13
  (linux-image-generic-hwe-16.04) it is much worse. Today I switched
  4.13 from the TSC clocksource to the HPET clocksource and it
  improved the situation dramatically.

  I've produced loopstats & peerstats graphs from NTP corresponding to
  the dates in the pastebin above and placed them at
  https://people.canonical.com/~paulgear/ntp/.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: linux-image-4.13.0-37-generic 4.13.0-37.42
  ProcVersionSignature: User Name 4.13.0-37.42~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-37-generic x86_64
  AlsaDevices:
    total 0
    crw-rw 1 root audio 116,  1 Mar 27 18:26 seq
    crw-rw 1 root audio 116, 33 Mar 27 18:26 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.14.1-0ubuntu3.27
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
  CurrentDmesg:
    [  6280.259121] perf: interrupt took too long (2505 > 2500), lowering kernel.perf_event_max_sample_rate to 79750
    [ 10463.378558] perf: interrupt took too long (3133 > 3131), lowering kernel.perf_event_max_sample_rate to 63750
    [ 32314.949747] perf: interrupt took too long (4000 > 3916), lowering kernel.perf_event_max_sample_rate to 5
    [129804.100274] clocksource: Switched to clocksource hpet
    [132747.312089] perf: interrupt took too long (5004 > 5000), lowering kernel.perf_event_max_sample_rate to 39750
  Date: Thu Mar 29 07:45:22 2018
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lsusb:
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 002: ID 0bda:0329 Realtek Semiconductor Corp.
    Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 002 Device 002: ID 0424:2660 Standard Microsystems Corp. Hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  MachineType: HPE ProLiant DL380 Gen10
  PciMultimedia:
  ProcFB: 0 mgadrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.13.0-37-generic root=UUID=b33fdcbd-a949-41a0-86d2-03d0c6808284 ro console=tty0 console=ttyS0,115200
  RelatedPackageVersions:
    linux-restricted-modules-4.13.0-37-generic N/A
    linux-backports-modules-4.13.0-37-generic  N/A
    linux-firmware                             1.127.24
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  WifiSyslog:
  dmi.bios.date: 02/15/2018
  dmi.bios.vendor: HPE
  dmi.bios.version: U30
  dmi.board.name: ProLiant DL380 Gen10
  dmi.board.vendor: HPE
  dmi.chassis.type: 23
  dmi.chassis.vendor: HPE
  dmi.modalias: dmi:bvnHPE:bvrU30:bd02/15/2018:svnHPE:pnProLiantDL380Gen10:pvr:rvnHPE:rnProLiantDL380Gen10:rvr:cvnHPE:ct23:cvr:
  dmi.product.family: ProLiant
  dmi.product.name: ProLiant DL380 Gen10
  dmi.sys.vendor: HPE

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759787/+subscriptions
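The TSC-to-HPET switch described in the report can be done at runtime
through the kernel's standard clocksource sysfs interface (root
required); this is a sketch of the procedure, not charm or package
behaviour:

```shell
# List the clocksources the kernel considers usable, and the active one.
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Switch to HPET immediately (takes effect without a reboot).
echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource
```

To make the change persistent, boot with `clocksource=hpet` on the
kernel command line (e.g. via GRUB_CMDLINE_LINUX_DEFAULT in
/etc/default/grub, followed by update-grub).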
[Kernel-packages] [Bug 1757277] Re: soft lockup from bcache leading to high load and lockup on trusty
Joseph, I'm currently testing a 4.15.0-13 kernel from the
xenial-16.04-edge path on these hosts. I had the issue exhibit just
before the kernel change, so we should know within a couple of days
whether that helps. Unfortunately, the logs for this system beyond
those already shared are not publicly available.

https://bugs.launchpad.net/bugs/1757277

Title:
  soft lockup from bcache leading to high load and lockup on trusty

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress

Bug description:
  I have an environment with Dell R630 servers with RAID controllers
  with two virtual disks and 22 passthru devices. 2 SAS SSDs and 20
  HDDs are set up in 2 bcache cache sets, with the resulting 20
  mounted xfs filesystems running on bcache, backing an 11-node swift
  cluster (one zone has 1 fewer nodes).

  Two of the zones have these nodes as described above, and they
  appear to be exhibiting soft lockups in the bcache thread of the
  kernel, causing other kernel threads to go into i/o blocking state
  and keeping processes on any bcache device from making progress.
  Disk access to the virtual disks mounted without bcache is still
  possible when this lockup occurs.
  https://pastebin.ubuntu.com/p/mtn47QqBJ3/

  There are several soft lockup messages in dmesg, and many of the
  stack dumps are locked inside bch_writeback_thread():

    static int bch_writeback_thread(void *arg)
    {
        [...]
        while (!kthread_should_stop()) {
            down_write(&dc->writeback_lock);
            [...]
        }

  One coredump is found when kswapd is doing reclaim on the xfs inode
  cache:

    __xfs_iflock(
        struct xfs_inode *ip)
    {
        do {
            prepare_to_wait_exclusive(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
            if (xfs_isiflocked(ip))
                io_schedule();
        } while (!xfs_iflock_nowait(ip));

  - Possible fix commits:
    1). 9baf30972b55 bcache: fix for gc and write-back race
        https://www.spinics.net/lists/linux-bcache/msg04713.html

  - Related discussions:
    1). Re: [PATCH] md/bcache: Fix a deadlock while calculating writeback rate
        https://www.spinics.net/lists/linux-bcache/msg04617.html
    2). Re: hang during suspend to RAM when bcache cache device is attached
        https://www.spinics.net/lists/linux-bcache/msg04636.html

  We are running trusty/mitaka swift storage on these nodes with the
  4.4.0-111 kernel (linux-image-generic-lts-xenial).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1757277/+subscriptions
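When the lockup hits, it can help to confirm that writeback is the
stuck piece. A hedged diagnostic sketch: the sysfs attributes used here
(`state`, `dirty_data`) are the standard bcache interface, but device
names and layout depend on the local setup:

```shell
# Dump writeback state for every bcache backing device found in sysfs.
for d in /sys/block/bcache*/bcache; do
    [ -d "$d" ] || continue          # skip if no bcache devices exist
    printf '%s: state=%s dirty=%s\n' \
        "$d" "$(cat "$d/state")" "$(cat "$d/dirty_data")"
done
```

A device stuck with a large, unchanging `dirty_data` while softlockup
stacks point at bch_writeback_thread() is consistent with the
writeback_lock deadlock discussed in the linked threads.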
[Kernel-packages] [Bug 1757277] [NEW] soft lockup from bcache leading to high load and lockup on trusty
Public bug reported:

I have an environment with Dell R630 servers with RAID controllers
with two virtual disks and 22 passthru devices. 2 SAS SSDs and 20 HDDs
are set up in 2 bcache cache sets, with the resulting 20 mounted xfs
filesystems running on bcache, backing an 11-node swift cluster (one
zone has 1 fewer nodes).

Two of the zones have these nodes as described above, and they appear
to be exhibiting soft lockups in the bcache thread of the kernel,
causing other kernel threads to go into i/o blocking state and keeping
processes on any bcache device from making progress. Disk access to
the virtual disks mounted without bcache is still possible when this
lockup occurs. https://pastebin.ubuntu.com/p/mtn47QqBJ3/

There are several soft lockup messages in dmesg, and many of the stack
dumps are locked inside bch_writeback_thread():

  static int bch_writeback_thread(void *arg)
  {
      [...]
      while (!kthread_should_stop()) {
          down_write(&dc->writeback_lock);
          [...]
      }

One coredump is found when kswapd is doing reclaim on the xfs inode
cache:

  __xfs_iflock(
      struct xfs_inode *ip)
  {
      do {
          prepare_to_wait_exclusive(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
          if (xfs_isiflocked(ip))
              io_schedule();
      } while (!xfs_iflock_nowait(ip));

- Possible fix commits:
  1). 9baf30972b55 bcache: fix for gc and write-back race
      https://www.spinics.net/lists/linux-bcache/msg04713.html

- Related discussions:
  1). Re: [PATCH] md/bcache: Fix a deadlock while calculating writeback rate
      https://www.spinics.net/lists/linux-bcache/msg04617.html
  2). Re: hang during suspend to RAM when bcache cache device is attached
      https://www.spinics.net/lists/linux-bcache/msg04636.html

We are running trusty/mitaka swift storage on these nodes with the
4.4.0-111 kernel (linux-image-generic-lts-xenial).

** Affects: linux (Ubuntu)
     Importance: Undecided
         Status: Incomplete

** Tags: canonical-bootstack trusty

-- 
https://bugs.launchpad.net/bugs/1757277

Title:
  soft lockup from bcache leading to high load and lockup on trusty

Status in linux package in Ubuntu:
  Incomplete

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1757277/+subscriptions