[Bug 1488426] Re: High CPU usage of kworker/ksoftirqd

2019-02-10 Thread JuanJo Ciarlante
FYI still happening to me on 18.04 with the HWE kernel, similar behavior to #38: kworker with steady high CPU usage after un-docking; re-docking didn't solve it though. kernel: 4.18.0-15-generic, hardware: Thinkpad x270 + Thinkpad Ultra Dock, network: enp0s31f6 (dock eth) and wlp3s0 up

[Bug 1670959] Re: systemd-resolved using 100% CPU

2019-01-30 Thread JuanJo Ciarlante
For other souls facing this "Medium" issue, a hammer-ish workaround that works for me:
1) Run: apt-get install cpulimit
2) Edit /lib/systemd/system/systemd-resolved.service:
2a) Comment out: #Type=notify
2b) Replace line (may want to remove the -k to let cpulimit throttle it):
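
The truncated step 2b above replaces the unit's ExecStart with a cpulimit wrapper. A sketch of what the edited unit might end up looking like (the cpulimit invocation and the 15% figure are assumptions, not the author's exact line):

```ini
# /lib/systemd/system/systemd-resolved.service (excerpt, sketch)
[Service]
# 2a) notify-style startup no longer works once the daemon is wrapped:
#Type=notify
# 2b) wrap the daemon with cpulimit; drop -k to throttle instead of kill:
ExecStart=/usr/bin/cpulimit -k -l 15 /lib/systemd/systemd-resolved
```

Run `systemctl daemon-reload` and restart the service afterwards; note that edits to the shipped unit are undone by package upgrades, so a drop-in override would be the more durable variant of the same hammer.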

[Bug 1670959] Re: systemd-resolved using 100% CPU

2019-01-23 Thread JuanJo Ciarlante
Also happening to me after a 16.04 -> 18.04 LTS upgrade (via do-release-upgrade) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1670959

[Bug 1742602] Re: Blank screen when starting X after upgrading from 4.10 to 4.13.0-26

2018-02-19 Thread JuanJo Ciarlante
FYI this is also happening for me, LTS 16.04.3 + HWE (kernel and xorg pkgs), Thinkpad x270 w/ Integrated Graphics Chipset: Intel(R) HD Graphics 620.

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2017-06-07 Thread JuanJo Ciarlante
Indeed that had been the case, thx for replying. ** Changed in: iproute2 (Ubuntu) Status: Confirmed => Invalid ** Changed in: linux (Ubuntu) Status: Incomplete => Invalid

[Bug 1695929] [NEW] vlan on top of bond interfaces assumes bond[0-9]+ naming

2017-06-05 Thread JuanJo Ciarlante
Public bug reported: Context: deploying nodes via MAAS with bonds and vlans on top of them, in particular for the example below: bond-stg and bond-stg.600 ~# dpkg-query -W vlan vlan 1.9-3.2ubuntu1.16.04.1 * /etc/network/interfaces excerpt as set up by MAAS (obfuscated for ip and mac
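
The reported failure mode reduces to a name check: the vlan hooks only act on bond interfaces whose name looks like `bond[0-9]+` (the exact pattern used by the hook is an assumption here), so a MAAS-style name like bond-stg is silently skipped:

```shell
# Illustrative check: kernel-style bond names match, MAAS-style names do not.
pattern='^bond[0-9]+$'
for ifname in bond0 bond1 bond-stg; do
  if printf '%s\n' "$ifname" | grep -Eq "$pattern"; then
    echo "$ifname: handled by the vlan hook"
  else
    echo "$ifname: skipped (name does not match)"
  fi
done
```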

[Bug 1679823] Re: bond0: Invalid MTU 9000 requested, hw max 1500 with kernel 4.8 / 4.10 in XENIAL LTS

2017-05-12 Thread JuanJo Ciarlante
We really need to get this fixed for xenial's HWE kernels: we have several BootStacks running with them, mainly for latest needed drivers while keeping LTS (mellanox for VNFs, as an example) - all these are now obviously at risk on the next reboot. Note also that recovering from this issue does

[Bug 1602057] Re: [SRU] (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

2017-04-07 Thread JuanJo Ciarlante
FYI we're also hitting this on trusty/mitaka for what looks like incompletely deleted instances: * still running at hypervisor, ie virsh dominfo UUID # shows it ok * deleted both at nova 'instances' and 'block_device_mapping' tables. Once certain it's still running at hypervisor, our

[Bug 1668123] Re: lxc fails to start with cgroup error

2017-03-09 Thread JuanJo Ciarlante
FYI because of other maintenance I had to do on the affected nodes, after upgrading to linux-generic-lts-xenial 4.4.0-66-generic this issue didn't show anymore.

[Bug 1582394] Re: [16.04, lxc] Failed to reset devices.list on ...

2017-02-08 Thread JuanJo Ciarlante
I can't make it work even after manually installing squashfuse (FYI lxc created by juju deploy cs:ubuntu --to lxc:1 ) root@juju-machine-1-lxc-14:~# uname -a Linux juju-machine-1-lxc-14 4.8.0-34-generic #36~16.04.1-Ubuntu SMP Wed Dec 21 18:55:08 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

[Bug 1638700] Re: hio: SSD data corruption under stress test

2016-11-21 Thread JuanJo Ciarlante
FTR/FYI (as per chatter w/kamal) we're waiting for >= 4.8.0-28 to be available at https://launchpad.net/ubuntu/+source/linux-hwe-edge

[Bug 1582278] Re: [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from one NUMA node and PCI device from another NUMA node.

2016-11-17 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

[Bug 1635223] [NEW] please include mlx5_core modules in linux-image-generic package

2016-10-20 Thread JuanJo Ciarlante
Public bug reported: Because linux-image-generic pkg doesn't include mlx5_core, stock ubuntu cloud-images can't be used by VM guests using mellanox VFs, forcing the creation of an ad-hoc cloud image with added linux-image-extra-virtual ** Affects: cloud-images Importance: Undecided
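
Until mlx5_core ships in the generic image, one workaround sketch (assuming the guest has network access at first boot) is to pull the extras package via cloud-init instead of building an ad-hoc image:

```yaml
#cloud-config
packages:
  - linux-image-extra-virtual
```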

[Bug 1601986] Re: RuntimeError: osrandom engine already registered

2016-07-12 Thread JuanJo Ciarlante
** Also affects: python-cryptography (Ubuntu) Importance: Undecided Status: New

[Bug 1596866] Re: NMI watchdog: Watchdog detected hard LOCKUP on cpu 0 - Xenial - Python

2016-06-29 Thread JuanJo Ciarlante
Some ~recent alike finding, in case it helps: https://github.com/TobleMiner/wintron7.0/issues/2 - worked around with clocksource=tsc, guess that ntpq should also show a large drift.

[Bug 1570657] Re: Bash completion needed for versioned juju commands

2016-06-20 Thread JuanJo Ciarlante
See https://bugs.launchpad.net/juju-core/+bug/1588403/comments/5

[Bug 1588403] Re: Tab completion missing in Juju 2.0 betas

2016-06-20 Thread JuanJo Ciarlante
See https://github.com/juju/juju/pull/5057 for *beta9* and above (ie "applications" instead of "services") - can try it with:
sudo -i # become root
rm /etc/bash_completion.d/juju2
wget -O /etc/bash_completion.d/juju-2.0 \

[Bug 1570657] Re: Bash completion needed for versioned juju commands

2016-06-20 Thread JuanJo Ciarlante
See updated https://github.com/juju/juju/pull/5057; in short it adds the two files below to /etc/bash_completion.d/ (sort == load order) -> juju-2.0 (added), juju-core (existing from juju1), juju-version (added), so that: * juju-2.0: completion for `juju-2.0`, but also plain `juju` (ie
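
The "sort == load order" remark is the whole trick: bash-completion sources /etc/bash_completion.d/* in lexicographic order, so the added file names were chosen to load in the right sequence:

```shell
# Lexicographic order ('2' sorts before letters in the C locale),
# hence juju-2.0 loads first, then juju-core, then juju-version.
printf '%s\n' juju-core juju-version juju-2.0 | LC_ALL=C sort
# -> juju-2.0
#    juju-core
#    juju-version
```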

[Bug 1567895] Re: openstack volume create does not support --snapshot

2016-06-05 Thread JuanJo Ciarlante
** Also affects: python-openstackclient (Ubuntu) Importance: Undecided Status: New

[Bug 1518430] Re: liberty: ~busy loop on epoll_wait being called with zero timeout

2016-04-25 Thread JuanJo Ciarlante
For experimenting purposes / measuring the difference against what would be a (better) behaving epoll_wait usage, I created: https://github.com/jjo/dl-hack-lp1518430 , which implements hooking epoll_wait() and select() (via LD_PRELOAD) to limit the rate of calls with zero timeouts. WfM'd on an
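
The busy loop is easy to spot in an strace capture: a high rate of epoll_wait calls whose timeout argument (the last one) is 0. A self-contained illustration with a canned sample (the sample lines are fabricated for the demo, not taken from the bug):

```shell
# Count zero-timeout epoll_wait calls in an strace-style capture.
cat > /tmp/strace-sample.txt <<'EOF'
epoll_wait(4, [], 1023, 0) = 0
epoll_wait(4, [], 1023, 0) = 0
epoll_wait(4, [{EPOLLIN, ...}], 1023, 1000) = 1
EOF
zero_timeout=$(grep -c ', 0) = ' /tmp/strace-sample.txt)
echo "zero-timeout epoll_wait calls: $zero_timeout"   # -> 2
```

In the affected services the zero-timeout calls dominate, which is exactly what the LD_PRELOAD hook linked above rate-limits.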

[Bug 1543046] Re: thermald spamming kernel log when updating powercap RAPL powerlimit

2016-04-11 Thread JuanJo Ciarlante
FYI I'm running trusty with linux-generic-lts-xenial 4.4.0.18.10, and was getting the same thermald spamming until I manually upgraded to the package as per comment #8 above: upgrade thermald:amd64 1.4.3-5~14.04.2 -> 1.4.3-5~14.04.3; then no more kern.log spamming, thanks!

[Bug 1531963] Re: [SRU] trusty/icehouse neutron-plugin-openvswitch-agent: lvm.tun_ofports.remove crashes with KeyError

2016-03-02 Thread JuanJo Ciarlante
Chris: confirming this bug most likely fixed indeed by 1:2014.1.5-0ubuntu3, as there have been no further alerts from missing tun_ids since it got installed 1 week ago (recall we had been getting several of those per week). Thanks! :) --J

[Bug 1531963] Re: [SRU] trusty/icehouse neutron-plugin-openvswitch-agent: lvm.tun_ofports.remove crashes with KeyError

2016-02-24 Thread JuanJo Ciarlante
Thanks Chris for the updates - FYI we've upgraded all of our compute nodes to 1:2014.1.5-0ubuntu3 from proposed, no (extra) issues so far after some hours; FYI this stack has ~30 nodes, ~1k+ active instances. We expect this change to (obviously) stop those KeyError messages at log, and likely also

[Bug 1531963] [NEW] trusty/icehouse neutron-plugin-openvswitch-agent: lvm.tun_ofports.remove crashes with KeyError

2016-01-07 Thread JuanJo Ciarlante
Public bug reported: Filing this on the ubuntu/neutron package, as neutron itself is EOL'd for Icehouse. FYI this is a non-HA icehouse/trusty deploy using the server team's juju charms. On one of our production environments with a rather high rate of API calls (esp. for transient VMs from CI), we

[Bug 1460164] Re: upgrade of openvswitch-switch can sometimes break neutron-plugin-openvswitch-agent

2015-12-17 Thread JuanJo Ciarlante
FYI today's openvswitch-switch upgrade triggered a cluster-wide outage on one (or more) of our production openstacks. ** Tags added: canonical-bootstack

[Bug 1521279] Re: check_haproxy.sh is broken for openstack liberty's haproxy 1.5.14

2015-11-30 Thread JuanJo Ciarlante
** Also affects: keystone (Juju Charms Collection) Importance: Undecided Status: New ** Also affects: cinder (Juju Charms Collection) Importance: Undecided Status: New ** Also affects: glance (Ubuntu) Importance: Undecided Status: New ** No longer affects: glance

[Bug 1518430] [NEW] liberty: ~busy loop on epoll_wait being called with zero timeout

2015-11-20 Thread JuanJo Ciarlante
Public bug reported: Context: openstack juju/maas deploy using 1510 charms release on trusty, with:   openstack-origin: "cloud:trusty-liberty"   source: "cloud:trusty-updates/liberty" * Several openstack nova- and neutron- services, at least: nova-compute, neutron-server, nova-conductor,

[Bug 1511495] [NEW] lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles

2015-10-29 Thread JuanJo Ciarlante
Public bug reported: lxc packages: *** 1.0.7-0ubuntu0.9 0 500 http://archive.ubuntu.com//ubuntu/ trusty-updates/main amd64 Packages lxc apparmor profiles loading fails with: root@host:~# apparmor_parser -r /etc/apparmor.d/lxc/lxc-default-with-mounting Found reference to variable PROC,

[Bug 1511495] Re: lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles

2015-10-29 Thread JuanJo Ciarlante
** Changed in: lxc (Ubuntu) Status: New => Invalid

[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-30 Thread JuanJo Ciarlante
w000T! \o/ using @jsalisbury's kernel from comment #7, 3.19.0-30-generic #33~lp1497812, I can't reproduce the failing behavior under the same host + setup - no mirrored frames or alike dmesg - containers networking ok. Comparison between stock vivid 3.19.0-30-generic #33~14.04.1-Ubuntu and above: -

[Bug 1500981] Re: juju-db segfault while syncing with replicas

2015-09-30 Thread JuanJo Ciarlante
@gz: I got this at our staging environment, where we re-deploy HA'd juju + openstacks several times a week (or day); 1st time I positively observed this behavior, so I'd guess it's unfortunately a subtle race condition or alike. I did save /var/lib/juju/db/, /var/log/syslog and

[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-29 Thread JuanJo Ciarlante
Confirming _not_ observing reported issue on an equivalent setup w/ LXCs frames hitting phy interfaces ( bridged towards br0 -> bond0 -> {eth3, eth4} ): * linux 4.2.0-12-generic #14~14.04.1-Ubuntu (from canonical-kernel-team/ppa) * i40e version 1.3.4-k # ethtool -i eth3 driver: i40e version:

[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-21 Thread JuanJo Ciarlante
FYI we found these issues while deploying openstack via juju/maas over a pool of 8 nodes having 4x i40e NICs, where we also found linux-hwe-generic-trusty (lts-utopic) to be unreliable from its old i40e driver (0.4.10-k). Below is a summary of our i40e findings using lts-vivid and lts-utopic re:

[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-21 Thread JuanJo Ciarlante
ERRATA on comment #2: OK i40e driver version is 1.2.48, as per original report URL. Comment #2 table is actually:
#1 3.19.0-28-generic w/stock 1.2.2-k: non-phy mirrored frames (this bug)
#2 3.16.0-49-generic w/stock 0.4.10-k: unreliable deploys
#3 3.19.0-28-generic w/built 1.2.48: OK (*)
#4

[Bug 1497812] [NEW] i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-20 Thread JuanJo Ciarlante
Public bug reported: Using 3.19.0-28-generic #30~14.04.1-Ubuntu with stock i40e driver version 1.2.2-k makes every 'non physical' MAC output frame appear as copied back at input, as if the switch was doing frame 'mirroring' (and/or hair-pinning). FYI same setup, with i40e upgraded to 1.2.48 from

[Bug 1490727] Re: "Invalid IPC credentials" after corosync, pacemaker service restarts

2015-09-04 Thread JuanJo Ciarlante
Thanks for the quick turnaround - could you please backport the fix to 1507 trunk? We have several stacks where we need to manually apply the above workaround for corosync/pacemaker to behave properly, and several coming down the line before 1510. FYI while fixing hacluster trunk (essentially came

[Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2015-08-31 Thread JuanJo Ciarlante
After trying several corosync/pacemaker restarts without luck, I was able to work around this by adding an 'uidgid' entry for hacluster:haclient: * from /var/log/syslog: Aug 31 18:33:18 juju-machine-3-lxc-3 corosync[901082]: [MAIN ] Denied connection attempt from 108:113 $ getent passwd 108
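
For reference, the shape of such an entry (path and syntax per corosync's uidgid drop-in mechanism; whether to use the names or the numeric 108:113 from the denied-connection log is a matter of taste):

```
# /etc/corosync/uidgid.d/hacluster (sketch)
uidgid {
        uid: hacluster
        gid: haclient
}
```

Restart corosync (then pacemaker) after adding it so the new ACL takes effect.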

[Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2015-08-31 Thread JuanJo Ciarlante
FYI to re-check the workaround (then possible actual fix), I kicked corosync+pacemaker on cinder, glance services deployed with juju: $ juju run --service=cinder,glance "service corosync restart; service pacemaker restart" , which broke pacemaker start on all of them, with the same "Invalid IPC

[Bug 1487190] Re: nicstat fails on more than ~30 interfaces

2015-08-20 Thread JuanJo Ciarlante
fixed by https://github.com/jjo/nicstat/commit/3c2407da66c2fd2914e7f362f41f729cc21ff1e4, see strace comparison (stock vs compiled with above) at a host with ~270 interfaces: http://paste.ubuntu.com/12137566/

[Bug 1487190] [NEW] nicstat fails on more than ~30 interfaces

2015-08-20 Thread JuanJo Ciarlante
Public bug reported: nicstat falsely assumes that a single read from /proc/net/dev will return all its content (even if using a ~large buffer, 128K) ** Affects: nicstat (Ubuntu) Importance: Undecided Status: New ** Tags: canonical-bootstack
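
This is the classic short-read bug: a single read(2) may return fewer bytes than the file holds once /proc/net/dev grows past one buffer, so a robust reader must loop until EOF. A stand-in demo using a regular file and a deliberately small 4 KiB read (the synthetic file here stands in for a large /proc/net/dev):

```shell
# Build a ~5.7 KB stand-in for /proc/net/dev on a host with 300 interfaces.
printf 'iface%04d: 0 0 0 0\n' $(seq 1 300) > /tmp/netdev-sample
# One bounded read truncates; reading the whole file does not.
single_read=$(dd if=/tmp/netdev-sample bs=4096 count=1 2>/dev/null | wc -l)
full_read=$(wc -l < /tmp/netdev-sample)
echo "single 4KiB read: $single_read lines; full file: $full_read lines"
# -> single 4KiB read: 215 lines; full file: 300 lines
```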

[Bug 1350947] Re: apparmor: no working rule to allow making a mount private

2015-08-19 Thread JuanJo Ciarlante
FYI I'm able to successfully drive netns inside LXC, manually then also via openstack neutron-gateways, via this crafted aa profile: /etc/apparmor.d/lxc/lxc-default-with-netns - https://gist.github.com/jjo/ff32b08e48e4a52bfc36
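
The gist above holds the actual profile; for orientation, a minimal sketch of the kind of rules involved (the profile name, include, and rule set here are assumptions - `ip netns` needs bind and make-private mounts that the stock LXC profile denies):

```
# /etc/apparmor.d/lxc/lxc-default-with-netns (sketch)
profile lxc-container-default-with-netns flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount options=(rw, bind),
  mount options=(rw, make-private),
}
```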

[Bug 1476428] [NEW] haproxy service handling failing to stop old instances

2015-07-20 Thread JuanJo Ciarlante
Public bug reported: On an openstack HA kilo deployment using charms trunks, several services are failing to properly restart haproxy, leaving old instances running; showing cinder/0 as an example: $ juju ssh cinder/0 'pgrep -f haproxy | xargs ps -o pid,ppid,lstart,cmd -p; egrep St.*ing.haproxy

[Bug 1356392] Re: lacks sw raid1 install support

2015-07-09 Thread JuanJo Ciarlante
With maas 1.9 deprecating d-i, what option is left for maas swraid installs? Please consider re-prioritizing.

[Bug 1462466] [NEW] bcache-tools udev rules on trusty lacking util-linux 2.24+ workaround

2015-06-05 Thread JuanJo Ciarlante
Public bug reported: We're using bcache under the trusty HWE kernel (3.16.0-38-generic) with bcache-tools 1.0.7-0ubuntu1 (built from src). As trusty has util-linux 2.20.1, the udev rules for auto-registering bcache devices are skipped: # blkid was run by the standard udev rules # It recognised
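
The skipped rules gate on blkid reporting ID_FS_TYPE=bcache, which util-linux < 2.24 never does. A sketch of the extra rule that works around it by probing with bcache-tools' own helper (helper names and rule details are assumptions; see the package's shipped 69-bcache.rules for the real thing):

```
# /etc/udev/rules.d/69-bcache.rules (sketch)
SUBSYSTEM=="block", ACTION=="add|change", IMPORT{program}="probe-bcache -o udev $tempnode"
ENV{ID_FS_TYPE}=="bcache", RUN+="bcache-register $tempnode"
```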

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-20 Thread JuanJo Ciarlante
** Changed in: linux (Ubuntu) Status: Incomplete => Confirmed

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
With kernels from http://kernel.ubuntu.com/~kernel-ppa/mainline/, I've narrowed down to:
* OK: tc-class-stats.3.10.76-031076-generic.txt: rate 1600bit 2pps backlog 0b 0p requeues 0
* BAD: tc-class-stats.3.11.0-031100rc1-generic.txt: rate 0bit 0pps backlog 0b 0p requeues 0
* BAD:

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
As per comment #13, I've added the following tags: * kernel-fixed-upstream-3.10 * kernel-bug-exists-upstream kernel-bug-exists-upstream-3.11rc1 kernel-bug-exists-upstream-4.1-rc1 Please correct them if I misunderstood the naming convention, FYI my narrowed bisect corresponds to: *** OK ***:

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
FYI peeking at patch-3.11-rc1 shows: [...] - struct gnet_stats_rate_est tcfc_rate_est; + struct gnet_stats_rate_est64 tcfc_rate_est; with its corresponding addition: + * struct gnet_stats_rate_est64 - rate estimator + * @bps: current byte rate + * @pps: current packet rate +

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
@peanlvch: FYI as per comment #4 I already tested v4.1-rc1-vivid, same bad results.

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI tried iproute2-3.19.0, same zero rate output.

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI there are several changes at https://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.10.12 that refer to htb rate handling.

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI this has been reported to debian also (kernel 3.16): https://lists.debian.org/debian-kernel/2014/11/msg00288.html

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI linux-image-4.1.0-040100rc1-generic_4.1.0-040100rc1.201504270235_i386.deb (from ~kernel-ppa) failed the same way.

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
By installing different kernel versions (trusty, manual download and dpkg -i), I narrowed this down to:
- linux-image-3.8.0-44-generic: OK
- linux-image-3.11.0-26-generic: BAD (zero rate counters).
FYI I used this script:
# cat htb.sh
/sbin/tc qdisc del dev eth0 root
/sbin/tc qdisc add dev eth0
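
A quick way to tell a good kernel from an affected one is the per-class `rate` line in `tc -s class show`. Parsing a canned sample of healthy output (the sample text below is illustrative, with non-zero rate figures as seen on unaffected kernels):

```shell
# Canned `tc -s class show dev eth0` output from a non-affected kernel.
cat > /tmp/tc-sample.txt <<'EOF'
class htb 1:10 root prio 0 rate 1Mbit ceil 1Mbit burst 1600b cburst 1600b
 Sent 4080 bytes 40 pkt (dropped 0, overlimits 0 requeues 0)
 rate 1600bit 2pps backlog 0b 0p requeues 0
EOF
# On affected kernels this prints "0bit 0pps" no matter the traffic.
awk '/^ rate /{print $2, $3}' /tmp/tc-sample.txt   # -> 1600bit 2pps
```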

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
** Also affects: linux-meta (Ubuntu)
   Importance: Undecided
   Status: New

[Bug 1379567] Re: maas-proxy is an open proxy with no ACLs and listening on all interfaces

2015-02-11 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2015-01-30 Thread JuanJo Ciarlante
@sinzui: closing this as invalid, as I later confirmed this to be a MTU issue.
** Changed in: juju-core
   Status: Triaged => Invalid
** Changed in: juju-core (Ubuntu)
   Status: New => Invalid

[Bug 1355813] Re: Interface MTU management across MAAS/juju

2015-01-05 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-19 Thread JuanJo Ciarlante
This deployment has 2 metal nodes hosting LXC units (machine: 0, 18), then 'juju deploy cs:ubuntu --to lxc:0' does ok, while '--to lxc:18' was consistently failing as described above. FYI I've worked around this by removing machine 18 down to 'maas ready' and reacquiring it from juju, now all new

[Bug 1393444] [NEW] machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
Public bug reported: FYI this is the same environment from lp#1392810 (1.18->1.19->1.20), juju version: 1.20.11-trusty-amd64. New units deployed (to LXC over maas) stay at agent-state: pending: http://paste.ubuntu.com/9057045/ #1 TCP connects ok to node0:17070 - at the unit:

[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
/var/log/juju/machine-18-lxc-5.log: http://paste.ubuntu.com/9057287/ NOTE there the repeated log stanzas are because of my manual restarts.

[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
strace at both sides (grepped for specific sockets): http://paste.ubuntu.com/9057691/, mind the subsecond date diff.

[Bug 1377964] Re: maas-proxy logrotate permission denied

2014-11-06 Thread JuanJo Ciarlante
Also affected by this issue: 1.7rc1 (upgraded from 1.5.2). At /var/log/syslog:
Nov 6 06:25:01 duck squid3: Cannot open stdio:/var/log/maas/proxy/store.log: (13) Permission denied

[Bug 1382190] Re: LXCs assigned IPs by MAAS DHCP lack DNS PTR entries

2014-10-17 Thread JuanJo Ciarlante
To clarify what's happening with the rabbitmq charm: for its units to be able cluster together, they need to refer to each other by hostname, see [0] which was done based on the observed pattern as per #4,#7 comments above. [0] https://code.launchpad.net/~jjo/charms/trusty/rabbitmq-server/fix-

[Bug 1274947] Re: juju lxc instances deployed via MAAS don't have resolvable hostnames

2014-10-16 Thread JuanJo Ciarlante
FYI this is preventing the current trusty/rabbitmq charm from deploying on MaaS 1.7beta + LXC; 1.5 had at least PTR resolution for every dhcp'd IP, e.g.: IN PTR 10-1-57-22.maas. , while 1.7beta has none AFAICT.
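A quick way to check for the PTR records discussed here is a reverse lookup against the MAAS DNS server. The IP and the 10-1-57-22.maas. naming pattern come from the comment above; the DNS server address (the MAAS region controller) is an assumption:

```shell
# Hypothetical check for the missing PTR records -- replace 192.168.0.1
# with the actual MAAS region controller / DNS server address.
dig +short -x 10.1.57.22 @192.168.0.1
# On MAAS 1.5 this returned a name of the form 10-1-57-22.maas.;
# per this report, on 1.7beta the lookup returns nothing.
```

Since rabbitmq units cluster by hostname, an empty result here is enough to explain the charm failing to deploy.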
