[Bug 1411163] Re: No fdb entries added when failover dhcp and l3 agent together
For anyone who needs to rerun the failing autopkgtest, the steps are:

1. sudo apt-get install autopkgtest qemu-system qemu-utils genisoimage
2. sudo adt-buildvm-ubuntu-cloud -v -r trusty
3. mkdir /tmp/neutron
4. sudo adt-run neutron -U --apt-pocket=proposed --- qemu adt-trusty-amd64-cloud.img -d -o /tmp/neutron/

Ref: http://packaging.ubuntu.com/html/auto-pkg-test.html#executing-the-test

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1411163

Title: No fdb entries added when failover dhcp and l3 agent together

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411163/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1411163] Re: No fdb entries added when failover dhcp and l3 agent together
> Reviewing this SRU it's not clear why the devel task for neutron is Invalid.

The devel task already has the fix: it landed in neutron 7.0.0 (Liberty), and every later release (e.g. Mitaka, Newton) carries it in the upstream neutron code.

> Has this been fixed in Zesty? If so in what release did the fix land?

Yes, same as above: the Newton packages in Zesty already contain the fix in the neutron code.

> Additionally, a "Regression Potential" of None is frowned upon. Is there really no regression potential? If so please explain how this is possible.

I am just backporting the existing Liberty fix down to Icehouse, and every newer release already carries it; given how small the fix is, I really cannot think of a realistic regression.

** Changed in: neutron (Ubuntu Trusty)
   Status: Incomplete => Fix Committed

** Changed in: neutron (Ubuntu Trusty)
   Assignee: (unassigned) => Xiang Hui (xianghui)
[Bug 1624997] Re: live-migration fails because of " Host key verification failed"
Hi KC, I have hit the same problem. Could you describe the fix in detail? What do you mean by a qemu+tcp setup in the libvirtd configuration? Thanks.

** Changed in: nova (Ubuntu)
   Status: Triaged => Confirmed
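For reference, a common qemu+tcp live-migration setup looks like the fragment below. This is an assumption about what KC meant, not a confirmed description of his environment; the exact option names should be checked against the libvirt and nova versions in use.

```ini
# /etc/libvirt/libvirtd.conf -- make libvirtd listen on plain TCP
# (auth_tcp = "none" disables authentication; only safe on a trusted
#  management network)
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/default/libvirt-bin -- start the daemon in listening mode
libvirtd_opts="-d -l"

# nova.conf on the compute nodes -- migrate over TCP instead of SSH,
# which avoids the "Host key verification failed" path entirely
[libvirt]
live_migration_uri = "qemu+tcp://%s/system"
```

With qemu+ssh (the default), the migration URI goes through SSH and fails when the compute nodes' host keys are not mutually known; qemu+tcp sidesteps SSH at the cost of an unauthenticated libvirt socket.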
[Bug 1620897] Re: Fix race on getting close notifier channel
The panic no longer shows up after updating to the package from xenial-proposed.

** Tags removed: verification-needed
** Tags added: verification-done
[Bug 1610784] Re: cloud-init openstack.py code does not recognize network type 'tap'
Verified with the network_data.json below; no errors were found in the cloud-init log.

{
  "services": [{"type": "dns", "address": "10.0.8.1"}],
  "networks": [{"network_id": "bd024b7d-a246-453c-8e72-7216d9539bae",
                "link": "taped13e7a8-06", "type": "ipv4_dhcp", "id": "network0"}],
  "links": [{"ethernet_mac_address": "fa:16:3e:6a:52:32", "mtu": 1458,
             "type": "tap", "id": "taped13e7a8-06",
             "vif_id": "ed13e7a8-065f-47a6-b068-2811abe91dd2"}]
}
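The fixed behaviour can be illustrated with a short Python sketch. This is a simplified illustration of the link-type check, not cloud-init's actual openstack.py code; the KNOWN_PHYSICAL_TYPES name is borrowed from upstream, but the tuple here is abbreviated.

```python
import json

# Simplified from cloud-init's OpenStack datasource (illustration only):
# links whose type is not in this set used to raise ValueError, which is
# exactly what happened for 'tap' links before the fix added 'tap' here.
KNOWN_PHYSICAL_TYPES = (None, "bridge", "ethernet", "ovs", "phy", "tap", "vif")

network_data = json.loads("""
{"services": [{"type": "dns", "address": "10.0.8.1"}],
 "networks": [{"network_id": "bd024b7d-a246-453c-8e72-7216d9539bae",
               "link": "taped13e7a8-06", "type": "ipv4_dhcp", "id": "network0"}],
 "links": [{"ethernet_mac_address": "fa:16:3e:6a:52:32", "mtu": 1458,
            "type": "tap", "id": "taped13e7a8-06",
            "vif_id": "ed13e7a8-065f-47a6-b068-2811abe91dd2"}]}
""")

def check_links(data):
    # Reject unknown link types, accept the rest (as the fixed code does).
    for link in data["links"]:
        if link.get("type") not in KNOWN_PHYSICAL_TYPES:
            raise ValueError("Unknown network_data link type: %s" % link["type"])
    return [l["id"] for l in data["links"]]

ids = check_links(network_data)
print(ids)
```

Running this against the verified network_data.json above accepts the 'tap' link instead of raising.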
[Bug 1610784] Re: cloud-init openstack.py code does not recognize network type 'tap'
@Scott Thanks for the commit; just letting you know that I am setting up the environment and will do the testing.
[Bug 1620897] Re: Fix race on getting close notifier channel
** Description changed:

[OS]
Ubuntu Xenial
etcd 2.2.5

[Error Log]
Sep 2 16:08:16 ubuntu etcd[18180]: panic: net/http: CloseNotify called after ServeHTTP finished
Sep 2 16:08:16 ubuntu etcd[18180]: goroutine 421 [running]:
Sep 2 16:08:16 ubuntu etcd[18180]: net/http.(*response).CloseNotify(0xc8202f64e0, 0x0)
Sep 2 16:08:16 ubuntu etcd[18180]: #011/usr/lib/go/src/net/http/server.go:1535 +0x9d
Sep 2 16:08:16 ubuntu etcd[18180]: github.com/coreos/etcd/proxy.(*reverseProxy).ServeHTTP.func1(0x7fb0024d1a80, 0xc8202f64e0, 0xc8203e4f78, 0xc8202e8a80, 0xc8203e4fa0, 0xc820464f50)
Sep 2 16:08:16 ubuntu etcd[18180]: #011/build/etcd-tG_CNV/etcd-2.2.5+dfsg/obj-x86_64-linux-gnu/src/github.com/coreos/etcd/proxy/reverse.go:107 +0x39
Sep 2 16:08:16 ubuntu etcd[18180]: created by github.com/coreos/etcd/proxy.(*reverseProxy).ServeHTTP
Sep 2 16:08:16 ubuntu etcd[18180]: #011/build/etcd-tG_CNV/etcd-2.2.5+dfsg/obj-x86_64-linux-gnu/src/github.com/coreos/etcd/proxy/reverse.go:113 +0x691
Sep 2 16:08:16 ubuntu systemd[1]: etcd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Sep 2 16:08:16 ubuntu systemd[1]: etcd.service: Unit entered failed state.
Sep 2 16:08:16 ubuntu systemd[1]: etcd.service: Failed with result 'exit-code'.

The etcd proxy process dies and is not restarted by systemd; components that depend on etcd then report connection errors like:

Request to server http://127.0.0.1:4001 failed: MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=4001): Max retries exceeded with url: /v2/keys/calico/dhcp/v1/subnet?waitIndex=1625&recursive=true&wait=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] ECONNREFUSED',))",)

[Root Cause]
The etcd proxy panic occurs in etcd 2.2.5 and was fixed in etcd 2.3.4 (https://github.com/coreos/etcd/pull/5269/files#r62072134). Xenial currently only ships 2.2.5, so the fix needs to be backported.

---

[Impact]
This patch fixes a race on getting the close notifier channel, which caused a panic reported as 'net/http: CloseNotify called after ServeHTTP finished'.

[Test Case]
No special configuration; run etcd in proxy mode and confirm it no longer panics.

[Regression Potential]
etcd no longer crashes every so often while in proxy mode...
[Bug 1620897] Re: Fix race on getting close notifier channel
** Description changed: whitespace-only re-indentation of the [Error Log] section, plus a new SRU template appended:

[Impact]
This patch fixes a race on getting the close notifier channel, which caused a panic reported as 'net/http: CloseNotify called after ServeHTTP finished'.

[Test Case]

[Regression Potential]
[Bug 1620897] Re: Fix race on getting close notifier channel
** Patch added: "xenial-lp1620897.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/etcd/+bug/1620897/+attachment/4735870/+files/xenial-lp1620897.debdiff
[Bug 1620897] [NEW] Fix race on getting close notifier channel
Public bug reported:

[OS]
Ubuntu Xenial
etcd 2.2.5

[Error Log]
Sep 2 16:08:16 ubuntu etcd[18180]: panic: net/http: CloseNotify called after ServeHTTP finished
Sep 2 16:08:16 ubuntu etcd[18180]: goroutine 421 [running]:
Sep 2 16:08:16 ubuntu etcd[18180]: net/http.(*response).CloseNotify(0xc8202f64e0, 0x0)
Sep 2 16:08:16 ubuntu etcd[18180]: #011/usr/lib/go/src/net/http/server.go:1535 +0x9d
Sep 2 16:08:16 ubuntu etcd[18180]: github.com/coreos/etcd/proxy.(*reverseProxy).ServeHTTP.func1(0x7fb0024d1a80, 0xc8202f64e0, 0xc8203e4f78, 0xc8202e8a80, 0xc8203e4fa0, 0xc820464f50)
Sep 2 16:08:16 ubuntu etcd[18180]: #011/build/etcd-tG_CNV/etcd-2.2.5+dfsg/obj-x86_64-linux-gnu/src/github.com/coreos/etcd/proxy/reverse.go:107 +0x39
Sep 2 16:08:16 ubuntu etcd[18180]: created by github.com/coreos/etcd/proxy.(*reverseProxy).ServeHTTP
Sep 2 16:08:16 ubuntu etcd[18180]: #011/build/etcd-tG_CNV/etcd-2.2.5+dfsg/obj-x86_64-linux-gnu/src/github.com/coreos/etcd/proxy/reverse.go:113 +0x691
Sep 2 16:08:16 ubuntu systemd[1]: etcd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Sep 2 16:08:16 ubuntu systemd[1]: etcd.service: Unit entered failed state.
Sep 2 16:08:16 ubuntu systemd[1]: etcd.service: Failed with result 'exit-code'.

The etcd proxy process dies and is not restarted by systemd; components that depend on etcd then report connection errors like:

Request to server http://127.0.0.1:4001 failed: MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=4001): Max retries exceeded with url: /v2/keys/calico/dhcp/v1/subnet?waitIndex=1625&recursive=true&wait=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] ECONNREFUSED',))",)

[Root Cause]
The etcd proxy panic occurs in etcd 2.2.5 and was fixed in etcd 2.3.4 (https://github.com/coreos/etcd/pull/5269/files#r62072134). Xenial currently only ships 2.2.5, so the fix needs to be backported.
** Affects: etcd (Ubuntu)
   Importance: Undecided
   Status: New
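The upstream fix boils down to capturing the close-notification channel while the handler is still running, instead of asking for it from a background goroutine that may outlive the handler. etcd itself is Go; the stdlib Python sketch below only illustrates that ordering fix, with made-up names (Response, serve_fixed), and is not the actual etcd code.

```python
import threading

class Response:
    """Stand-in for Go's http response: close_notify() may only be
    called while the handler is still running (like CloseNotify)."""
    def __init__(self):
        self._in_handler = True
        self._closed = threading.Event()

    def close_notify(self):
        if not self._in_handler:
            # This is the panic the bug reports.
            raise RuntimeError("CloseNotify called after ServeHTTP finished")
        return self._closed

    def finish_handler(self):
        self._in_handler = False

def serve_fixed(resp, results):
    # Fixed ordering: grab the channel *inside* the handler...
    closed = resp.close_notify()
    def watcher():
        # ...so the background watcher never touches resp itself and
        # cannot race with the handler returning.
        closed.wait(timeout=0.1)
        results.append("watcher done")
    t = threading.Thread(target=watcher)
    t.start()
    resp.finish_handler()   # handler returns; the watcher is still safe
    return t

results = []
t = serve_fixed(Response(), results)
t.join()
print(results)
```

The buggy ordering would pass resp into the watcher and call resp.close_notify() there, which raises once finish_handler() has run; that corresponds to the goroutine in proxy/reverse.go calling CloseNotify after ServeHTTP returned.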
[Bug 1521958] Re: rabbit: starvation of connections for reply
Used oslo.messaging 1.3.0-0ubuntu1.5 from trusty-proposed and confirmed the problem no longer occurs.

** Tags removed: verification-liberty-done verification-needed
** Tags added: verification-done
[Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..
Used neutron 1:2014.1.5-0ubuntu4 from trusty-proposed and confirmed the problem no longer occurs.

** Tags removed: verification-needed
** Tags added: verification-done
[Bug 1521958] Re: rabbit: starvation of connections for reply
Hello Corey, Thanks for pointing that out and doing all the review. I have assigned this bug to you for all your effort on liberty/kilo/juno/icehouse. As I remember, the liberty/kilo patches can just be cherry-picked from upstream; feel free to change the juno/icehouse patches if they need improvement. Thanks a lot :)
[Bug 1521958] Re: rabbit: starvation of connections for reply
** Changed in: cloud-archive/juno
   Assignee: Xiang Hui (xianghui) => Corey Bryant (corey.bryant)
[Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..
Corey, Thank you very much for your efforts.
[Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..
Corey, Thanks for looking at it. I will keep watching for new updates; please update here if there is any news. Thanks.
[Bug 1318721] Re: RPC timeout in all neutron agents
Hi Corey, Thanks for the fix. I am using 'sudo adt-run neutron -U --apt-pocket=proposed --- qemu adt-trusty-amd64-cloud.img -d -o /tmp/neutron/' to run the autopkgtest.
[Bug 1318721] Re: RPC timeout in all neutron agents
Hello all, After reproducing the test in my environment, the error reported is:

The following packages have unmet dependencies:
 neutron-plugin-hyperv : Depends: neutron-common (= 1:2014.1.5-0ubuntu1) but 1:2014.1.5-0ubuntu2 is to be installed
adt-run: & apt0t-hyperv-plugin: - - - - - - - - - - stderr - - - - - - - - - -
E: Unable to correct problems, you have held broken packages.

This seems unrelated to the neutron fix in the patch; it looks like a separate dependency problem.
[Bug 1318721] Re: RPC timeout in all neutron agents
Ah, maybe I could reproduce it by following https://wiki.ubuntu.com/ProposedMigration#autopkgtests
[Bug 1318721] Re: RPC timeout in all neutron agents
Hi Serge, Brian, Thanks for looking at it. Could the autopkgtests be re-run, or is there some other way to identify the failure? I cannot see how this simple fix could break the brocade-plugin test shown at http://autopkgtest.ubuntu.com/packages/n/neutron/trusty/armhf/; besides, if it passed against -proposed (which runs the neutron tests), it should work for -updates as well.

brocade-plugin FAIL non-zero exit status 1

Thanks guys.
[Bug 1521958] Re: rabbit: starvation of connections for reply
** Patch added: "trusty-icehouse-lp1521958.debdiff"
   https://bugs.launchpad.net/oslo.messaging/+bug/1521958/+attachment/4562666/+files/trusty-icehouse-lp1521958.debdiff
[Bug 1521958] Re: rabbit: starvation of connections for reply
Added a DEP-3 header.

** Patch added: "trusty-juno-lp1521958.debdiff"
   https://bugs.launchpad.net/oslo.messaging/+bug/1521958/+attachment/4562665/+files/trusty-juno-lp1521958.debdiff
[Bug 1318721] Re: RPC timeout in all neutron agents
Used neutron 1:2014.1.5-0ubuntu2 and oslo.messaging 1.3.0-0ubuntu1.4 from trusty-proposed and confirmed the problem no longer occurs.

** Tags removed: verification-needed
** Tags added: verification-done
[Bug 1318721] Re: RPC timeout in all neutron agents
** Patch added: "backport-1318721-trusty-icehouse-neutron.debdiff"
   https://bugs.launchpad.net/neutron/+bug/1318721/+attachment/4546093/+files/backport-1318721-trusty-icehouse-neutron.debdiff
[Bug 1318721] Re: RPC timeout in all neutron agents
** Patch added: "backport-1318721-trusty-icehouse.debdiff"
   https://bugs.launchpad.net/neutron/+bug/1318721/+attachment/4546092/+files/backport-1318721-trusty-icehouse.debdiff
[Bug 1318721] Re: RPC timeout in all neutron agents
** Patch added: "backport-1318721-trusty-juno.debdiff"
   https://bugs.launchpad.net/neutron/+bug/1318721/+attachment/4546091/+files/backport-1318721-trusty-juno.debdiff
[Bug 1318721] Re: RPC timeout in all neutron agents
** Branch linked: lp:~xianghui/ubuntu/trusty/oslo.messaging/lp1318721
** Branch unlinked: lp:ubuntu/trusty-proposed/oslo.messaging
[Bug 1318721] Re: RPC timeout in all neutron agents
** Branch linked: lp:~xianghui/ubuntu/trusty/neutron/lp1318721
[Bug 1318721] Re: RPC timeout in all neutron agents
** Branch linked: lp:~xianghui/ubuntu/trusty/oslo.messaging/juno-lp1318721
[Bug 1318721] Re: RPC timeout in all neutron agents
** Branch linked: lp:ubuntu/trusty-proposed/oslo.messaging
[Bug 1318721] Re: RPC timeout in all neutron agents
Neutron uses its own kombu-based RPC code rather than the oslo library, so the fix is backported to neutron's kombu implementation as well.

** Patch added: "neutron-trusty-icehouse.debdiff"
   https://bugs.launchpad.net/neutron/+bug/1318721/+attachment/4536168/+files/neutron-trusty-icehouse.debdiff
[Bug 1318721] Re: RPC timeout in all neutron agents
** Patch added: "trusty-icehouse.debdiff"
   https://bugs.launchpad.net/neutron/+bug/1318721/+attachment/4536166/+files/trusty-icehouse.debdiff
[Bug 1318721] Re: RPC timeout in all neutron agents
** Patch added: "trusty-juno.debdiff"
   https://bugs.launchpad.net/neutron/+bug/1318721/+attachment/4536162/+files/trusty-juno.debdiff
[Bug 1318721] Re: RPC timeout in all neutron agents
** Description changed: whitespace-only re-indentation of the traceback in the description, which reads:

In the logs the first traceback that happens is this:

[-] Unexpected exception occurred 1 time(s)... retrying.
Traceback (most recent call last):
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/excutils.py", line 62, in inner_func
    return infunc(*args, **kwargs)
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 741, in _consumer_thread
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 732, in consume
    @excutils.forever_retry_uncaught_exceptions
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 660, in iterconsume
    try:
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 590, in ensure
    def close(self):
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 531, in reconnect
    # to return an error not covered by its transport
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 513, in _connect
    Will retry up to self.max_retries number of times.
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 150, in reconnect
    use the callback passed during __init__()
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", line 508, in declare
    self.queue_bind(nowait)
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", line 541, in queue_bind
    self.binding_arguments, nowait=nowait)
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", line 551, in bind_to
    nowait=nowait)
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py", line 1003, in queue_bind
    (50, 21), # Channel.queue_bind_ok
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py", line 68, in wait
    return self.dispatch_method(method_sig, args, content)
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py", line 86, in dispatch_method
    return amqp_method(self, args)
  File "/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py", line 241, in _close
    reply_code, reply_text, (class_id, method_id), ChannelError,

(The quoted source lines do not match the frame names because the installed sources had drifted from the running code.)
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
** Description changed:

- This bug is seperate from bug
- https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1384532 to focus
- on the error.
+ [ENV]
+ Ubuntu 15.04
+ Linux 3.19.0-30-generic
+ libvirtd (libvirt) 1.2.12
+ QEMU emulator version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.7)
+ qemu-kvm 1:2.2+dfsg-5expubuntu9.7
+ qemu-system-x86 1:2.2+dfsg-5expubuntu9.7
+ libvirt-bin 1.2.12-0ubuntu14.3
+ nova installed from git source stable/liberty
+ The cloud archive packages come from official vivid.
+
+ [Note]
+ It is not just me on 15.04 who hits this issue: anyone trying to use Ubuntu to deploy OVS-DPDK enabled VMs is blocked here, and Ubuntu trusty is a known affected version as well.
+ It looks like an AppArmor-related issue.
+
+ [OVS-DISCUSS]
+ http://openvswitch.org/pipermail/discuss/2015-August/018560.html
+
+ This bug is separate from bug https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1384532 so we can focus on the error.
+ I have been redeploying the environment; this time the error is not about kvm-spice, it is qemu-system-x86_64 hitting the same problem.
+ Because /usr/bin/qemu-system-x86_64 is a binary, I am not able to put strace inside it. I also don't know why /usr/bin/kvm-spice was
+ suddenly replaced by /usr/bin/qemu-system-x86_64.
2015-11-05 15:36:15.491 DEBUG nova.compute.utils [req-b292f304-014b-479f-af5d-38b96309f78f admin admin] [instance: 3dceb341-643d-492a-8a47-8154da341c02] internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-3dceb341-643d-492a-8a47-8154da341c02' for '/usr/bin/qemu-system-x86_64': No such file or directory from (pid=12236) notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:284

2015-11-05 15:36:15.492 DEBUG nova.compute.manager [req-b292f304-014b-479f-af5d-38b96309f78f admin admin] [instance: 3dceb341-643d-492a-8a47-8154da341c02] Build of instance 3dceb341-643d-492a-8a47-8154da341c02 was re-scheduled: internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-3dceb341-643d-492a-8a47-8154da341c02' for '/usr/bin/qemu-system-x86_64': No such file or directory

Let me know what the next step is for further analysis.
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
@Serge "this happens to you always, or only occasionally? Always on the same host?"

Yes, it always happens, and not just to me: it happens to anyone using Ubuntu to deploy OVS-DPDK enabled VMs, so it is fairly critical. Discussion email: http://openvswitch.org/pipermail/discuss/2015-August/018560.html

The difference when you try to spawn VMs from an xml file might be that, for OVS-DPDK, we use the qemu vhost-user feature and hugepages.
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
@sean You are correct; I am uploading the fresh libvirt xml file and the relevant part of the nova-compute logs.

** Attachment added: "instance_spawn_failed_log"
   https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1513367/+attachment/4531275/+files/instance_spawn_failed_log
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
@Chuck There is no libvirt xml file because the VM ultimately failed to spawn; the content in the bug description is the exact error reported on the nova-compute node. Thanks.
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
** Tags added: dpdk
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
@Serge Hello, are the uploaded logs enough for you? Let me know if you need more. Thanks.
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
** Attachment added: "1513367-20151107.tar.gz"
   https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1513367/+attachment/4514742/+files/1513367-20151107.tar.gz
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
All the outputs are uploaded into 1513367-20151107.tar.gz.
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
@Serge After wrapping qemu-system-x86_64 with strace, the error is as below:

2015-11-06 15:55:12.681 ERROR nova.compute.manager [req-b2e4d8e4-70d2-40b7-814c-409ae1720729 None None] Error updating resources for node panghua-CS24-TY: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin /usr/bin/qemu-system-x86_64 -help) unexpected exit status 1:
execve("/usr/bin/qemu-system-x86_64", ["/usr/bin/qemu-system-x86_64", "-help"], [/* 3 vars */]) = 0
brk(0) = 0x7f6b6d25
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f6b6bf56000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=95906, ...}) = 0
mmap(NULL, 95906, PROT_READ, MAP_PRIVATE, 4, 0) = 0x7f6b6bf3e000
close(4) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 4
read(4, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\v\2\0\0\0\0\0"..., 832) = 832
fstat(4, {st_mode=

I also spawned a VM with security_driver set to None in qemu; the instance-0027.log output below is the successful one. Comparing it with instance-001e.log, basically only the uuids and names differ, with the same smbios configuration.
2015-11-06 09:29:41.882+: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-x86_64 -name instance-0027 -S -machine pc-i440fx-utopic,accel=kvm,usb=off -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -object memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=on,size=2048M,id=ram-node0,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0,memdev=ram-node0 -uuid 6c80c9ec-4445-4101-99c9-6339cb2f56a9 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=12.0.0,serial=e87d7510-5766-e35e-8016-ebeb55d7deff,uuid=6c80c9ec-4445-4101-99c9-6339cb2f56a9,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0027.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0, addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/6c80c9ec-4445-4101-99c9-6339cb2f56a9/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/opt/stack/data/nova/instances/6c80c9ec-4445-4101-99c9-6339cb2f56a9/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu5392206b-dc -netdev type=vhost-user,id=hostnet0,chardev=charnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:e5:41:f1,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/6c80c9ec-4445-4101-99c9-6339cb2f56a9/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:0 -k en-us -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
char device redirected to /dev/pts/41 (label charserial1)
2015-11-06T09:29:42.486509Z qemu-system-x86_64: -netdev type=vhost-user,id=hostnet0,chardev=charnet0: chardev "charnet0" went up

Is there any way to test AppArmor with libvirt? Thanks.
[Bug 1384532] Re: Unable to set AppArmor profile for /usr/bin/kvm-spice
@Serge I have opened a new bug, https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1513367, to focus on this error; let's discuss there. Thanks a lot.
[Bug 1513367] Re: qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
** Attachment added: "1513367.tar.gz"
   https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1513367/+attachment/4513272/+files/1513367.tar.gz
[Bug 1513367] [NEW] qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled
Public bug reported:

This bug is separate from bug https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1384532 so we can focus on the error. I have been redeploying the environment; this time the error is not about kvm-spice, it is qemu-system-x86_64 hitting the same problem. Because /usr/bin/qemu-system-x86_64 is a binary, I am not able to put strace inside it. I also don't know why /usr/bin/kvm-spice was suddenly replaced by /usr/bin/qemu-system-x86_64.

2015-11-05 15:36:15.491 DEBUG nova.compute.utils [req-b292f304-014b-479f-af5d-38b96309f78f admin admin] [instance: 3dceb341-643d-492a-8a47-8154da341c02] internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-3dceb341-643d-492a-8a47-8154da341c02' for '/usr/bin/qemu-system-x86_64': No such file or directory from (pid=12236) notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:284

2015-11-05 15:36:15.492 DEBUG nova.compute.manager [req-b292f304-014b-479f-af5d-38b96309f78f admin admin] [instance: 3dceb341-643d-492a-8a47-8154da341c02] Build of instance 3dceb341-643d-492a-8a47-8154da341c02 was re-scheduled: internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-3dceb341-643d-492a-8a47-8154da341c02' for '/usr/bin/qemu-system-x86_64': No such file or directory

Let me know what the next step is for further analysis.

** Affects: libvirt (Ubuntu)
   Importance: Undecided
   Status: New
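For anyone triaging similar reports, the per-VM profile name in the error line can be pulled out and mapped to the on-disk profile path that virt-aa-helper writes under /etc/apparmor.d/libvirt on Ubuntu. A minimal shell sketch (the error line is quoted from the log above; the sed expression is illustrative):

```shell
# Extract the per-VM AppArmor profile name from the libvirt error and
# build the path where the generated profile lives on disk.
err="libvirt: error : unable to set AppArmor profile 'libvirt-3dceb341-643d-492a-8a47-8154da341c02' for '/usr/bin/qemu-system-x86_64': No such file or directory"
profile=$(printf '%s\n' "$err" | sed -n "s/.*AppArmor profile '\([^']*\)'.*/\1/p")
echo "$profile"
echo "/etc/apparmor.d/libvirt/$profile"
```

Comparing that path with what is actually loaded in the kernel (not just present on disk) is the useful next check, since the error can fire even when the file exists.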
[Bug 1384532] Re: Unable to set AppArmor profile for /usr/bin/kvm-spice
@Serge Sorry, I was travelling for the OpenStack summit last week. I am testing now and will open a new bug following your instructions to make it clearer.
[Bug 1384532] Re: Unable to set AppArmor profile for /usr/bin/kvm-spice
Hi Serge,

Thanks for taking a look. I worked around it by setting security_driver to None in /etc/libvirt/qemu.conf, but I am still trying to find the right fix.

Same error:

2015-10-23 15:40:41.821 DEBUG nova.compute.utils [req-f039b3d1-f34f-4f7e-b983-78d730393e6e admin admin] [instance: b0a88061-45b6-417f-9c99-c6f07c115210] internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-b0a88061-45b6-417f-9c99-c6f07c115210' for '/usr/bin/kvm-spice': No such file or directory from (pid=12748) notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:284

2015-10-23 15:40:41.822 DEBUG nova.compute.manager [req-f039b3d1-f34f-4f7e-b983-78d730393e6e admin admin] [instance: b0a88061-45b6-417f-9c99-c6f07c115210] Build of instance b0a88061-45b6-417f-9c99-c6f07c115210 was re-scheduled: internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-b0a88061-45b6-417f-9c99-c6f07c115210' for '/usr/bin/kvm-spice': No such file or directory

2015-10-23 15:40:41.432 TRACE nova.compute.manager [instance: b0a88061-45b6-417f-9c99-c6f07c115210] libvirtError: internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-b0a88061-45b6-417f-9c99-c6f07c115210' for '/usr/bin/kvm-spice': No such file or directory

-> But libvirt-b0a88061-45b6-417f-9c99-c6f07c115210 actually exists:

$ sudo ls /etc/apparmor.d/libvirt/libvirt-b0a88061-45b6-417f-9c99-c6f07c115210
libvirt-b0a88061-45b6-417f-9c99-c6f07c115210 libvirt-b0a88061-45b6-417f-9c99-c6f07c115210.files

I also debugged kvm-spice; the suspicious part is the last line:

exec /usr/bin/qemu-system-x86_64 "${args[@]}"

which finally translates into:

exec /usr/bin/qemu-system-x86_64 -S -no-user-config -nodefaults -nographic -M none -qmp unix:/var/lib/libvirt/qemu/capabilities.monitor.sock,server,nowait -pidfile /var/lib/libvirt/qemu/capabilities.pidfile -daemonize

-> My guess is that kvm-spice doesn't have permission to access /var/lib/libvirt/qemu:

$ sudo ls -l /var/lib/libvirt/qemu/
total 16
drwxr-xr-x 3 root root 4096 Oct 10 09:55 channel
drwxr-xr-x 2 root root 4096 Oct 9 17:13 dump
srwxrwxr-x 1 libvirt-qemu kvm 0 Oct 22 18:22 instance-0028.monitor
srwxrwxr-x 1 libvirt-qemu kvm 0 Oct 22 21:24 instance-0029.monitor
drwxr-xr-x 2 libvirt-qemu kvm 4096 Oct 9 17:13 save
drwxr-xr-x 2 libvirt-qemu kvm 4096 Oct 9 17:13 snapshot

For the VMs successfully spawned with the security_driver workaround, the instance-xxx.monitor sockets are created, so I wonder whether the problem is that kvm-spice has no permission to create the monitor there, or something else.

libvirt.log and the failed VM's qemu.log are attached. Thank you!

** Attachment added: "kvm-spice-error.tar.gz"
   https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1384532/+attachment/4502945/+files/kvm-spice-error.tar.gz
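The wrapper chain quoted above (kvm-spice ends in `exec /usr/bin/qemu-system-x86_64 "${args[@]}"`) can be reproduced in a scratch directory to see what the process ultimately execs; one reason a confined wrapper can fail is that the AppArmor profile must cover the final exec target, not just the wrapper. All paths below are illustrative stand-ins, not the real binaries:

```shell
# Build a fake emulator and a kvm-spice-style wrapper that execs it,
# mirroring the exec pattern of the real /usr/bin/kvm script.
d=$(mktemp -d)
cat > "$d/qemu-real" <<'EOF'
#!/bin/sh
echo "execd qemu with args: $*"
EOF
chmod +x "$d/qemu-real"
cat > "$d/kvm-spice" <<EOF
#!/bin/sh
# like the real wrapper: pass the collected args through, then exec
exec "$d/qemu-real" "\$@"
EOF
chmod +x "$d/kvm-spice"
"$d/kvm-spice" -S -nographic
```

Running the wrapper shows the fake emulator receiving the arguments, which is exactly the hand-off the per-VM profile has to allow for the real binaries.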
[Bug 1384532] Re: Poor error reporting when cd file not found.
Hi, I have met exactly the same issue on Ubuntu vivid with OpenStack VMs.

Failed message: "unable to set AppArmor profile 'libvirt-b5de9738-608d-44df-99e7-596f907dcff2' for '/usr/bin/kvm-spice': No such file or directory"

But the profile actually exists:

$ ll /etc/apparmor.d/libvirt/libvirt-b5de9738-608d-44df-99e7-596f907dcff2
libvirt-b5de9738-608d-44df-99e7-596f907dcff2 libvirt-b5de9738-608d-44df-99e7-596f907dcff2.files

$ ls -l /usr/bin/kvm-spice
lrwxrwxrwx 1 root root 3 Sep 24 20:25 /usr/bin/kvm-spice -> kvm
$ ls -l /usr/bin/kvm
-rwxr-xr-x 1 root root 811 Oct 15 23:03 /usr/bin/kvm

I don't understand what 'It folder name contained chars: "- = { }". After changing the folder name to "Windows Server 2012 R2", the vm started.' means, or how it worked around the problem; my VM is a cirros image, so it seems unrelated.

Other people have hit this problem too, but they only worked around it temporarily, either by disabling AppArmor or by purging it. Please take a look and point me to the right solution. Thanks a lot.

** Summary changed:

- Poor error reporting when cd file not found.
+ Unable to set AppArmor profile for /usr/bin/kvm-spice
[Bug 1473109] Re: uio_pci_generic is not available in trusty kernel package for ppc64el
modprobe failed in my Ubuntu trusty environment; can anyone help take a look?

[OS]
# uname -a
Linux juju-xianghui-machine-11 3.13.0-63-generic #103-Ubuntu SMP Fri Aug 14 21:42:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[TEST]
# modprobe uio_pci_generic
modprobe: FATAL: Module uio_pci_generic not found.
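Before reaching for modprobe, it can help to confirm whether the module is shipped for the running kernel at all. A small sketch, assuming the standard /lib/modules layout (has_module is a hypothetical helper, not a system command):

```shell
# Succeed if a module file for the given name exists under the given
# module tree (modules may be compressed, hence the .ko* glob).
has_module() {  # usage: has_module <module-tree-dir> <module-name>
  find "$1" -name "$2.ko*" 2>/dev/null | grep -q .
}

if has_module "/lib/modules/$(uname -r)" uio_pci_generic; then
  echo "uio_pci_generic is shipped for this kernel"
else
  echo "uio_pci_generic is missing; modprobe will fail as above"
fi
```

If the file is absent, the FATAL error above is expected regardless of modprobe options, and the fix has to come from the kernel package.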
[Bug 1341496] Re: corosync hangs inside libqb
Hi guys, I couldn't find libqb0 in trusty-backports; can anyone point me to the link, or is it still in progress? Thanks.
[Bug 1429832] [NEW] Can't create gre and vxlan type tenant network once deployed
Public bug reported:

[OS + Charm version]
trusty-icehouse

[Tool]
lp:~ost-maintainers/openstack-charm-testing/trunk

[Steps]
1. juju-deployer -c next.yaml -d trusty-icehouse
2. ./configure
3. create a vxlan tenant network

[Result]
Creating a vxlan tenant network failed.

[Analysis]
The definition of 'overlay-network-type' in the neutron-api config.yaml is shown below. It defaults to 'gre' and, as described, currently takes a single value. However, tunnel_types can be a list in the neutron ml2_conf.ini, such as 'tunnel_types = gre,vxlan', so 'overlay-network-type' should accept a list like 'gre vxlan'.

overlay-network-type:
  default: gre
  type: string
  description: |
    Overlay network type to use choose one of:
    .
    gre vxlan

Affected files:
- neutron-api: config.yaml, get_overlay_network_type() in neutron-api-context.py, neutron-api-hooks.py
- neutron-gateway: quantum_context.py

** Affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Assignee: Edward Hope-Morley (hopem)
   Status: New

** Affects: neutron-gateway (Juju Charms Collection)
   Importance: Undecided
   Assignee: Edward Hope-Morley (hopem)
   Status: New

** Tags: cts openstack
** Tags added: cts openstack
** Also affects: neutron-gateway (Ubuntu)
   Importance: Undecided
   Status: New
** Changed in: neutron-api (Juju Charms Collection)
   Assignee: (unassigned) => Edward Hope-Morley (hopem)
** No longer affects: neutron-gateway (Ubuntu)
** Also affects: neutron-gateway (Juju Charms Collection)
   Importance: Undecided
   Status: New
** Changed in: neutron-gateway (Juju Charms Collection)
   Assignee: (unassigned) => Edward Hope-Morley (hopem)
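For comparison, the agent-side neutron setting that the report says already accepts multiple values looks like this in ml2_conf.ini; this is a sketch, and the exact section layout and chosen type drivers may differ between releases and deployments:

```ini
[ml2]
type_drivers = flat,gre,vxlan
tenant_network_types = gre,vxlan

[agent]
tunnel_types = gre,vxlan
```

The charm's single-valued overlay-network-type option cannot express this comma-separated list, which is the gap the bug describes.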
[Bug 1405588] Re: database connection failed (Protocol error)
** Also affects: quantum-gateway (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: nova-compute (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Changed in: quantum-gateway (Juju Charms Collection)
   Importance: Undecided => Critical

** Changed in: nova-compute (Juju Charms Collection)
   Importance: Undecided => Critical

** Also affects: neutron-openvswitch (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Changed in: neutron-openvswitch (Juju Charms Collection)
   Importance: Undecided => Critical

** Changed in: neutron-openvswitch (Juju Charms Collection)
   Assignee: (unassigned) => Hua Zhang (zhhuabj)
[Bug 1405588] Re: database connection failed (Protocol error)
** Description changed:

Versions:
root@juju-xh-machine-5:~# dpkg -l|grep openvswitch
ii openvswitch-common 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch common components
ii openvswitch-switch 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch switch implementations
root@juju-xh-machine-5:~# uname -a
Linux juju-precise-machine-5 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@juju-xh-machine-5:~# cat /etc/issue
Ubuntu 14.04.1 LTS \n \l

+ root@juju-precise-machine-5:/var/log/neutron# dpkg -l|grep neutron
+ ii neutron-common 1:2014.1.3-0ubuntu1.1 all Neutron is a virtual network service for Openstack - common
+ ii neutron-plugin-ml2 1:2014.1.3-0ubuntu1.1 all Neutron is a virtual network service for Openstack - ML2 plugin
+ ii neutron-plugin-openvswitch-agent 1:2014.1.3-0ubuntu1.1 all Neutron is a virtual network service for Openstack - Open vSwitch plugin agent
+ ii python-neutron 1:2014.1.3-0ubuntu1.1 all Neutron is a virtual network service for Openstack - Python

Errors:
root@juju-xh-machine-5:~# ovs-vsctl show
2014-12-25T09:26:56Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (Protocol error)
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Protocol error)

root@juju-precise-machine-5:~# tail -f /var/log/neutron/openvswitch-agent.log
2014-12-25 09:37:03.990 20991 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)
2014-12-25 09:44:53.796 20991 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: 2014-12-25T09:44:53Z|1|fatal_signal|WARN|terminating with signal 15 (Terminated)

Workaround:
restart openvswitch-switch

But ovs-vsctl will hit this error again every few minutes, which causes neutron-plugin-openvswitch-agent to keep reporting errors since ovs-vsctl is broken.
+ The root cause is:
+ The neutron agent calls the ovs monitor to get interface info, but the ovs monitor process is respawned every respawn_interval because the command 'sudo neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport --format=json' produces the error output 'sudo: unable to resolve host juju-precise-machine-5'. After several respawns the ovs db socket has been reconnected many times, some of those connections are not released properly, and the open socket count reaches its maximum limit, at which point connecting to the ovsdb server fails.
+
+ Currently, one possible fix in code is for the openvswitch charm to add the hostname to the 127.0.0.1 or localhost line in /etc/hosts, to avoid the 'sudo: unable to resolve host juju-precise-machine-5' error.
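The /etc/hosts fix described above can be sketched as follows. To keep the sketch harmless it edits a scratch copy rather than the real /etc/hosts; applying the same appended line to /etc/hosts as root is the actual fix:

```shell
# Append "127.0.0.1 <hostname>" to a scratch hosts file when the
# hostname is not already present, mirroring the proposed charm fix.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n' > "$hosts"
h=$(hostname)
grep -qw "$h" "$hosts" || printf '127.0.0.1 %s\n' "$h" >> "$hosts"
cat "$hosts"
```

Once the hostname resolves locally, sudo stops emitting 'unable to resolve host', so the ovs monitor respawn loop that leaks the ovsdb sockets no longer triggers.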
[Bug 1405588] Re: database connection failed (Protocol error)
** Description changed:

Versions:
root@juju-xh-machine-5:~# dpkg -l|grep openvswitch
ii openvswitch-common 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch common components
ii openvswitch-switch 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch switch implementations
root@juju-xh-machine-5:~# uname -a
Linux juju-precise-machine-5 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@juju-xh-machine-5:~# cat /etc/issue
Ubuntu 14.04.1 LTS \n \l

Errors:
root@juju-xh-machine-5:~# ovs-vsctl show
2014-12-25T09:26:56Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (Protocol error)
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Protocol error)

root@juju-precise-machine-5:~# tail -f /var/log/neutron/openvswitch-agent.log
2014-12-25 09:37:03.990 20991 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)
+ 2014-12-25 09:44:53.796 20991 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: 2014-12-25T09:44:53Z|1|fatal_signal|WARN|terminating with signal 15 (Terminated)

Workaround:
restart openvswitch-switch

But ovs-vsctl will hit this error again every few minutes, which causes neutron-plugin-openvswitch-agent to keep reporting errors since ovs-vsctl is broken.
[Bug 1405588] Re: database connection failed (Protocol error)
** Description changed:

+ Versions:
root@juju-xh-machine-5:~# dpkg -l|grep openvswitch
ii openvswitch-common 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch common components
ii openvswitch-switch 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch switch implementations
root@juju-xh-machine-5:~# uname -a
Linux juju-precise-machine-5 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@juju-xh-machine-5:~# cat /etc/issue
Ubuntu 14.04.1 LTS \n \l

+ Errors:
root@juju-xh-machine-5:~# ovs-vsctl show
2014-12-25T09:26:56Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (Protocol error)
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Protocol error)
+ root@juju-precise-machine-5:~# tail -f /var/log/neutron/openvswitch-agent.log
+ 2014-12-25 09:37:03.990 20991 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)

- workaround:
+ Workaround:
restart openvswitch-switch

- But ovs-vsctl will get this error every several minutes.
+ But ovs-vsctl hits this error again every few minutes, which keeps
+ neutron-plugin-openvswitch-agent reporting errors since ovs-vsctl fails.
[Bug 1405588] [NEW] database connection failed (Protocol error)
Public bug reported:

Versions:
root@juju-xh-machine-5:~# dpkg -l|grep openvswitch
ii openvswitch-common 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch common components
ii openvswitch-switch 2.0.2-0ubuntu0.14.04.1 amd64 Open vSwitch switch implementations
root@juju-xh-machine-5:~# uname -a
Linux juju-precise-machine-5 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@juju-xh-machine-5:~# cat /etc/issue
Ubuntu 14.04.1 LTS \n \l

Errors:
root@juju-xh-machine-5:~# ovs-vsctl show
2014-12-25T09:26:56Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (Protocol error)
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Protocol error)
root@juju-precise-machine-5:~# tail -f /var/log/neutron/openvswitch-agent.log
2014-12-25 09:37:03.990 20991 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)

Workaround:
restart openvswitch-switch

But ovs-vsctl hits this error again every few minutes, which keeps neutron-plugin-openvswitch-agent reporting errors since ovs-vsctl fails.

** Affects: openvswitch (Ubuntu)
Importance: Undecided
Status: New
[Bug 1380747] Re: SST on IPv6 fails for xtrabackup-v2
TCP-LISTEN is not restricted to IPv4: socat can be told which address family to listen on by exporting SOCAT_DEFAULT_LISTEN_IP={4,6}, so there is no need to create a separate wsrep_sst_xtrabackup-v2-ipv6 plugin.

xianghui@Thinkpad-x240:~/workplace/github$ export SOCAT_DEFAULT_LISTEN_IP=6
xianghui@Thinkpad-x240:~/workplace/github$ socat -u TCP-LISTEN:,reuseaddr stdio
xianghui@Thinkpad-x240:~/workplace/github$ telnet ::
Trying ::...
Connected to ::.
Escape character is '^]'.

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1380747

Title: SST on IPv6 fails for xtrabackup-v2

To manage notifications about this bug go to: https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1380747/+subscriptions

-- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
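The SOCAT_DEFAULT_LISTEN_IP behaviour can be illustrated with a short sketch. The socat and telnet invocations are shown as comments rather than executed, since they need socat installed and a free port; the port number 14567 is an arbitrary example, not taken from the original report.

```shell
# Sketch: select socat's default listen address family via the environment.
# 4 = IPv4 (binds 0.0.0.0), 6 = IPv6 (binds [::]).
export SOCAT_DEFAULT_LISTEN_IP=6
echo "default listen family: $SOCAT_DEFAULT_LISTEN_IP"

# With the variable exported, a plain TCP-LISTEN binds to IPv6, e.g.
# (port 14567 is an arbitrary example):
#   socat -u TCP-LISTEN:14567,reuseaddr STDIO
# and the listener can then be reached over IPv6 with:
#   telnet ::1 14567
```

Because the variable only changes socat's default, no SST script needs to hard-code an address family; IPv4 clusters keep working with the default of 4.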
[Bug 1312507] Re: rabbitmq-server fails to start on a IPv6-Only environment/epmd is not IPv6 enabled
Verified, works for me, thanks. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1312507 Title: rabbitmq-server fails to start on a IPv6-Only environment/epmd is not IPv6 enabled To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/erlang/+bug/1312507/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1204456] Re: neutron ml2 plugin test failures
Hi, when I run run_tests.sh on Ubuntu it always fails as below, which looks like the failure above. Is there any way to fix this? It has blocked my commit. Thanks in advance.

root@ubuntu:/opt/stack/neutron# ./run_tests.sh
==
FAIL: process-returncode process-returncode
--
_StringException: Binary content: traceback (test/plain; charset="utf8")
==
FAIL: process-returncode process-returncode
--
_StringException: Binary content: traceback (test/plain; charset="utf8")
==
FAIL: neutron.tests.unit.ml2.test_agent_scheduler.Ml2AgentSchedulerTestCase.test_network_add_to_dhcp_agent
--
_StringException
==
FAIL: neutron.tests.unit.ml2.test_security_group.TestMl2SecurityGroups.test_create_security_group_rule_tcp_protocol_as_number
--
_StringException
--

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1204456

Title: neutron ml2 plugin test failures

To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1204456/+subscriptions

-- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs