[Yahoo-eng-team] [Bug 1524004] [NEW] linuxbridge agent does not wire ports for non-traditional device owners
Public bug reported: A recent change [1] made the wiring overly restrictive, limiting it to the network: and neutron: device owner prefixes; this prevented external systems that use other device owners from getting their ports wired. [1] https://review.openstack.org/#/c/193485/ ** Affects: neutron Importance: Medium Assignee: Mark McClain (markmcclain) Status: In Progress ** Tags: linuxbridge ** Description changed: - A recent change made the wiring super restrictive to network: and + A recent change [1] made the wiring super restrictive to network: and neutron: this resulted in external systems that use other device owners from getting wired. + + [1] https://review.openstack.org/#/c/193485/ -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1524004 Title: linuxbridge agent does not wire ports for non-traditional device owners Status in neutron: In Progress Bug description: A recent change [1] made the wiring overly restrictive, limiting it to the network: and neutron: device owner prefixes; this prevented external systems that use other device owners from getting their ports wired. [1] https://review.openstack.org/#/c/193485/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1524004/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
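The restriction described in this report can be sketched as a simple prefix filter. This is an illustrative stand-in for the agent's wiring check, not the actual neutron code; the helper name and prefix tuple are assumptions:

```python
# Hypothetical sketch of the over-restrictive filter described in this bug.
# After change [1], only "traditional" device_owner prefixes get wired, so
# ports created by external systems with other owners are silently skipped.
TRADITIONAL_PREFIXES = ("network:", "neutron:")

def should_wire(device_owner, restrictive=True):
    """Return True if the agent would wire a port with this device_owner."""
    if restrictive:
        # Post-change behavior: restricted to network:/neutron: owners.
        return device_owner.startswith(TRADITIONAL_PREFIXES)
    # Pre-change behavior: any non-empty owner is wired.
    return bool(device_owner)

print(should_wire("network:dhcp"))                           # True
print(should_wire("external-system:lb"))                     # False: skipped
print(should_wire("external-system:lb", restrictive=False))  # True
```

The middle case is the regression: a port whose owner falls outside the two prefixes is never wired, even though it was before the change.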
[Yahoo-eng-team] [Bug 1236439] Re: switch to use hostnames like nova breaks upgrades of l3-agent
** Changed in: neutron Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1236439 Title: switch to use hostnames like nova breaks upgrades of l3-agent Status in ubuntu-cloud-archive: Won't Fix Status in neutron: Won't Fix Status in Release Notes for Ubuntu: Fix Released Status in neutron package in Ubuntu: Won't Fix Status in neutron source package in Saucy: Won't Fix Bug description: Commit https://github.com/openstack/neutron/commit/140029ebd006c116ee684890dd70e13b7fc478ec switched to using socket.gethostname() for the name of neutron agents; this has the unfortunate side effect with the l3-agent that all router services are no longer scheduled on an active agent, resulting in floating IP and access outages. It looks like this will affect upgrades from grizzly->havana as well:

ubuntu@churel:/etc/maas$ quantum agent-list
+--------------------------------------+--------------------+--------------------------+-------+----------------+
| id                                   | agent_type         | host                     | alive | admin_state_up |
+--------------------------------------+--------------------+--------------------------+-------+----------------+
| 02ad1175-209c-4125-889a-e390a15ecd50 | Open vSwitch agent | caipora.1ss.qa.lexington | xxx   | True           |
| 191d4757-05f6-4170-a78d-d6a3c1b9265e | Open vSwitch agent | canaima                  | :-)   | True           |
| 306cbfbb-8879-4d64-ac26-db007f9113a9 | DHCP agent         | cofgod.1ss.qa.lexington  | xxx   | True           |
| 32081821-1e94-4274-993b-b0bf2714e5ac | Open vSwitch agent | ciguapa.1ss.qa.lexington | xxx   | True           |
| 5697a23a-712e-4de3-a218-2a6c177bf555 | Open vSwitch agent | chakora                  | :-)   | True           |
| 5ea5e207-1da0-47e3-9a7e-984589b11300 | Open vSwitch agent | cuegle.1ss.qa.lexington  | xxx   | True           |
| 71e31354-76e7-4640-9a5b-368678bc22d0 | Open vSwitch agent | canaima.1ss.qa.lexington | xxx   | True           |
| 7267e3d2-d9bf-4e57-8d19-803aab636f36 | Open vSwitch agent | chakora.1ss.qa.lexington | xxx   | True           |
| 75ff2563-f5a5-4df3-aa19-fe8310146c10 | Open vSwitch agent | cuegle                   | :-)   | True           |
| 875de52e-d6c3-4e82-8cbd-269831ff00bc | Open vSwitch agent | cofgod                   | :-)   | True           |
| 9afaf6f2-2756-4863-b5d0-7faba502e878 | L3 agent           | cofgod                   | :-)   | True           |
| a81ac370-a318-42e4-9279-eef2b6141644 | Open vSwitch agent | cofgod.1ss.qa.lexington  | xxx   | True           |
| d6e6332e-822a-438e-8613-16013da825e0 | L3 agent           | cofgod.1ss.qa.lexington  | xxx   | True           |
| d9712755-03b3-4326-99c1-3bf66c878dc6 | Open vSwitch agent | ciguapa                  | :-)   | True           |
| dadf284c-ac8f-4dc1-9ba4-73182e5f1911 | DHCP agent         | cofgod                   | :-)   | True           |
| ed07ff1a-dcca-4bbd-b026-1296bb90f89b | Open vSwitch agent | caipora                  | :-)   | True           |
+--------------------------------------+--------------------+--------------------------+-------+----------------+

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1236439/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
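The scheduling breakage in the report above comes down to the agent reporting under a different name than the one its routers were bound to. A minimal sketch of the mismatch; `short_hostname` is a hypothetical helper standing in for what socket.gethostname() returns on these hosts, it is not neutron code:

```python
def short_hostname(fqdn):
    # On the hosts in the agent-list above, socket.gethostname() returns
    # the short name, while agents previously registered under the FQDN.
    return fqdn.split(".", 1)[0]

old_id = "cofgod.1ss.qa.lexington"   # agent identity before the commit
new_id = short_hostname(old_id)      # agent identity after the commit

# Routers scheduled against old_id now appear to live on a dead agent:
assert new_id == "cofgod"
assert new_id != old_id
```

This matches the agent-list output, where each machine appears twice: once under its FQDN (dead, `xxx`) and once under its short hostname (alive, `:-)`).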
[Yahoo-eng-team] [Bug 1380787] [NEW] remove XML support
Public bug reported: XML support has been deprecated for Icehouse[1] and Juno[2]. It is time to remove it in Kilo. [1] https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse#Upgrade_Notes_6 [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_6 ** Affects: neutron Importance: High Assignee: Mark McClain (markmcclain) Status: In Progress ** Tags: neutron-core xml -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1380787 Title: remove XML support Status in OpenStack Neutron (virtual network service): In Progress Bug description: XML support has been deprecated for Icehouse[1] and Juno[2]. It is time to remove it in Kilo. [1] https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse#Upgrade_Notes_6 [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_6 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1380787/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1378855] [NEW] juno capstone migration is missing
Public bug reported: The Juno capstone migration is missing. ** Affects: neutron Importance: Critical Assignee: Mark McClain (markmcclain) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1378855 Title: juno capstone migration is missing Status in OpenStack Neutron (virtual network service): In Progress Bug description: The Juno capstone migration is missing. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1378855/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1374398] Re: Non admin user can update router port
If a tenant wants to change the characteristics of a router port, they have to clean up the resulting mess. I don't see this as a bug. ** Changed in: neutron Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1374398 Title: Non admin user can update router port Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: A non-admin user can update a router's port http://paste.openstack.org/show/115575/. This can cause problems because servers won't get information about this change until the next DHCP request, so connectivity to and from this network will be lost. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1374398/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1253993] Re: OVS agent loop slowdown
** Changed in: neutron Status: In Progress => Fix Released ** Changed in: neutron Milestone: None => juno-3 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1253993 Title: OVS agent loop slowdown Status in OpenStack Neutron (virtual network service): Fix Released Bug description: This is probably a regression of bug #1194438. From several checks with parallel jobs, it seems the agent loop becomes slower across iterations. The slowness is not due to plugin RPC calls, which might be optimized anyway, but is mainly due to applying iptables/ovs configurations. This consistently occurs in tempest tests with parallelism enabled. Example logs here: http://logs.openstack.org/20/57420/5/experimental/check-tempest-devstack-vm-neutron-isolated-parallel/54c6db3/logs
1 - OVS Agent - Iteration #33 starts at 1:36:03
2 - Nova API - Server POST request at 1:39:42
3 - Neutron Server - Neutron POST /ports at 1:39:44
4 - OVS Agent - OVS DB monitor detects instance's tap at 1:39:45
5 - OVS Agent - Iteration #33 completes processing device filters (security groups) at 1:39:51
6 - Nova API - Server ACTIVE at 1:39:55
7 - Neutron Server - Floating IP POST at 1:39:55
8 - Neutron L3/VPN Agent - Floating IP ready at 1:40:37 (42 seconds; this should be another investigation)
9 - OVS Agent - Iteration #33 completes processing devices at 1:40:16 NOTE: The added device was not processed because the iteration started before the device was detected
10 - TEMPEST - TIMEOUT ON TEST at 1:40:56 - connection failed because internal port not wired
11 - OVS Agent - Iteration #33 completes processing ancillary ports at 1:42:07
12 - OVS Agent - Iteration #34 starts at 1:42:08
13 - OVS Agent - Iteration #34 completes processing device filters at 1:43:35
14 - The wiring of the interface for the server is not captured by the logs as the tempest test completed in the meanwhile.
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1253993/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1288641] Re: ethernet name parsing doesn't support ethernet alias
Aliases are for addressing; the underlying device is still the same. The proper way to simulate multiple external networks is to create a bridge and use tap devices. ** Changed in: neutron Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1288641 Title: ethernet name parsing doesn't support ethernet alias Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: I set up a testbed for Neutron ML2 LinuxBridge. I'd like to set up two external networks implemented via ethernet aliases: eth0:0 and eth0:1. I bind the ethernet names to the provider networks phynet1 and phynet2. However, it is not working. The LinuxBridge agent terminates and throws an exception: 2014-03-06 16:41:48.675 6793 ERROR neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent [-] Parsing physical_interface_mappings failed: Invalid mapping: 'phynet1:eth0:0'. Agent terminated! To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1288641/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
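For reference, the parse failure comes from treating ':' as the single separator in each physical_interface_mappings entry. A hedged sketch (not the actual linuxbridge agent code) of why 'phynet1:eth0:0' is rejected, with an alias-tolerant parse shown for comparison:

```python
def parse_mapping_naive(entry):
    # Assumes exactly one ':' per entry, so an interface alias like
    # "eth0:0" makes the split yield three pieces and parsing fails,
    # as in the "Invalid mapping: 'phynet1:eth0:0'" error above.
    parts = entry.split(":")
    if len(parts) != 2:
        raise ValueError("Invalid mapping: %r" % entry)
    return parts[0], parts[1]

def parse_mapping_alias_aware(entry):
    # Split only on the first ':' so the interface part may itself
    # contain one (an alias). Illustrative only; the project chose
    # bridges + tap devices instead of supporting aliases.
    physnet, _, iface = entry.partition(":")
    if not physnet or not iface:
        raise ValueError("Invalid mapping: %r" % entry)
    return physnet, iface

print(parse_mapping_alias_aware("phynet1:eth0:0"))  # ('phynet1', 'eth0:0')
```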
[Yahoo-eng-team] [Bug 1329017] [NEW] new pep8/flake8/hacking requires code update
Public bug reported: A new version of hacking has been released into the global requirements for OpenStack [1]. The newer version has additional stylistic checks which the current code does not match. The commit was amended to add exceptions for the rules, but we should enable the rules as soon as possible. [1] https://review.openstack.org/#/c/96823/ ** Affects: neutron Importance: Low Assignee: Mark McClain (markmcclain) Status: Triaged -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1329017 Title: new pep8/flake8/hacking requires code update Status in OpenStack Neutron (virtual network service): Triaged Bug description: A new version of hacking has been released into the global requirements for OpenStack [1]. The newer version has additional stylistic checks which the current code does not match. The commit was amended to add exceptions for the rules, but we should enable the rules as soon as possible. [1] https://review.openstack.org/#/c/96823/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1329017/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1292397] Re: Difference in index creation for foreign key in MySQL and Postgres
I believe this is not a bug, but a difference in the implementation of the databases. Unless there is a demonstrated performance reason, I do not see the need to add code to make the databases look the same. Postgres creates an index on the primary key of the related table. ** Changed in: neutron Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1292397 Title: Difference in index creation for foreign key in MySQL and Postgres Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: In some migrations, such as 39cf3f799352_fwaas_havana_2_model, 569e98a8132b_metering, and 1064e98b7917_nec_pf_port_del, a ForeignKeyConstraint is created with a name, but in the models this name isn't mentioned. MySQL creates an index when it creates a ForeignKeyConstraint. So the MySQL database contains special indexes that do not exist in the PostgreSQL database http://paste.openstack.org/show/76906/. This causes a difference between models and migrations and between the database content for MySQL and PostgreSQL. If the ForeignKey is created without a name, there is no such problem. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1292397/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1304374] [NEW] rpc_workers config option needs warning that it is experimental in Icehouse
Public bug reported: rpc_workers is an experimental option in Icehouse and we should note it as such in the configuration file. ** Affects: neutron Importance: High Assignee: Carl Baldwin (carl-baldwin) Status: In Progress ** Tags: neutron-core ** Changed in: neutron Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1304374 Title: rpc_workers config option needs warning that it is experimental in Icehouse Status in OpenStack Neutron (virtual network service): In Progress Bug description: rpc_workers is an experimental option in Icehouse and we should note it as such in the configuration file. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1304374/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1292598] Re: rootwrap massive overhead limits neutron scalability
This should be fixed via the blueprint process rather than a bug. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1292598 Title: rootwrap massive overhead limits neutron scalability Status in OpenStack Neutron (virtual network service): Invalid Bug description: Permission elevation via rootwrap has a massive impact on the network nodes, increasing setup time 2.5 times compared to plain sudo. [2] [3] A network node with 192 private networks + 192 routers takes: - 24 minutes to set up with rootwrap - 10 minutes to set up with just sudo The need for rootwrap is clear from the security point of view, but an optimization is required from the performance point of view [1] Appendix: [1] https://etherpad.openstack.org/p/neutron-agent-exec-performance [2] mail list discussions: a) http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html b) http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html [3] [root@rhos4-neutron2 ~]# time neutron-rootwrap --help /usr/bin/neutron-rootwrap: No command specified real 0m0.309s user 0m0.128s sys 0m0.037s [root@rhos4-neutron2 ~]# time python -c 'import sys;sys.exit(0)' real 0m0.057s user 0m0.016s sys 0m0.011s [root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0' real 0m0.032s user 0m0.010s sys 0m0.019s [root@rhos4-neutron2 ~]# echo 'int main() { return 0; }' > test.c [root@rhos4-neutron2 ~]# gcc test.c -o test [root@rhos4-neutron2 ~]# time ./test # to time process invocation on this machine real 0m0.000s user 0m0.000s sys 0m0.000s To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1292598/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1299145] Re: Documentation Bug, BigSwitch should be Big Switch
** Tags removed: icehouse-backport-potential ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Changed in: neutron Milestone: None => juno-1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1299145 Title: Documentation Bug, BigSwitch should be Big Switch Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: BigSwitch references should be changed to Big Switch. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1299145/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1297875] Re: some tests call called_once_with_args with no assert, those lines are ignored
** Changed in: neutron Milestone: None => juno-1 ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1297875 Title: some tests call called_once_with_args with no assert, those lines are ignored Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: A few tests use called_once_with_args instead of mock's assert_called_once_with_args without checking the result. That means that we're not asserting for that to happen. Those tests need to be fixed. [majopela@f20-devstack neutron]$ grep .called_once_with * -R | grep -v assert neutron/tests/unit/test_dhcp_agent.py: disable.called_once_with_args(network.id) neutron/tests/unit/test_dhcp_agent.py: uuid5.called_once_with(uuid.NAMESPACE_DNS, 'localhost') neutron/tests/unit/test_post_mortem_debug.py: mock_print_exception.called_once_with(*exc_info) neutron/tests/unit/test_db_migration.py: mock_open.write.called_once_with('a') neutron/tests/unit/test_agent_netns_cleanup.py: ovs_br_cls.called_once_with('br-int', conf.AGENT.root_helper) neutron/tests/unit/test_metadata_agent.py: self.eventlet.wsgi.server.called_once_with( To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1297875/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
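The reason the lines in this report silently pass is that Mock auto-creates attributes: a misspelled assertion such as called_once_with is just another recorded call that always succeeds. A small demonstration using standard unittest.mock, not tied to the neutron tests themselves:

```python
from unittest import mock

m = mock.Mock()
m(1, 2)

# Typo'd "assertion": Mock happily auto-creates the attribute, calls it,
# and returns another Mock. Nothing is actually checked.
result = m.called_once_with(99)
assert isinstance(result, mock.Mock)

# The real assertion catches the argument mismatch.
raised = False
try:
    m.assert_called_once_with(99)
except AssertionError:
    raised = True
assert raised
```

This is why the grep above finds the bug: any `.called_once_with(...)` call that is not prefixed with `assert_` is a no-op.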
[Yahoo-eng-team] [Bug 1295854] Re: NSX plugin does not support pagination
** Changed in: neutron Milestone: None => juno-1 ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1295854 Title: NSX plugin does not support pagination Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: This should be rectified. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1295854/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1294526] Re: floatingip's id should be used instead of floatingip itself
** Changed in: neutron Importance: Medium => High ** Changed in: neutron Milestone: None => icehouse-rc2 ** Tags removed: icehouse-backport-potential ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Changed in: neutron Milestone: icehouse-rc2 => juno-1 ** Changed in: neutron/icehouse Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1294526 Title: floatingip's id should be used instead of floatingip itself Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L476: this id should be used as the hash key instead of the floating IP itself. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1294526/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
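The point of the report above is that a floating IP's id is the only field guaranteed stable across updates, so agent-side state should be keyed by it. An illustrative sketch with hypothetical data, not the l3 agent's actual structures:

```python
# Two RPC snapshots of the same floating IP; the mutable fields differ.
fip_before = {"id": "fip-123", "floating_ip_address": "10.0.0.5",
              "status": "DOWN"}
fip_after = dict(fip_before, status="ACTIVE")

# Keying by the immutable id keeps lookups stable across updates...
state_by_id = {fip_before["id"]: "configured"}
assert state_by_id[fip_after["id"]] == "configured"

# ...whereas any key derived from the full (mutable) record changes
# whenever status or any other field changes:
assert tuple(sorted(fip_before.items())) != tuple(sorted(fip_after.items()))
```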
[Yahoo-eng-team] [Bug 1295448] Re: Big Switch Restproxy unit test unnecessarily duplicates tests
** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1295448 Title: Big Switch Restproxy unit test unnecessarily duplicates tests Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: The VIF type tests currently have separate classes that all extend the ports test class. This means in addition to testing the VIF changing logic, it's unnecessarily exercising a lot of code that is not impacted by the VIF type. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1295448/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1302007] Re: l3 agent uses method concatenating two constants
** Changed in: neutron Milestone: None => juno-1 ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1302007 Title: l3 agent uses method concatenating two constants Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: The l3 agent has a method ns_name() that concatenates the constant NS_PREFIX with router.id, which doesn't change during the router's lifecycle. This is inefficient; the namespace name can be an instance attribute. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1302007/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
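The suggested fix can be sketched as computing the name once at construction time. The prefix value and class shape below are assumptions for illustration, not the agent's actual code:

```python
NS_PREFIX = "qrouter-"  # assumed namespace prefix, for illustration only

class RouterInfo:
    def __init__(self, router_id):
        self.router_id = router_id
        # router.id never changes over the router's lifecycle, so
        # concatenate once here instead of in a ns_name() method that
        # rebuilds the same string on every call.
        self.ns_name = NS_PREFIX + router_id

ri = RouterInfo("abcd-1234")
assert ri.ns_name == "qrouter-abcd-1234"
```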
[Yahoo-eng-team] [Bug 1301105] Re: Second firewall creation returns 500
** Changed in: neutron Milestone: None => juno-1 ** Changed in: neutron Importance: Low => Medium ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1301105 Title: Second firewall creation returns 500 Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: Second firewall creation returns 500. Rejecting the second firewall is expected behavior of the firewall reference implementation, but an internal server error should not be returned. It is some kind of quota error, and 409 looks appropriate. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1301105/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1300628] Re: BigSwitch ML2 driver uses portbindingsports table
** Also affects: neutron/icehouse Importance: Undecided Status: New ** Tags removed: icehouse-backport-potential ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1300628 Title: BigSwitch ML2 driver uses portbindingsports table Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: The Big Switch ML2 driver references the deprecated portbindings_db in the port location tracking code. This needs to be removed because it's pulling in the entire portbinding_db schema for one small function. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1300628/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1298459] Re: db migration: some tables are not created for bigswitch plugin and bigswitch mech driver
** Changed in: neutron Importance: Low => Medium ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Tags removed: icehouse-rc-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1298459 Title: db migration: some tables are not created for bigswitch plugin and bigswitch mech driver Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: For the bigswitch plugin, the networkdhcpagentbindings and subnetroutes tables are not created by db migration. http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_PLUGIN/logs/screen/screen-q-svc.log.gz For the bigswitch ML2 mech driver, neutron_ml2.consistencyhashes is not created by db migration. http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_ML2/logs/screen/screen-q-svc.log.gz To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1298459/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1301093] Re: Fixing API command for Arista Mechanism Driver
** Changed in: neutron Milestone: None => juno-1 ** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Milestone: None => icehouse-rc2 ** Tags removed: icehouse-rc-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1301093 Title: Fixing API command for Arista Mechanism Driver Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Bug description: A minor change is made to the Arista API between the ML2 Driver and the back-end. This bug addresses this change to align the Icehouse release with Arista EOS releases. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1301093/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1300808] Re: Invalid version number '['3.13.0']' with Ubuntu trusty
** Changed in: neutron
   Milestone: None => juno-1

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

** Tags removed: icehouse-rc-potential

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300808

Title:
  Invalid version number '['3.13.0']' with Ubuntu trusty

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  New

Bug description:
  Running latest devstack on Trusty Ubuntu, the Neutron agent fails to
  start with the following backtrace:

  2014-04-01 13:49:54.227 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Running command: ['uname', '-r'] from (pid=14900) create_process /opt/stack/neutron/neutron/agent/linux/utils.py:48
  2014-04-01 13:49:54.232 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Command: ['uname', '-r'] Exit code: 0 Stdout: '3.13.0-20-generic\n' Stderr: '' from (pid=14900) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
  2014-04-01 13:49:54.233 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Running command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--version'] from (pid=14900) create_process /opt/stack/neutron/neutron/agent/linux/utils.py:48
  2014-04-01 13:49:54.348 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--version'] Exit code: 0 Stdout: 'ovs-vsctl (Open vSwitch) 2.0.1\nCompiled Feb 23 2014 14:42:32\n' Stderr: '' from (pid=14900) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
  2014-04-01 13:49:54.349 DEBUG neutron.agent.linux.ovs_lib [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Checking OVS version for VXLAN support installed klm version is None, installed Linux version is ['3.13.0'], installed user version is 2.0 from (pid=14900) check_ovs_vxlan_version /opt/stack/neutron/neutron/agent/linux/ovs_lib.py:541
  2014-04-01 13:49:54.350 CRITICAL neutron [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] invalid version number '['3.13.0']'
  2014-04-01 13:49:54.350 TRACE neutron Traceback (most recent call last):
  2014-04-01 13:49:54.350 TRACE neutron   File "/usr/local/bin/neutron-openvswitch-agent", line 10, in <module>
  2014-04-01 13:49:54.350 TRACE neutron     sys.exit(main())
  2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1360, in main
  2014-04-01 13:49:54.350 TRACE neutron     agent = OVSNeutronAgent(**agent_config)
  2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 214, in __init__
  2014-04-01 13:49:54.350 TRACE neutron     self._check_ovs_version()
  2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 232, in _check_ovs_version
  2014-04-01 13:49:54.350 TRACE neutron     ovs_lib.check_ovs_vxlan_version(self.root_helper)
  2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/agent/linux/ovs_lib.py", line 550, in check_ovs_vxlan_version
  2014-04-01 13:49:54.350 TRACE neutron     'kernel', 'VXLAN')
  2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/agent/linux/ovs_lib.py", line 507, in _compare_installed_and_required_version
  2014-04-01 13:49:54.350 TRACE neutron     installed_kernel_version) = dist_version.StrictVersion(
  2014-04-01 13:49:54.350 TRACE neutron   File "/usr/lib/python2.7/distutils/version.py", line 40, in __init__
  2014-04-01 13:49:54.350 TRACE neutron     self.parse(vstring)
  2014-04-01 13:49:54.350 TRACE neutron   File "/usr/lib/python2.7/distutils/version.py", line 107, in parse
  2014-04-01 13:49:54.350 TRACE neutron     raise ValueError, "invalid version number '%s'" % vstring
  2014-04-01 13:49:54.350 TRACE neutron ValueError: invalid version number '['3.13.0']'
  2014-04-01 13:49:54.350 TRACE neutron

  q-agt failed to start

  This is due to bug #1291535 and this fix:
  https://git.openstack.org/cgit/openstack/neutron/commit/?id=b2f65d9d447ddf2caf3b9c754bd00a5148bdf12c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300808/+subscriptions
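The failure above is easy to reproduce outside the agent: the kernel version taken from `uname -r` ends up wrapped in a list, and the string form of that list ("['3.13.0']") is not a parseable version number. A minimal sketch of the kind of sanitization needed (illustrative only; `parse_kernel_version` is a hypothetical helper, not the actual ovs_lib fix):

```python
import re

def parse_kernel_version(uname_r):
    # Extract the numeric prefix of `uname -r` output, e.g.
    # '3.13.0-20-generic' -> '3.13.0'.  The buggy code instead handed a
    # *list* like ['3.13.0'] to distutils' StrictVersion, whose string
    # form "['3.13.0']" cannot be parsed as a version number.
    m = re.match(r'(\d+(?:\.\d+)+)', uname_r)
    return m.group(1) if m else None

print(parse_kernel_version('3.13.0-20-generic'))  # -> 3.13.0
```

Feeding the cleaned string (rather than a list) to a strict version parser avoids the ValueError shown in the traceback.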
[Yahoo-eng-team] [Bug 1302304] Re: tox pep8 warns /bin/bash not installed in testenv
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Tags removed: icehouse-rc-potential

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302304

Title:
  tox pep8 warns /bin/bash not installed in testenv

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  New

Bug description:
  $ tox -e pep8
  pep8 develop-inst-nodeps: /home/henry/Dev/neutron
  pep8 runtests: commands[0] | flake8
  pep8 runtests: commands[1] | neutron-db-manage check_migration
  pep8 runtests: commands[2] | bash -c "find neutron -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null"
  WARNING: test command found but not installed in testenv
    cmd: /bin/bash
    env: /home/henry/Dev/neutron/.tox/pep8
  Maybe forgot to specify a dependency?

  This is due to the tox.ini changes in
  https://review.openstack.org/84234

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302304/+subscriptions
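For context, tox emits this warning whenever a command resolves outside the virtualenv without being declared. A hedged sketch of the kind of tox.ini stanza that silences it (the option is `whitelist_externals` in tox releases of this era; later tox versions rename it `allowlist_externals` — the exact commands here mirror the log above, not the actual merged change):

```ini
[testenv:pep8]
commands =
  flake8
  neutron-db-manage check_migration
  bash -c "find neutron -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null"
; declare bash so tox does not warn about a command outside the testenv
whitelist_externals = bash
```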
[Yahoo-eng-team] [Bug 1236372] Re: Router without active ports fails to be deleted, and reports wrong error message
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

** Changed in: neutron/icehouse
   Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1236372

Title:
  Router without active ports fails to be deleted, and reports wrong
  error message

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  In Progress

Bug description:
  Version
  ===
  Havana, RHEL, neutron+ovs, python-neutron-2013.2-0.3.3.b3.el6ost

  Description
  ===
  It's impossible to delete a router while it still has inactive ports.
  The error message states that the router still has active ports.

  # neutron router-port-list router1
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                            |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | 052a48e4-0868-4675-bef7-8f763dd697b4 |      | fa:16:3e:60:73:8d | {"subnet_id": "c5d63940-71e8-4338-865e-f5364fbe4e78", "ip_address": "10.35.214.1"}   |
  | 065cef02-a949-45c9-b3df-e005dbf96c9a |      | fa:16:3e:a6:6d:d8 | {"subnet_id": "044bcc05-f37b-4d1d-a700-c91c4381fbc8", "ip_address": "10.35.211.1"}   |
  | 7a020243-90e5-439d-90fb-ec96b07843e7 |      | fa:16:3e:04:0c:1f | {"subnet_id": "4081fbca-3e59-4be5-a98e-3c9e0d13d3a6", "ip_address": "10.35.212.1"}   |
  | 7af56958-674e-472b-8dbe-09b60501a6e6 |      | fa:16:3e:1a:07:a4 | {"subnet_id": "ef8e7c03-f17f-4c3c-9afe-252aca1283fd", "ip_address": "10.35.170.102"} |
  | f034cb8a-2a09-4d41-b46c-a08fe208461e |      | fa:16:3e:de:9d:32 | {"subnet_id": "cca4edc7-2872-4c1e-a270-3b0beb60f421", "ip_address": "10.35.213.1"}   |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+

  # for i in `neutron router-port-list router1 | grep subnet_id | cut -d -f 2` ; do neutron port-show $i ; done | grep status
  | status | ACTIVE |
  | status | ACTIVE |
  | status | ACTIVE |
  | status | ACTIVE |
  | status | ACTIVE |

  # neutron router-delete router1
  Router 727edf71-f637-402e-9fd9-767c372922ee still has active ports

  # for i in `neutron router-port-list router1 | grep subnet_id | cut -d -f 2` ; do neutron port-update $i --admin_state_up False ; done | grep status

  # for i in `neutron router-port-list router1 | grep subnet_id | cut -d -f 2` ; do neutron port-show $i ; done | grep status
  | status | DOWN |
  | status | DOWN |
  | status | DOWN |
  | status | DOWN |
  | status | DOWN |

  # neutron router-delete router1
  Router 727edf71-f637-402e-9fd9-767c372922ee still has active ports

  From /var/log/neutron/server.log

  2013-10-07 16:39:40.429 2341 ERROR neutron.api.v2.resource [-] delete failed
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in resource
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 432, in delete
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource     obj_deleter(request.context, id, **kwargs)
  2013-10
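The mismatch reported above (deletion refused with an "active ports" message even when every port is DOWN) comes down to a check that counts remaining ports without consulting their status. A minimal sketch of that behaviour, with illustrative names rather than the real `l3_db` code:

```python
class RouterInUse(Exception):
    pass

def delete_router(router_id, ports):
    # Reported behaviour: *any* remaining router port blocks deletion,
    # and the error text claims the ports are "active" even when their
    # status is DOWN.
    if ports:  # port status is never consulted here
        raise RouterInUse("Router %s still has active ports" % router_id)

down_only = [{'id': 'p1', 'status': 'DOWN'}]
try:
    delete_router('727edf71', down_only)
except RouterInUse as exc:
    print(exc)  # complains about "active ports" despite the DOWN status
```

A status-aware check (or an accurate error message) would resolve the confusion the reporter describes.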
[Yahoo-eng-team] [Bug 1288582] Re: Rename DEVICE_OWNER_COMPUTE_PROBE to not start with compute
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron
   Milestone: icehouse-rc2 => juno-1

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

** Changed in: neutron/icehouse
   Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288582

Title:
  Rename DEVICE_OWNER_COMPUTE_PROBE to not start with compute

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  In Progress

Bug description:
  Neutron assumes that all ports with a device_owner that starts with
  'compute' are ports created by nova compute. Thus, when the debug
  agent creates a port with device_owner = compute:probe, the nova
  callback feature tells nova when this port is wired even though nova
  does not know about it (this doesn't really matter; we just log an
  error). This patch renames this device_owner so that it does not
  start with compute, to avoid this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1288582/+subscriptions
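The assumption described above is a plain prefix check. Sketched here with illustrative constants and names (not the actual neutron source):

```python
DEVICE_OWNER_COMPUTE_PREFIX = 'compute'

def is_nova_port(device_owner):
    # Any device_owner starting with 'compute' is treated as a
    # Nova-created port and triggers the nova callback notifications.
    return device_owner.startswith(DEVICE_OWNER_COMPUTE_PREFIX)

assert is_nova_port('compute:nova')
# The debug agent's probe matches too, which is what the rename avoids:
assert is_nova_port('compute:probe')
assert not is_nova_port('network:probe')
```

Renaming the probe's device_owner to something outside the 'compute' prefix keeps the debug agent's ports out of the Nova notification path.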
[Yahoo-eng-team] [Bug 1245208] Re: LBaaS: unit tests for radware plugin driver should not employ multithreading
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245208

Title:
  LBaaS: unit tests for radware plugin driver should not employ
  multithreading

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The Radware plugin driver uses a task queue to perform interaction
  with the backend device. Several operations, such as lbaas object
  deletion, are performed in an async manner. In the unit test code,
  actual object deletion happens in a separate thread, which leads to a
  need for tricks like putting the test thread to sleep. Such unit
  tests are not reliable and could lead to failures that are hard to
  catch or debug.

  The unit test code should be refactored in such a way that it uses a
  single-threaded strategy to perform driver operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245208/+subscriptions
[Yahoo-eng-team] [Bug 1289139] Re: Configurable values are not printed in the OFA agent log file
** Changed in: neutron
   Milestone: icehouse-rc2 => juno-1

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => In Progress

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289139

Title:
  Configurable values are not printed in the OFA agent log file

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  In Progress

Bug description:
  When the neutron-server starts up, it prints out the configurable
  values in the q-svc log. These values are useful in debugging
  problems. Such output is not in the OFA agent's log file.

  Related bug: 1285962

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289139/+subscriptions
[Yahoo-eng-team] [Bug 1291619] Re: Cisco VPN device drivers admin state not reported correctly
** Changed in: neutron
   Milestone: icehouse-rc2 => juno-1

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => In Progress

** Changed in: neutron/icehouse
   Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291619

Title:
  Cisco VPN device drivers admin state not reported correctly

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  In Progress

Bug description:
  Currently, this driver supports update of the VPN service, through
  which one can change the admin state to up or down. In addition, even
  though IPSec site-to-site connection update is not currently
  supported (one can do a delete/create), the user could create the
  connection with admin state down.

  When the service admin state is changed to down, the change does not
  happen in the device driver, and the status is not reported
  correctly. This is due to an issue with the plugin (bug 1291609
  created). If, later, another change occurs that causes a sync of the
  config, the connections on the VPN service will be deleted (the CSR
  REST API doesn't yet have support for admin down), but the status
  still will not be updated correctly. The configuration in OpenStack
  can get out of sync with the configuration on the CSR.

  If the IPSec site-to-site connection is created in admin down state,
  the underlying tunnel is not created (correct), but the status still
  shows PENDING_CREATE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291619/+subscriptions
[Yahoo-eng-team] [Bug 1291915] Re: neutron-netns-cleanup script doesn't work in icehouse/havana, code is broken
** Changed in: neutron
   Milestone: icehouse-rc2 => juno-1

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => In Progress

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron/icehouse
   Milestone: None => icehouse-rc2

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291915

Title:
  neutron-netns-cleanup script doesn't work in icehouse/havana, code is
  broken

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  In Progress

Bug description:
  1st) Some configuration options are not registered on the tool, but
  they're used in neutron.agent.linux.dhcp during execution

  $ neutron-netns-cleanup --debug --force --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  2014-03-12 14:55:44.791 INFO neutron.common.config [-] Logging enabled!
  2014-03-12 14:55:44.792 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'list'] from (pid=1785) create_process /opt/stack/neutron/neutron/agent/linux/utils.py:48
  2014-03-12 14:55:45.001 DEBUG neutron.agent.linux.utils [-] Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'list'] Exit code: 0 Stdout: 'qdhcp-65cb66de-82d0-407c-aa23-2c544528f0d2\nqrouter-acc5f724-a169-4ffc-9e81-f00d43954509\nqrouter-5ed23337-9538-4994-823f-c64720506e54\n' Stderr: '' from (pid=1785) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
  2014-03-12 14:55:47.006 ERROR neutron.agent.linux.dhcp [-] Error importing interface driver 'neutron.agent.linux.interface.OVSInterfaceDriver': no such option: ovs_use_veth
  Error importing interface driver 'neutron.agent.linux.interface.OVSInterfaceDriver': no such option: ovs_use_veth

  2nd) When we try to destroy a network, there's a dependency on the
  .namespace attribute of the network, that wasn't there before.

  Stderr: '' from (pid=1969) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
  2014-03-12 15:08:53.048 ERROR neutron.agent.netns_cleanup_util [-] Error unable to destroy namespace: qdhcp-65cb66de-82d0-407c-aa23-2c544528f0d2
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util Traceback (most recent call last):
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 131, in destroy_namespace
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     kill_dhcp(conf, namespace)
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 86, in kill_dhcp
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     dhcp_driver.disable()
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 181, in disable
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     self.device_manager.destroy(self.network, self.interface_name)
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 814, in destroy
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     self.driver.unplug(device_name, namespace=network.namespace)
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util AttributeError: 'FakeNetwork' object has no attribute 'namespace'
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util

  3rd) This error will happen because no plugin rpc connection is
  provided, and that's used in
  /opt/stack/neutron/neutron/agent/linux/dhcp.py as
  self.plugin.release_dhcp_port

  2014-03-13 12:00:07.880 ERROR neutron.agent.netns_cleanup_util [-] Error unable to destroy namespace: qdhcp-388a37af-556d-4f4c-98b4-0ba41f944e32
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util Traceback (most recent call last):
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 132, in destroy_namespace
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util     kill_dhcp(conf, namespace)
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 87, in kill_dhcp
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util     dhcp_driver.disable()
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 181, in disable
  2014-03-13 12:00:07.880 TRACE
[Yahoo-eng-team] [Bug 1301449] Re: ODL ML2 driver doesn't notify active/inactive ports
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301449

Title:
  ODL ML2 driver doesn't notify active/inactive ports

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  The nova-event-callback blueprint [1] implemented the notifications
  of active/down ports to Nova. Before effectively starting an
  instance, Nova compute waits for a VIF plugged notification from
  Neutron.

  I'm running ODL ML2 driver in a devstack using the master branch and
  I notice that the ODL driver doesn't notify back the Nova API. Hence
  with the default settings, the VM creation always fails. As a
  workaround, set the following parameters in your nova.conf:

  vif_plugging_timeout = 10
  vif_plugging_is_fatal = False

  With this configuration, I'm able to boot and connect to the
  instances but the Neutron ports are always reported as DOWN [2].

  [1] https://blueprints.launchpad.net/neutron/+spec/nova-event-callback
  [2] http://paste.openstack.org/show/74861/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301449/+subscriptions
[Yahoo-eng-team] [Bug 1289139] Re: Configurable values are not printed in the OFA agent log file
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289139

Title:
  Configurable values are not printed in the OFA agent log file

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  When the neutron-server starts up, it prints out the configurable
  values in the q-svc log. These values are useful in debugging
  problems. Such output is not in the OFA agent's log file.

  Related bug: 1285962

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289139/+subscriptions
[Yahoo-eng-team] [Bug 1236372] Re: Router without active ports fails to be deleted, and reports wrong error message
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1236372

Title:
  Router without active ports fails to be deleted, and reports wrong
  error message

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Version
  ===
  Havana, RHEL, neutron+ovs, python-neutron-2013.2-0.3.3.b3.el6ost

  Description
  ===
  It's impossible to delete a router while it still has inactive ports.
  The error message states that the router still has active ports.

  # neutron router-port-list router1
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                            |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | 052a48e4-0868-4675-bef7-8f763dd697b4 |      | fa:16:3e:60:73:8d | {"subnet_id": "c5d63940-71e8-4338-865e-f5364fbe4e78", "ip_address": "10.35.214.1"}   |
  | 065cef02-a949-45c9-b3df-e005dbf96c9a |      | fa:16:3e:a6:6d:d8 | {"subnet_id": "044bcc05-f37b-4d1d-a700-c91c4381fbc8", "ip_address": "10.35.211.1"}   |
  | 7a020243-90e5-439d-90fb-ec96b07843e7 |      | fa:16:3e:04:0c:1f | {"subnet_id": "4081fbca-3e59-4be5-a98e-3c9e0d13d3a6", "ip_address": "10.35.212.1"}   |
  | 7af56958-674e-472b-8dbe-09b60501a6e6 |      | fa:16:3e:1a:07:a4 | {"subnet_id": "ef8e7c03-f17f-4c3c-9afe-252aca1283fd", "ip_address": "10.35.170.102"} |
  | f034cb8a-2a09-4d41-b46c-a08fe208461e |      | fa:16:3e:de:9d:32 | {"subnet_id": "cca4edc7-2872-4c1e-a270-3b0beb60f421", "ip_address": "10.35.213.1"}   |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+

  # for i in `neutron router-port-list router1 | grep subnet_id | cut -d -f 2` ; do neutron port-show $i ; done | grep status
  | status | ACTIVE |
  | status | ACTIVE |
  | status | ACTIVE |
  | status | ACTIVE |
  | status | ACTIVE |

  # neutron router-delete router1
  Router 727edf71-f637-402e-9fd9-767c372922ee still has active ports

  # for i in `neutron router-port-list router1 | grep subnet_id | cut -d -f 2` ; do neutron port-update $i --admin_state_up False ; done | grep status

  # for i in `neutron router-port-list router1 | grep subnet_id | cut -d -f 2` ; do neutron port-show $i ; done | grep status
  | status | DOWN |
  | status | DOWN |
  | status | DOWN |
  | status | DOWN |
  | status | DOWN |

  # neutron router-delete router1
  Router 727edf71-f637-402e-9fd9-767c372922ee still has active ports

  From /var/log/neutron/server.log

  2013-10-07 16:39:40.429 2341 ERROR neutron.api.v2.resource [-] delete failed
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in resource
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 432, in delete
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource     obj_deleter(request.context, id, **kwargs)
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/l3_db.py", line 266, in delete_router
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource     raise l3.RouterInUse(router_id=id)
  2013-10-07 16:39:40.429 2341 TRACE neutron.api.v2.resource
[Yahoo-eng-team] [Bug 1301105] Re: Second firewall creation returns 500
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301105

Title:
  Second firewall creation returns 500

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Second firewall creation returns 500. Rejecting a second firewall is
  expected behavior of the firewall reference implementation, but an
  internal server error should not be returned. It is some kind of
  quota error, and 409 looks appropriate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301105/+subscriptions
[Yahoo-eng-team] [Bug 1301093] Re: Fixing API command for Arista Mechanism Driver
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301093

Title:
  Fixing API command for Arista Mechanism Driver

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  A minor change is made to the Arista API between ML2 Driver and the
  back-end. This bug addresses this change to align Icehouse release
  with Arista EOS releases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301093/+subscriptions
[Yahoo-eng-team] [Bug 1288582] Re: Rename DEVICE_OWNER_COMPUTE_PROBE to not start with compute
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288582

Title:
  Rename DEVICE_OWNER_COMPUTE_PROBE to not start with compute

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Neutron assumes that all ports with a device_owner that starts with
  'compute' are ports created by nova compute. Thus, when the debug
  agent creates a port with device_owner = compute:probe, the nova
  callback feature tells nova when this port is wired even though nova
  does not know about it (this doesn't really matter; we just log an
  error). This patch renames this device_owner so that it does not
  start with compute, to avoid this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1288582/+subscriptions
[Yahoo-eng-team] [Bug 1291619] Re: Cisco VPN device drivers admin state not reported correctly
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291619

Title:
  Cisco VPN device drivers admin state not reported correctly

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Currently, this driver supports update of the VPN service, through
  which one can change the admin state to up or down. In addition, even
  though IPSec site-to-site connection update is not currently
  supported (one can do a delete/create), the user could create the
  connection with admin state down.

  When the service admin state is changed to down, the change does not
  happen in the device driver, and the status is not reported
  correctly. This is due to an issue with the plugin (bug 1291609
  created). If, later, another change occurs that causes a sync of the
  config, the connections on the VPN service will be deleted (the CSR
  REST API doesn't yet have support for admin down), but the status
  still will not be updated correctly. The configuration in OpenStack
  can get out of sync with the configuration on the CSR.

  If the IPSec site-to-site connection is created in admin down state,
  the underlying tunnel is not created (correct), but the status still
  shows PENDING_CREATE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291619/+subscriptions
[Yahoo-eng-team] [Bug 1291915] Re: neutron-netns-cleanup script doesn't work in icehouse/havana, code is broken
** Changed in: neutron
   Milestone: juno-1 => icehouse-rc2

** No longer affects: neutron/icehouse

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291915

Title:
  neutron-netns-cleanup script doesn't work in icehouse/havana, code is
  broken

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  1st) Some configuration options are not registered on the tool, but
  they're used in neutron.agent.linux.dhcp during execution

  $ neutron-netns-cleanup --debug --force --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  2014-03-12 14:55:44.791 INFO neutron.common.config [-] Logging enabled!
  2014-03-12 14:55:44.792 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'list'] from (pid=1785) create_process /opt/stack/neutron/neutron/agent/linux/utils.py:48
  2014-03-12 14:55:45.001 DEBUG neutron.agent.linux.utils [-] Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'list'] Exit code: 0 Stdout: 'qdhcp-65cb66de-82d0-407c-aa23-2c544528f0d2\nqrouter-acc5f724-a169-4ffc-9e81-f00d43954509\nqrouter-5ed23337-9538-4994-823f-c64720506e54\n' Stderr: '' from (pid=1785) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
  2014-03-12 14:55:47.006 ERROR neutron.agent.linux.dhcp [-] Error importing interface driver 'neutron.agent.linux.interface.OVSInterfaceDriver': no such option: ovs_use_veth
  Error importing interface driver 'neutron.agent.linux.interface.OVSInterfaceDriver': no such option: ovs_use_veth

  2nd) When we try to destroy a network, there's a dependency on the
  .namespace attribute of the network, that wasn't there before.

  Stderr: '' from (pid=1969) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
  2014-03-12 15:08:53.048 ERROR neutron.agent.netns_cleanup_util [-] Error unable to destroy namespace: qdhcp-65cb66de-82d0-407c-aa23-2c544528f0d2
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util Traceback (most recent call last):
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 131, in destroy_namespace
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     kill_dhcp(conf, namespace)
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 86, in kill_dhcp
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     dhcp_driver.disable()
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 181, in disable
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     self.device_manager.destroy(self.network, self.interface_name)
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 814, in destroy
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util     self.driver.unplug(device_name, namespace=network.namespace)
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util AttributeError: 'FakeNetwork' object has no attribute 'namespace'
  2014-03-12 15:08:53.048 TRACE neutron.agent.netns_cleanup_util

  3rd) This error will happen because no plugin rpc connection is
  provided, and that's used in
  /opt/stack/neutron/neutron/agent/linux/dhcp.py as
  self.plugin.release_dhcp_port

  2014-03-13 12:00:07.880 ERROR neutron.agent.netns_cleanup_util [-] Error unable to destroy namespace: qdhcp-388a37af-556d-4f4c-98b4-0ba41f944e32
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util Traceback (most recent call last):
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 132, in destroy_namespace
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util     kill_dhcp(conf, namespace)
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/netns_cleanup_util.py", line 87, in kill_dhcp
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util     dhcp_driver.disable()
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 181, in disable
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util     self.device_manager.destroy(self.network, self.interface_name)
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 816, in destroy
  2014-03-13 12:00:07.880 TRACE neutron.agent.netns_cleanup_util     self.plugin.release_dhcp_port(network.id,
  2014-03-13 12:00:07.880
[Yahoo-eng-team] [Bug 1300628] Re: BigSwitch ML2 driver uses portbindingsports table
** No longer affects: neutron/icehouse ** Changed in: neutron Milestone: juno-1 => icehouse-rc2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1300628 Title: BigSwitch ML2 driver uses portbindingsports table Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: The Big Switch ML2 driver references the deprecated portbindings_db in the port location tracking code. This needs to be removed because it's pulling in the entire portbinding_db schema for one small function. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1300628/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1300808] Re: Invalid version number '['3.13.0']' with Ubuntu trusty
** Changed in: neutron Milestone: juno-1 => icehouse-rc2 ** No longer affects: neutron/icehouse -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1300808 Title: Invalid version number '['3.13.0']' with Ubuntu trusty Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: Running latest devstack on Trusty Ubuntu, the Neutron agent fails to start with the following backtrace:
2014-04-01 13:49:54.227 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Running command: ['uname', '-r'] from (pid=14900) create_process /opt/stack/neutron/neutron/agent/linux/utils.py:48
2014-04-01 13:49:54.232 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Command: ['uname', '-r'] Exit code: 0 Stdout: '3.13.0-20-generic\n' Stderr: '' from (pid=14900) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
2014-04-01 13:49:54.233 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Running command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--version'] from (pid=14900) create_process /opt/stack/neutron/neutron/agent/linux/utils.py:48
2014-04-01 13:49:54.348 DEBUG neutron.agent.linux.utils [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--version'] Exit code: 0 Stdout: 'ovs-vsctl (Open vSwitch) 2.0.1\nCompiled Feb 23 2014 14:42:32\n' Stderr: '' from (pid=14900) execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
2014-04-01 13:49:54.349 DEBUG neutron.agent.linux.ovs_lib [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] Checking OVS version for VXLAN support installed klm version is None, installed Linux version is ['3.13.0'], installed user version is 2.0 from (pid=14900) check_ovs_vxlan_version
/opt/stack/neutron/neutron/agent/linux/ovs_lib.py:541
2014-04-01 13:49:54.350 CRITICAL neutron [req-9f04e437-c667-40c2-8be4-aef96f4810de None None] invalid version number '['3.13.0']'
2014-04-01 13:49:54.350 TRACE neutron Traceback (most recent call last):
2014-04-01 13:49:54.350 TRACE neutron   File "/usr/local/bin/neutron-openvswitch-agent", line 10, in <module>
2014-04-01 13:49:54.350 TRACE neutron     sys.exit(main())
2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1360, in main
2014-04-01 13:49:54.350 TRACE neutron     agent = OVSNeutronAgent(**agent_config)
2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 214, in __init__
2014-04-01 13:49:54.350 TRACE neutron     self._check_ovs_version()
2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 232, in _check_ovs_version
2014-04-01 13:49:54.350 TRACE neutron     ovs_lib.check_ovs_vxlan_version(self.root_helper)
2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/agent/linux/ovs_lib.py", line 550, in check_ovs_vxlan_version
2014-04-01 13:49:54.350 TRACE neutron     'kernel', 'VXLAN')
2014-04-01 13:49:54.350 TRACE neutron   File "/opt/stack/neutron/neutron/agent/linux/ovs_lib.py", line 507, in _compare_installed_and_required_version
2014-04-01 13:49:54.350 TRACE neutron     installed_kernel_version) = dist_version.StrictVersion(
2014-04-01 13:49:54.350 TRACE neutron   File "/usr/lib/python2.7/distutils/version.py", line 40, in __init__
2014-04-01 13:49:54.350 TRACE neutron     self.parse(vstring)
2014-04-01 13:49:54.350 TRACE neutron   File "/usr/lib/python2.7/distutils/version.py", line 107, in parse
2014-04-01 13:49:54.350 TRACE neutron     raise ValueError, "invalid version number '%s'" % vstring
2014-04-01 13:49:54.350 TRACE neutron ValueError: invalid version number '['3.13.0']'
2014-04-01 13:49:54.350 TRACE neutron
q-agt failed
to start. This is due to bug #1291535 and this fix: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b2f65d9d447ddf2caf3b9c754bd00a5148bdf12c To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1300808/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
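The traceback shows the immediate cause: a list (['3.13.0']), not a plain version string, reached distutils' version parser. A minimal sketch of the safer parsing (function name hypothetical; the actual fix is in the linked commit):

```python
# Hypothetical sketch: derive a comparable version tuple from `uname -r`
# output such as '3.13.0-20-generic', instead of handing the parser a
# list like ['3.13.0'] (which produced "invalid version number").
def parse_kernel_version(uname_r):
    base = uname_r.split('-', 1)[0]          # -> '3.13.0'
    return tuple(int(part) for part in base.split('.'))

assert parse_kernel_version('3.13.0-20-generic') == (3, 13, 0)
# Tuples compare element-wise, so a minimum-version check is direct:
assert parse_kernel_version('3.13.0-20-generic') >= (2, 6, 32)
```

Tuple comparison sidesteps string-based version parsers entirely for simple dotted kernel releases.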
[Yahoo-eng-team] [Bug 1298459] Re: db migration: some tables are not created for bigswitch plugin and bigswitch mech driver
** No longer affects: neutron/icehouse ** Changed in: neutron Milestone: juno-1 => icehouse-rc2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1298459 Title: db migration: some tables are not created for bigswitch plugin and bigswitch mech driver Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: For bigswitch plugin, networkdhcpagentbindings and subnetroutes tables are not created by db migration. http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_PLUGIN/logs/screen/screen-q-svc.log.gz For bigswitch ML2 mech driver, neutron_ml2.consistencyhashes is not created by db migration. http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_ML2/logs/screen/screen-q-svc.log.gz To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1298459/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1299145] Re: Documentation Bug, BigSwitch should be Big Switch
** No longer affects: neutron/icehouse -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1299145 Title: Documentation Bug, BigSwitch should be Big Switch Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: BigSwitch references should be changed to Big Switch. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1299145/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1297875] Re: some tests call called_once_with_args with no assert, those lines are ignored
** No longer affects: neutron/icehouse ** Changed in: neutron Milestone: juno-1 => icehouse-rc2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1297875 Title: some tests call called_once_with_args with no assert, those lines are ignored Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: A few tests use called_once_with_args instead of mock's assert_called_once_with_args without checking the result. That means that we're not asserting for that to happen. Those tests need to be fixed.
[majopela@f20-devstack neutron]$ grep .called_once_with * -R | grep -v assert
neutron/tests/unit/test_dhcp_agent.py: disable.called_once_with_args(network.id)
neutron/tests/unit/test_dhcp_agent.py: uuid5.called_once_with(uuid.NAMESPACE_DNS, 'localhost')
neutron/tests/unit/test_post_mortem_debug.py: mock_print_exception.called_once_with(*exc_info)
neutron/tests/unit/test_db_migration.py: mock_open.write.called_once_with('a')
neutron/tests/unit/test_agent_netns_cleanup.py: ovs_br_cls.called_once_with('br-int', conf.AGENT.root_helper)
neutron/tests/unit/test_metadata_agent.py: self.eventlet.wsgi.server.called_once_with(
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1297875/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
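The pitfall behind this bug is easy to demonstrate: mock.Mock auto-creates any attribute you access, so a misspelled assertion is a call to a brand-new child mock that can never fail (very recent Pythons go so far as to raise AttributeError for this exact typo):

```python
from unittest import mock

m = mock.Mock()
m.do_work(42)

# The typo: "called_once_with" is not an assertion method. On older
# Pythons it silently creates and calls a child mock (asserting
# nothing); Python 3.12+ raises AttributeError for this known trap.
try:
    m.do_work.called_once_with(99)
    silent = True          # the bogus "assertion" passed silently
except AttributeError:
    silent = False         # newer mock refuses the misspelling

# The real assertion does catch the wrong argument.
try:
    m.do_work.assert_called_once_with(99)
    failed = False
except AssertionError:
    failed = True
assert failed
```

Either behavior makes the report's point: the `called_once_with` lines in those tests were asserting nothing.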
[Yahoo-eng-team] [Bug 1295854] Re: NSX plugin does not support pagination
** No longer affects: neutron/icehouse ** Changed in: neutron Milestone: juno-1 => icehouse-rc2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1295854 Title: NSX plugin does not support pagination Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: This should be rectified. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1295854/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1295448] Re: Big Switch Restproxy unit test unnecessarily duplicates tests
** No longer affects: neutron/icehouse -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1295448 Title: Big Switch Restproxy unit test unnecessarily duplicates tests Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: The VIF type tests currently have separate classes that all extend the ports test class. This means in addition to testing the VIF changing logic, it's unnecessarily exercising a lot of code that is not impacted by the VIF type. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1295448/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1302304] Re: tox pep8 warns /bin/bash not installed in testenv
** No longer affects: neutron/icehouse ** Tags added: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1302304 Title: tox pep8 warns /bin/bash not installed in testenv Status in OpenStack Neutron (virtual network service): Fix Committed Bug description:
$ tox -e pep8
pep8 develop-inst-nodeps: /home/henry/Dev/neutron
pep8 runtests: commands[0] | flake8
pep8 runtests: commands[1] | neutron-db-manage check_migration
pep8 runtests: commands[2] | bash -c find neutron -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null
WARNING: test command found but not installed in testenv
  cmd: /bin/bash
  env: /home/henry/Dev/neutron/.tox/pep8
Maybe forgot to specify a dependency?
This is due to the tox.ini changes in https://review.openstack.org/84234 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1302304/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
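The usual remedy for this tox warning (an assumption on my part; not verified against the linked review, which may have fixed it differently) is to declare bash as an expected external command in tox.ini:

```ini
[testenv:pep8]
# Declare bash as an intentional external command so tox stops warning
# that it is "found but not installed in testenv". (In modern tox the
# option is spelled allowlist_externals.)
whitelist_externals = bash
```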
[Yahoo-eng-team] [Bug 1294526] Re: floatingip's id should be used instead of floatingip itself
** No longer affects: neutron/icehouse -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1294526 Title: floatingip's id should be used instead of floatingip itself Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L476 this id should be used as hash key instead of floating ip itself To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1294526/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
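One reading of this report (data shapes here are hypothetical) is about key stability: the floating IP address can be re-associated during a router's lifetime, while the id never changes, so the id is the safe dict key:

```python
# Sketch: key the agent-side cache by the immutable 'id', not by the
# floating IP address, which can change while the entry lives on.
fip = {'id': 'fip-1', 'floating_ip_address': '10.0.0.5'}
cache = {fip['id']: fip['floating_ip_address']}

# When the address is re-associated, the entry is still found and
# updated under the same stable key.
fip['floating_ip_address'] = '10.0.0.9'
cache[fip['id']] = fip['floating_ip_address']
assert cache == {'fip-1': '10.0.0.9'}
```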
[Yahoo-eng-team] [Bug 1302007] Re: l3 agent uses method concatenating two constants
** Changed in: neutron Milestone: juno-1 => icehouse-rc2 ** No longer affects: neutron/icehouse -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1302007 Title: l3 agent uses method concatenating two constants Status in OpenStack Neutron (virtual network service): Fix Committed Bug description: The l3 agent has a method ns_name() that concatenates the constant NS_PREFIX with router.id, which doesn't change during the router's lifecycle. This is inefficient; the namespace name can be an instance attribute. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1302007/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
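The suggested fix can be sketched in a few lines (class shape hypothetical; 'qrouter-' is the prefix the l3 agent uses for router namespaces): compute the name once at construction instead of concatenating on every call.

```python
NS_PREFIX = 'qrouter-'

class RouterInfo(object):
    def __init__(self, router_id):
        self.router_id = router_id
        # Cached once: router_id never changes over the router's
        # lifecycle, so neither does the namespace name.
        self.ns_name = NS_PREFIX + router_id

ri = RouterInfo('abc-123')
assert ri.ns_name == 'qrouter-abc-123'
```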
[Yahoo-eng-team] [Bug 1297607] Re: Key length issue with UTF-8 database
** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1297607 Title: Key length issue with UTF-8 database Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: On Icehouse-3, the neutron-server daemon generates the following error on startup when configured to use a UTF-8 database: 2014-03-25 21:23:17.942 26195 INFO neutron.db.api [-] Database registration exception: (OperationalError) (1071, 'Specified key was too long; max key length is 1000 bytes') '\nCREATE TABLE agents (\n\tid VARCHAR(36) NOT NULL, \n\tagent_type VARCHAR(255) NOT NULL, \n\t`binary` VARCHAR(255) NOT NULL, \n\ttopic VARCHAR(255) NOT NULL, \n\thost VARCHAR(255) NOT NULL, \n\tadmin_state_up BOOL NOT NULL, \n\tcreated_at DATETIME NOT NULL, \n\tstarted_at DATETIME NOT NULL, \n\theartbeat_timestamp DATETIME NOT NULL, \n\tdescription VARCHAR(255), \n\tconfigurations VARCHAR(4095) NOT NULL, \n\tPRIMARY KEY (id), \n\tCONSTRAINT uniq_agents0agent_type0host UNIQUE (agent_type, host), \n\tCHECK (admin_state_up IN (0, 1))\n)\n\n' () To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1297607/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
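The arithmetic behind the 1071 error is worth spelling out (assumptions: MySQL's legacy utf8 charset reserves up to 3 bytes per character, and the reported 1000-byte cap is the storage engine's index key limit):

```python
# UNIQUE uniq_agents0agent_type0host spans (agent_type, host),
# both VARCHAR(255), so under 3-bytes-per-character utf8 the key
# needs 510 * 3 bytes of index space.
BYTES_PER_CHAR = 3
key_bytes = (255 + 255) * BYTES_PER_CHAR
assert key_bytes == 1530      # well over the 1000-byte limit,
assert key_bytes > 1000       # so the CREATE TABLE is rejected
```

Under latin1 (1 byte per character) the same key is 510 bytes and fits, which is why the failure only appears with a UTF-8 database.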
[Yahoo-eng-team] [Bug 1295806] Re: XML responses still show quantum
This is correct because we cannot change these values until the v2 XML API is retired. ** Changed in: neutron Status: New => Incomplete ** Changed in: neutron Status: Incomplete => Invalid ** Changed in: neutron Assignee: shihanzhang (shihanzhang) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1295806 Title: XML responses still show quantum Status in OpenStack Neutron (virtual network service): Invalid Bug description: When I run neutron v2.0 API, quantum still shows up in the XML responses. For example - for the call
curl -i 'http://166.78.46.130:9696/v2.0/networks/af374017-c9ae-4a1d-b799-ab73111476e2.xml' -X PUT -H "X-Auth-Project-Id: admin" -H "Accept: application/xml" -H "X-Auth-Token: $token" -T network-update.xml
This response is returned, with a quantum namespace. Is that correct?
<?xml version='1.0' encoding='UTF-8'?>
<network xmlns="http://openstack.org/quantum/api/v2.0"
    xmlns:provider="http://docs.openstack.org/ext/provider/api/v1.0"
    xmlns:quantum="http://openstack.org/quantum/api/v2.0"
    xmlns:router="http://docs.openstack.org/ext/neutron/router/api/v1.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <status>ACTIVE</status>
    <subnets quantum:type="list"/>
    <name>sample-network-4-updated</name>
    <provider:physical_network xsi:nil="true"/>
    <admin_state_up quantum:type="bool">True</admin_state_up>
    <tenant_id>4fd44f30292945e481c7b8a0c8908869</tenant_id>
    <provider:network_type>local</provider:network_type>
    <router:external quantum:type="bool">False</router:external>
    <shared quantum:type="bool">False</shared>
    <id>af374017-c9ae-4a1d-b799-ab73111476e2</id>
    <provider:segmentation_id xsi:nil="true"/>
</network>
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1295806/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help :
https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1291694] Re: test_add_list_remove_router_on_l3_agent is unstable
** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1291694 Title: test_add_list_remove_router_on_l3_agent is unstable Status in Tempest: Fix Released Bug description: https://review.openstack.org/#/c/68626/7/tempest/api/network/admin/test_l3_agent_scheduler.py added test_add_list_remove_router_on_l3_agent which is failing in check and gate queues. message:FAIL\: tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestXML.test_add_list_remove_router_on_l3_agent AND filename:console.html At the time of filing this bug there are 76 failures in 2 days To manage notifications about this bug go to: https://bugs.launchpad.net/tempest/+bug/1291694/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1299049] [NEW] reorganize unit tests into directories instead of mangling names
Public bug reported: The Unit Test directory needs reorganization instead of containing most test modules in root directory. ** Affects: neutron Importance: Wishlist Assignee: Mark McClain (markmcclain) Status: In Progress ** Tags: neutron-core -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1299049 Title: reorganize unit tests into directories instead of mangling names Status in OpenStack Neutron (virtual network service): In Progress Bug description: The Unit Test directory needs reorganization instead of containing most test modules in root directory. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1299049/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1299046] [NEW] Remove Quantum compatibility layer
Public bug reported: The deprecation period is over, so now is the time to remove the Quantum compatibility layer. ** Affects: neutron Importance: Low Assignee: Mark McClain (markmcclain) Status: In Progress ** Tags: neutron-core ** Changed in: neutron Importance: Undecided => Low ** Changed in: neutron Milestone: None => juno-1 ** Changed in: neutron Assignee: (unassigned) => Mark McClain (markmcclain) ** Changed in: neutron Status: New => Triaged ** Tags added: neutron-core -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1299046 Title: Remove Quantum compatibility layer Status in OpenStack Neutron (virtual network service): In Progress Bug description: The deprecation period is over, so now is the time to remove the Quantum compatibility layer. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1299046/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1281453] Re: Replace exception re-raises with excutils.save_and_reraise_exception()
This has been covered by other work in Neutron. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1281453 Title: Replace exception re-raises with excutils.save_and_reraise_exception() Status in OpenStack Image Registry and Delivery Service (Glance): New Status in OpenStack Neutron (virtual network service): Invalid Bug description: There are quite a few places in the Glance code where exceptions are re-raised:
    try:
        some_operation()
    except FooException as e:
        do_something1()
        raise
    except BarException as e:
        do_something2()
        raise
These places should use the excutils.save_and_reraise_exception class because in some cases the exception context can be cleared, resulting in None being attempted to be re-raised after an exception handler is run (see excutils.save_and_reraise_exception for more).
    try:
        some_operation()
    except FooException as e:
        with excutils.save_and_reraise_exception() as ctxt:
            do_something1()
    except BarException as e:
        with excutils.save_and_reraise_exception() as ctxt:
            do_something2()
To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1281453/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
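The core idea of save_and_reraise_exception can be sketched in a few lines (a deliberately simplified stand-in, not the real oslo implementation, which also handles logging and opt-outs): capture the in-flight exception on entry so it can be re-raised intact after the cleanup body runs.

```python
import sys

class save_and_reraise_exception(object):
    """Simplified sketch: preserve and re-raise the current exception."""

    def __enter__(self):
        # Capture the exception being handled when the block is entered.
        self.exc_info = sys.exc_info()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            # Cleanup body finished cleanly: re-raise the original.
            raise self.exc_info[1]
        return False  # the cleanup body itself raised; let that win

def risky():
    raise ValueError('boom')

try:
    try:
        risky()
    except ValueError:
        with save_and_reraise_exception():
            pass  # cleanup work would go here
except ValueError as e:
    assert str(e) == 'boom'  # original exception survived the handler
```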
[Yahoo-eng-team] [Bug 1230407] Re: VMs can't progress through state changes because Neutron is deadlocking on its database queries, and thus leaving networks in inconsistent states
Closing this bug as it is non-specific. Instead we should open bugs for specific instances of this error. ** Changed in: neutron Milestone: icehouse-rc1 => None ** Changed in: neutron Status: Confirmed => Invalid ** Changed in: neutron Assignee: Mark McClain (markmcclain) => (unassigned) ** Changed in: neutron Importance: Critical => Undecided -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1230407 Title: VMs can't progress through state changes because Neutron is deadlocking on its database queries, and thus leaving networks in inconsistent states Status in OpenStack Neutron (virtual network service): Invalid Bug description: This is most often seen with the State change timeout exceeded in the tempest logs.
2013-09-25 16:03:28.319 | FAIL: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance_with_tags[gate,smoke]
2013-09-25 16:03:28.319 | tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance_with_tags[gate,smoke]
2013-09-25 16:03:28.319 | --
2013-09-25 16:03:28.319 | _StringException: Empty attachments:
2013-09-25 16:03:28.319 |   stderr
2013-09-25 16:03:28.320 |   stdout
2013-09-25 16:03:28.320 |
2013-09-25 16:03:28.320 | pythonlogging:'': {{{2013-09-25 15:49:34,792 state: pending}}}
2013-09-25 16:03:28.320 |
2013-09-25 16:03:28.320 | Traceback (most recent call last):
2013-09-25 16:03:28.320 |   File "tempest/thirdparty/boto/test_ec2_instance_run.py", line 175, in test_run_stop_terminate_instance_with_tags
2013-09-25 16:03:28.320 |     self.assertInstanceStateWait(instance, "running")
2013-09-25 16:03:28.321 |   File "tempest/thirdparty/boto/test.py", line 356, in assertInstanceStateWait
2013-09-25 16:03:28.321 |     state = self.waitInstanceState(lfunction, wait_for)
2013-09-25 16:03:28.321 |   File "tempest/thirdparty/boto/test.py", line 341, in waitInstanceState
2013-09-25 16:03:28.321 |     self.valid_instance_state)
2013-09-25 16:03:28.321 |   File "tempest/thirdparty/boto/test.py", line 331, in state_wait_gone
2013-09-25 16:03:28.321 |     state = state_wait(lfunction, final_set, valid_set)
2013-09-25 16:03:28.322 |   File "tempest/thirdparty/boto/utils/wait.py", line 57, in state_wait
2013-09-25 16:03:28.322 |     (dtime, final_set, status))
2013-09-25 16:03:28.322 | AssertionError: State change timeout exceeded!(400s) While waitingfor set(['running', '_GONE']) at pending
full log: http://logs.openstack.org/38/47438/1/gate/gate-tempest-devstack-vm-neutron/93db162/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1230407/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1265495] Re: Error reading SSH protocol banner
Closing this bug as we have not seen it in the gate in nearly a week. This bug is overly general; should this occur again, we should open more specific bugs instead. ** Changed in: neutron Status: Confirmed => Fix Committed ** No longer affects: devstack -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1265495 Title: Error reading SSH protocol banner Status in OpenStack Neutron (virtual network service): Fix Committed Status in Tempest: Confirmed Bug description: This appears similar to bug 1210664 which is now marked as released. Once parallel testing is enabled (running with the patches for blueprint neutron-parallel-testing), this error appears frequently. One example here: http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/40bee04/console.html.gz Please note that the manifestation of this error is similar to bug 1253896, but the error is different, and possibly the root cause as well, as it seems the connection on port 22 is established successfully. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1265495/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1283599] Re: TestNetworkBasicOps occasionally fails to delete resources
** Also affects: tempest Importance: Undecided Status: New ** Changed in: neutron Milestone: icehouse-rc1 => None -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1283599 Title: TestNetworkBasicOps occasionally fails to delete resources Status in OpenStack Neutron (virtual network service): New Status in Tempest: New Bug description: Network, Subnet and security group appear to be in use when they are deleted. Observed in: http://logs.openstack.org/84/75284/3/check/check-tempest-dsvm-neutron-full/d792a7a/logs Observed so far with neutron full job only. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1283599/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1295281] [NEW] py26 tests take too long to run
Public bug reported: Py26 Unit Test can take over an hour to execute on 4 core machines. They should be refactored to avoid duplicate tests. ** Affects: neutron Importance: Medium Assignee: Mark McClain (markmcclain) Status: Triaged ** Changed in: neutron Importance: Critical => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1295281 Title: py26 tests take too long to run Status in OpenStack Neutron (virtual network service): Triaged Bug description: Py26 Unit Test can take over an hour to execute on 4 core machines. They should be refactored to avoid duplicate tests. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1295281/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1118388] Re: Ultra outdated docstring for NvpPlugin
This was fixed alongside other refactoring. ** Changed in: neutron Status: In Progress => Invalid ** Changed in: neutron Milestone: icehouse-rc1 => None ** Changed in: neutron Assignee: Sachin Thakkar (sthakkar) => (unassigned) ** Changed in: neutron Importance: Low => Undecided -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1118388 Title: Ultra outdated docstring for NvpPlugin Status in OpenStack Neutron (virtual network service): Invalid Bug description: This sounds so 'Essex'. NvpPluginV2 is a Quantum plugin that provides L2 Virtual Network functionality using NVP. Docstring should be updated reflecting all the features supported by the plugin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1118388/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1251448] Re: BadRequest: Multiple possible networks found, use a Network ID to be more specific.
** Changed in: neutron Milestone: icehouse-rc1 => None ** Also affects: tempest Importance: Undecided Status: New ** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1251448 Title: BadRequest: Multiple possible networks found, use a Network ID to be more specific. Status in Tempest: New Bug description: Gate (only neutron based) is peridocally failing with the following error: BadRequest: Multiple possible networks found, use a Network ID to be more specific. http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiIHBvc3NpYmxlIG5ldHdvcmtzIGZvdW5kLCB1c2UgYSBOZXR3b3JrIElEIHRvIGJlIG1vcmUgc3BlY2lmaWMuIChIVFRQIDQwMClcIiBBTkQgZmlsZW5hbWU6XCJjb25zb2xlLmh0bWxcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg0NDY2ODA0Mjg2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9 query: message: possible networks found, use a Network ID to be more specific. (HTTP 400) AND filename:console.html Example: http://logs.openstack.org/75/54275/3/check/check-tempest-devstack-vm-neutron-pg/61a2974/console.html Failure breakdown by job: check-tempest-devstack-vm-neutron-pg 34% check-tempest-devstack-vm-neutron 24% gate-tempest-devstack-vm-neutron 10% gate-tempest-devstack-vm-neutron-pg 5% To manage notifications about this bug go to: https://bugs.launchpad.net/tempest/+bug/1251448/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1228313] Re: Multiple tap interfaces on controller have overlapping tags
** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1228313 Title: Multiple tap interfaces on controller have overlapping tags Status in OpenStack Neutron (virtual network service): Invalid Bug description: Host: Ubuntu Release: Grizzly w/ Quantum Multiple problems reported, including instances no longer receiving IPs via DHCP. My troubleshooting usually involves confirming connectivity from the namespaces and checking OVS, so I logged into the controller to find the following tap interfaces had overlapping tags: Port tap78c4dd08-ad tag: 1 Interface tap78c4dd08-ad type: internal Port tapa827f51e-be tag: 1 Interface tapa827f51e-be type: internal Port tap5ec14dfb-56 tag: 1 Interface tap5ec14dfb-56 type: internal There were approximately 8 provider networks configured, and these taps corresponded to 3 different namespaces on the controller. The other taps had unique tags (as expected). Pinging from each namespace revealed only one of the three namespaces to be working properly. Restarting the 'openvswitch-switch' service renumbered the tags and restored connectivity from all namespaces. Looking back I should have checked the ovs flows to see what the rules looked like, but I was in a hurry to get things working. The user of the system is in the process of testing their environment, which includes constantly creating networks/subnets/instances, removing them, and recreating them via API. I don't have any additional information to provide, but am curious to know how we might be able to recreate a condition that would cause overlapping tags such as this. Plan to check the flows the next time this happens to confirm the theory of duplicate/overlapping rules w/ incorrect vlan rewrites. 
Thanks- JD To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1228313/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1233293] Re: tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops
This has been addressed via other work. ** Changed in: neutron Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1233293 Title: tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops Status in OpenStack Neutron (virtual network service): Invalid Bug description: 2013-09-30 17:10:30.041 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops Traceback (most recent call last): 2013-09-30 17:10:30.041 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops File tempest/scenario/test_network_basic_ops.py, line 254, in _check_public_network_connectivity 2013-09-30 17:10:30.041 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops private_key) 2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops File tempest/scenario/manager.py, line 622, in _check_vm_connectivity 2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops reachable % ip_address) 2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops File /usr/lib/python2.7/unittest/case.py, line 420, in assertTrue 2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops raise self.failureException(msg) 2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops AssertionError: Timed out waiting for 172.24.4.232 to become reachable 2013-09-30 17:10:30.043 | 2013-09-30 17:08:12.589 19033 TRACE tempest.scenario.test_network_basic_ops 2013-09-30 17:10:30.044 | 2013-09-30 17:08:12,677 Host Addr: 2013-09-30 17:10:30.045 | sudo: no tty present and no askpass program specified 2013-09-30 17:10:30.045 | Sorry, try again. 
2013-09-30 17:10:30.047 | sudo: no tty present and no askpass program specified 2013-09-30 17:10:30.047 | Sorry, try again. 2013-09-30 17:10:30.047 | sudo: no tty present and no askpass program specified 2013-09-30 17:10:30.051 | Sorry, try again. 2013-09-30 17:10:30.051 | sudo: 3 incorrect password attempts ... 2013-09-30 17:10:30.595 | }}} 2013-09-30 17:10:30.595 | 2013-09-30 17:10:30.595 | Traceback (most recent call last): 2013-09-30 17:10:30.595 | File tempest/scenario/test_network_basic_ops.py, line 269, in test_network_basic_ops 2013-09-30 17:10:30.596 | self._check_public_network_connectivity() 2013-09-30 17:10:30.596 | File tempest/scenario/test_network_basic_ops.py, line 258, in _check_public_network_connectivity 2013-09-30 17:10:30.596 | raise exc 2013-09-30 17:10:30.596 | AssertionError: Timed out waiting for 172.24.4.232 to become reachable 2013-09-30 17:10:30.596 | 2013-09-30 17:10:30.597 | 2013-09-30 17:10:30.597 | == 2013-09-30 17:10:30.598 | FAIL: process-returncode 2013-09-30 17:10:30.598 | process-returncode To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1233293/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1237711] Re: Creating instance on network with no subnet: no error message
** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1237711 Title: Creating instance on network with no subnet: no error message Status in OpenStack Dashboard (Horizon): In Progress Status in OpenStack Compute (Nova): Triaged Bug description: When trying to launch an instance on a network without any subnet the creation fails. No error message is provided even though it is clear the issue is due to the lack of a subnet. No entry visible in the log for that instance. nova scheduler log: -- l2013-10-09 15:14:35.249 INFO nova.scheduler.filter_scheduler [req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Attempting to build 1 instance(s) uuids: [u'0d2a3866-23b0-4f85-9689-f4b37877e950'] 2013-10-09 15:14:35.279 INFO nova.scheduler.filter_scheduler [req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Choosing host WeighedHost [host: kraken-vc1-ubuntu1, weight: 252733.0] for instance 0d2a3866-23b0-4f85-9689-f4b37877e950 2013-10-09 15:14:38.028 INFO nova.scheduler.filter_scheduler [req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Attempting to build 1 instance(s) uuids: [u'0d2a3866-23b0-4f85-9689-f4b37877e950'] 2013-10-09 15:14:38.030 ERROR nova.scheduler.filter_scheduler [req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] [instance: 0d2a3866-23b0-4f85-9689-f4b37877e950] Error from last host: kraken-vc1-ubuntu1 (node domain-c21(kraken-vc1)): [u'Traceback (most recent call last):\n', u' File /opt/stack/nova/nova/compute/manager.py, line 1039, in _build_instance\n set_access_ip=set_access_ip)\n', u' File /opt/stack/nova/nova/compute/manager.py, line 1412, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u' File /opt/stack/nova/nova/compute/manager.py, line 1409, in _spawn\n block_device_info)\n', u' File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 623, in spawn\n admin_password, network_info, 
block_device_info)\n', u' File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 243, in spawn\n vif_infos = _get_vif_infos()\n', u' File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 227, in _get_vif_infos\n for vif in network_info:\n', u' File /opt/stack/nova/nova/network/model.py, line 375, in __iter__\nreturn self._sync_wrapper(fn, *args, **kwargs)\n', u' File /opt/stack/nova/nova/network/model.py, line 366, in _sync_wrapper\n self.wait()\n', u' File /opt/stack/nova/nova/network/model.py, line 398, in wait\nself[:] = self._gt.wait()\n', u' File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait\nreturn self._exit_event.wait()\n', u' File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 120, in wait\n current.throw(*self._exc)\n', u' File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main\nresult = function(*args, **kwargs)\n', u' File /opt/stack/nova/nova/compute/manager.py, line 1230, in _allocate_network_async\ndhcp_options=dhcp_options)\n', u' File /opt/stack/nova/nova/network/api.py, line 49, in wrapper\nres = f(self, context, *args, **kwargs)\n', u' File /o pt/stack/nova/nova/network/neutronv2/api.py, line 315, in allocate_for_instance\nraise exception.SecurityGroupCannotBeApplied()\n', u'SecurityGroupCannotBeApplied: Network requires port_security_enabled and subnet associated in order to apply security groups.\n'] 2013-10-09 15:14:38.055 WARNING nova.scheduler.driver [req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] [instance: 0d2a3866-23b0-4f85-9689-f4b37877e950] Setting instance to ERROR state To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1237711/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
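The traceback above ends in SecurityGroupCannotBeApplied, which is the actual reason the boot fails. A minimal sketch of that validation (a hypothetical simplification, not nova's real code) shows why a network with no subnet cannot take security groups, and why surfacing this message to the user is the fix Horizon needs:

```python
# Hypothetical simplification of the check behind the traceback's
# SecurityGroupCannotBeApplied error. Not actual nova/neutron code.
class SecurityGroupCannotBeApplied(Exception):
    pass

def validate_security_groups(network, security_groups):
    """Reject security groups on networks that cannot enforce them.

    Per the quoted error, a network needs port security enabled and at
    least one associated subnet before security groups can be applied.
    """
    if not security_groups:
        return  # nothing requested, nothing to validate
    if not network.get("subnets") or not network.get("port_security_enabled", True):
        raise SecurityGroupCannotBeApplied(
            "Network requires port_security_enabled and subnet associated "
            "in order to apply security groups.")
```

A UI that runs this check before scheduling could report the problem directly instead of leaving the instance in ERROR with no message.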
[Yahoo-eng-team] [Bug 1210150] Re: linuxbridge unit tests are unstable when are run alone
Works ok for me. I'm going to close since this section of code is scheduled for removal during Juno. ** Changed in: neutron Status: New => Incomplete ** Changed in: neutron Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1210150 Title: linuxbridge unit tests are unstable when are run alone Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: When linuxbridge tests are run alone (either with 'tox -epy27 linuxbridge' or '.venv/bin/python run_tests.py linuxbridge') they fail with plenty of different errors from time to time. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1210150/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1157771] Re: Use auto-deleted queues prevent rpc flood
** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1157771 Title: Use auto-deleted queues prevent rpc flood Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: Currently we use 'call' instead of 'cast' to prevent an rpc flood, but 'call' doesn't scale (https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit). The flood happens because all the agents send messages to the queue; without the quantum server running, all the messages are buffered in the queue. After the quantum server starts up, all the messages buffered in the queue flood it. We can mark the queue as auto_delete: after quantum-server stops, the queue is deleted automatically and messages sent by agents are dropped, so the agents won't flood the quantum-server when it starts up. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1157771/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
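The mechanism described above can be illustrated with a toy broker model (pure Python, no AMQP; all names hypothetical). A durable queue keeps buffering agent messages while the server is down and delivers the whole backlog at once on restart; an auto-delete queue disappears with its last consumer, so messages published in the interim are simply dropped:

```python
from collections import deque

class Broker:
    """Toy model of one server queue on a message broker.

    Illustrates the flood described in the bug report; this is a
    simulation, not a RabbitMQ/kombu client.
    """
    def __init__(self, auto_delete):
        self.auto_delete = auto_delete
        self.queue = deque()

    def consumer_disconnect(self):
        # With auto_delete, the queue is removed along with its consumer.
        if self.auto_delete:
            self.queue = None

    def publish(self, msg):
        # Messages to a nonexistent queue are dropped by the broker.
        if self.queue is not None:
            self.queue.append(msg)

    def consumer_reconnect(self):
        # The consumer re-declares the queue and drains any backlog.
        if self.queue is None:
            self.queue = deque()
        backlog = list(self.queue)
        self.queue.clear()
        return backlog
```

With `auto_delete=False` the reconnecting server receives every buffered message at once (the flood); with `auto_delete=True` it starts with an empty queue.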
[Yahoo-eng-team] [Bug 1212462] Re: arp fail/martian source
** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1212462 Title: arp fail/martian source Status in OpenStack Neutron (virtual network service): Invalid Bug description: openstack grizzly quantum from RDO on redhat 6.4. quantum net-list +--+---++ | id | name | subnets | +--+---++ | xxx3 | demo-net1 | xxxf 10.0.0.0/24 | | xxx5 | ext | xxx9 192.168.0.0/24 | | xxx0 | main | xxxd 10.0.2.0/24 | | xxxa | main | xxx1 10.0.1.0/24 | +--+---++ If I fire up a vm on net xxxa, it gets address 10.0.1.4. I then give it a floating ip from external, which gets 192.168.0.13 If I ping 192.168.0.13 it doesn't work. If I leave the ping running, then restart l3-agent, after a minute or two, the ping starts working and keeps working. But after that, if I ctrl-c and restart a new ping, it doesn't work. Digging into it further, there are the following in dmesg for every failed ping: martian source 10.0.1.4 from 10.0.1.1, on dev qbrce632e01-80 ll header: ff:ff:ff:ff:ff:ff:fa:16:3e:02:8a:c3:08:06 It looks like the arp responses are getting clobbered. part of plugin.ini: [OVS] enable_tunneling=False integration_bridge=br-int tenant_network_type=vlan bridge_mappings=os1:br-os1,ext:br-ext network_vlan_ranges=os1:1000:2000,external1 ext is a provider type flat network attached to br-ext1 Any ideas what would cause arp to work for a moment during l3-agent restart then fail again? To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1212462/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1274107] Re: Resources convert from plurals to single is manual
Extensions are being redesigned in Juno, so pluralization will be handled differently. ** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1274107 Title: Resources convert from plurals to single is manual Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: In the Neutron extensions resource building process, plural and single resource names are converted manually by adding or removing an 's' ending. For special cases, when the 's' ending is wrong, for instance policy/policies, the conversion is done by hard coding. Each extension does it, and does it a bit differently. This plural-single conversion is done in several places. The proposal is to add two functions (plural2single and single2plural) to the neutron.common.utils code, so these functions will be used for conversions anywhere they are needed. Those new functions should consider some common (or all, which is maybe unnecessary) English language exceptions for single-plural converting - http://en.wikipedia.org/wiki/English_plurals As a result of this bug approval, all occurrences of plural-single conversions should be replaced by using common.util functions To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1274107/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
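The two helpers proposed in this report could look something like the sketch below (the function names plural2single/single2plural come from the report; the implementation is only an illustration, not actual neutron.common.utils code). A small irregulars map handles cases the suffix rules cannot:

```python
# Sketch of the proposed pluralization helpers. Illustrative only;
# this code does not exist in neutron.common.utils.
IRREGULAR = {
    "policy": "policies",  # the example called out in the report
}
_IRREGULAR_REVERSED = {plural: single for single, plural in IRREGULAR.items()}

def single2plural(noun):
    """Return the plural form of a resource name."""
    if noun in IRREGULAR:
        return IRREGULAR[noun]
    # consonant + 'y' -> 'ies' (e.g. 'proxy' -> 'proxies')
    if noun.endswith("y") and noun[-2:-1] not in "aeiou":
        return noun[:-1] + "ies"
    return noun + "s"

def plural2single(noun):
    """Return the single form of a resource name."""
    if noun in _IRREGULAR_REVERSED:
        return _IRREGULAR_REVERSED[noun]
    if noun.endswith("ies"):
        return noun[:-3] + "y"
    if noun.endswith("s"):
        return noun[:-1]
    return noun
```

Centralizing the rules this way means an extension with an unusual resource name only has to add one entry to the irregulars map instead of hard coding its own conversion.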
[Yahoo-eng-team] [Bug 1276587] Re: neutron not working on Havana Debian wheezy
That is not an official installation guide, so you will need to contact the author of that guide for assistance. ** Changed in: neutron Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1276587 Title: neutron not working on Havana Debian wheezy Status in OpenStack Neutron (virtual network service): Invalid Bug description: i follow this Guide to install the Havana , but found that neutron not working, service start, have pid file but not process and no service listen on 9696. i apt-get update dist-upgrade to the lastest env. root@ops-whz-ctl:~# uname -an Linux ops-whz-ctl 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux pls give me some solution. thanks a lot https://github.com/reusserl/OpenStack-Install- Guide/blob/master/OpenStack_Havana_Debian_Wheezy_Install_Guide.rst root@ops-whz-ctl:~# keystone service-list +--+--+--+--+ |id| name | type | description | +--+--+--+--+ | 4557b26cfafe4808963d3eccae4684aa | cinder | volume | OpenStack Volume Service | | b904a4f7eadc40ddbff16f84556f201e | ec2| ec2| OpenStack EC2 service | | 03349e78b51b4647b4449c90bf27e7b1 | glance | image | OpenStack Image Service| | 8fe41661d319454185e324344df34efb | keystone | identity | OpenStack Identity | | 44d4feabf3e745fd9c56d6969489d058 | neutron | network | OpenStack Networking service | | 6c035052679143f187e87d9ec1486ad9 | nova | compute | OpenStack Compute Service | +--+--+--+--+ root@ops-whz-ctl:~# grep -r -i neutron /etc/nova /etc/nova/nova.conf:#nova.network.neutronv2.api.API (if you want to use Neutron) /etc/nova/nova.conf:network_api_class=nova.network.neutronv2.api.API /etc/nova/nova.conf:# neutron (if you use neutron) /etc/nova/nova.conf:security_group_api = neutron /etc/nova/nova.conf:# When using Neutron and OVS, use: nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver /etc/nova/nova.conf:# for Neutron, use: 
nova.network.linux_net.LinuxOVSInterfaceDriver /etc/nova/nova.conf:# For Neutron and OVS, use: nova.virt.firewall.NoopFirewallDriver (since this is handled by Neutron) /etc/nova/nova.conf:# Neutron # /etc/nova/nova.conf:# This is the URL of your neutron server: /etc/nova/nova.conf:neutron_url=http://10.10.10.51:9696 /etc/nova/nova.conf:neutron_auth_strategy=keystone /etc/nova/nova.conf:neutron_admin_tenant_name=service /etc/nova/nova.conf:neutron_admin_username=neutron /etc/nova/nova.conf:neutron_admin_password=servicePass123 /etc/nova/nova.conf:neutron_admin_auth_url=http://10.10.10.51:35357/v2.0 /etc/nova/nova.conf:# Set flag to indicate Neutron will proxy metadata requests /etc/nova/nova.conf:# and resolve instance ids. This is needed to use neutron-metadata-agent /etc/nova/nova.conf:# which doesn't work with neutron) (boolean value) /etc/nova/nova.conf:service_neutron_metadata_proxy=True /etc/nova/nova.conf:# Shared secret to validate proxies Neutron metadata requests /etc/nova/nova.conf:# This password should match what is in /etc/neutron/metadata_agent.ini /etc/nova/nova.conf:#neutron_metadata_proxy_shared_secret= /etc/nova/nova.conf:neutron_metadata_proxy_shared_secret = helloOpenStack123 root@ops-whz-ctl:~# grep -v ^$ /etc/neutron/neutron.conf |grep -v ^# [DEFAULT] verbose = True state_path = /var/lib/neutron lock_path = $state_path/lock core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin rabbit_host = 10.10.10.51 rabbit_password = guest rabbit_userid = guest notification_driver = neutron.openstack.common.notifier.rpc_notifier [quotas] [agent] root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf [keystone_authtoken] auth_host = 10.10.10.51 auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = neutron admin_password = servicePass123 signing_dir = $state_path/keystone-signing [database] connection = 
mysql://neutronUser:neutronPass357@10.10.10.51/neutron [service_providers] service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default root@ops-whz-ctl:~# grep -v ^$ /etc/neutron/ |grep -v ^# api-paste.ini neutron.conf policy.json rootwrap.d/ fwaas_driver.ini plugins/ rootwrap.conf root@ops-whz-ctl:~# grep -v ^$ /etc/neutron/api-paste.ini |grep -v ^# [composite:neutron] use = egg:Paste#urlmap /:
[Yahoo-eng-team] [Bug 1269246] Re: some ariables name is not friendly
You are correct that it is bad to name variables with the same name as builtins. We are planning work in Juno to significantly revamp plugins and this is one of the items to fix. ** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1269246 Title: some ariables name is not friendly Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: in neutron/db/db_base_plugin_v2.py, the code is: def _generate_ip(context, subnets): Generate an IP address. The IP address will be generated from one of the subnets defined on the network. range_qry = context.session.query( models_v2.IPAvailabilityRange).join( models_v2.IPAllocationPool).with_lockmode('update') for subnet in subnets: range = range_qry.filter_by(subnet_id=subnet['id']).first() if not range: LOG.debug(_(All IPs from subnet %(subnet_id)s (%(cidr)s) allocated), {'subnet_id': subnet['id'], 'cidr': subnet['cidr']}) continue ip_address = range['first_ip'] LOG.debug(_(Allocated IP - %(ip_address)s from %(first_ip)s to %(last_ip)s), {'ip_address': ip_address, 'first_ip': range['first_ip'], 'last_ip': range['last_ip']}) if range['first_ip'] == range['last_ip']: # No more free indices on subnet => delete LOG.debug(_(No more free IP's in slice. Deleting allocation pool.)) context.session.delete(range) else: # increment the first free range['first_ip'] = str(netaddr.IPAddress(ip_address) + 1) return {'ip_address': ip_address, 'subnet_id': subnet['id']} raise q_exc.IpAddressGenerationFailure(net_id=subnets[0]['network_id']) @staticmethod def _allocate_specific_ip(context, subnet_id, ip_address): Allocate a specific IP address on the subnet. 
ip = int(netaddr.IPAddress(ip_address)) range_qry = context.session.query( models_v2.IPAvailabilityRange).join( models_v2.IPAllocationPool).with_lockmode('update') results = range_qry.filter_by(subnet_id=subnet_id) for range in results: first = int(netaddr.IPAddress(range['first_ip'])) last = int(netaddr.IPAddress(range['last_ip'])) if first <= ip <= last: if first == last: context.session.delete(range) return elif first == ip: range['first_ip'] = str(netaddr.IPAddress(ip_address) + 1) return elif last == ip: range['last_ip'] = str(netaddr.IPAddress(ip_address) - 1) return else: # Split into two ranges new_first = str(netaddr.IPAddress(ip_address) + 1) new_last = range['last_ip'] range['last_ip'] = str(netaddr.IPAddress(ip_address) - 1) ip_range = models_v2.IPAvailabilityRange( allocation_pool_id=range['allocation_pool_id'], first_ip=new_first, last_ip=new_last) context.session.add(ip_range) return The function uses range as a variable name, which I think is not friendly; the name is the same as the builtin range() and could be replaced by ip_range or another name. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1269246/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1264408] Re: Network Node kernel panics when booting nova instance
** Changed in: neutron Status: New = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1264408 Title: Network Node kernel panics when booting nova instance Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: Hi, I am using OpenStack Grizzly release on Ubuntu Server 12.04 with three node setup Controller, Network (with OpenvSwtich OVS plugin) Compute (with qemu hypervisor). Network node is setup as outlined in http://docs.openstack.org/grizzly/openstack-network/admin/content/. When I instantiate nova instance my network node panics showing traces of openvswitch ovs_tnl. Refer attached screenshot kernel-panic for more info. The nova instance gets created with active status (but unable to login). The network node recovers from panic only after quantum network is deleted. ovsdb-server.log shows : Dec 26 14:03:45|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log Dec 26 14:03:45|2|ovsdb_file|ERR|I/O error: /etc/openvswitch/conf.db: error reading 254 bytes starting at offset 217078 (End of file) Dec 26 14:07:09|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log Dec 26 14:07:09|2|ovsdb_file|ERR|I/O error: /etc/openvswitch/conf.db: error reading 548 bytes starting at offset 217078 (End of file) Dec 26 14:38:48|2|daemon|INFO|pid 1494 died, killed (Terminated), exiting Dec 26 14:39:10|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log Dec 26 15:19:57|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log Dec 26 15:19:57|2|ovsdb_file|ERR|I/O error: /etc/openvswitch/conf.db: error reading 93 bytes starting at offset 233449 (End of file) Dec 26 15:23:10|2|daemon|INFO|pid 1515 died, killed (Terminated), exiting Dec 26 15:31:25|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log Dec 26 16:02:59|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log Dec 26 
16:02:59|2|ovsdb_file|ERR|I/O error: /etc/openvswitch/conf.db: error reading 982 bytes starting at offset 23995 (End of file) Dec 26 16:17:26|2|daemon|INFO|pid 1445 died, killed (Terminated), exiting Dec 26 16:17:42|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log Dec 26 16:47:04|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log ovs-vswitchd.log shows : Dec 26 16:47:04|1|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log Dec 26 16:47:04|2|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... Dec 26 16:47:04|3|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected Dec 26 16:47:04|4|bridge|INFO|created port tap13a6499d-a7 on bridge br-int Dec 26 16:47:04|5|bridge|INFO|created port br-int on bridge br-int Dec 26 16:47:04|6|bridge|INFO|created port patch-tun on bridge br-int Dec 26 16:47:04|7|bridge|INFO|created port patch-int on bridge br-tun Dec 26 16:47:04|8|bridge|INFO|created port gre-1 on bridge br-tun Dec 26 16:47:04|9|bridge|INFO|created port br-tun on bridge br-tun Dec 26 16:47:04|00010|bridge|INFO|created port eth1 on bridge br-ex Dec 26 16:47:04|00011|bridge|INFO|created port br-ex on bridge br-ex Dec 26 16:47:04|00012|ofproto|INFO|using datapath ID 002320b0e672 Dec 26 16:47:04|00013|ofproto|INFO|using datapath ID 002320acd925 Dec 26 16:47:04|00014|ofproto|INFO|using datapath ID 002320d7ea10 Dec 26 16:47:04|00015|bridge|WARN|could not open network device tap13a6499d-a7 (No such device) Dec 26 16:47:04|00016|bridge|WARN|tap13a6499d-a7 port has no interfaces, dropping Dec 26 16:47:04|00017|bridge|INFO|destroyed port tap13a6499d-a7 on bridge br-int Dec 26 16:47:04|00018|bridge|WARN|bridge br-int: using default bridge Ethernet address 06:8c:87:3e:69:4a Dec 26 16:47:04|00019|xenserver|INFO|not running on a XenServer Dec 26 16:47:04|00020|ofproto|INFO|datapath ID changed to 068c873e694a Dec 26 16:47:04|00021|bridge|WARN|bridge br-tun: using default bridge Ethernet address c6:a4:7b:6e:63:4d Dec 26 
16:47:04|00022|ofproto|INFO|datapath ID changed to c6a47b6e634d Dec 26 16:47:04|00023|ofproto|INFO|datapath ID changed to 00505683b2d3 Dec 26 16:47:04|00024|bridge|INFO|destroyed port patch-int on bridge br-tun Dec 26 16:47:04|00025|bridge|INFO|destroyed port gre-1 on bridge br-tun Dec 26 16:47:04|00026|bridge|INFO|destroyed port br-tun on bridge br-tun Dec 26 16:47:04|00027|bridge|INFO|created port tap13a6499d-a7 on bridge br-int Dec 26 16:47:04|00028|bridge|WARN|could not open network device tap13a6499d-a7 (No such device) Dec 26 16:47:04|00029|bridge|WARN|tap13a6499d-a7 port has no interfaces, dropping Dec 26 16:47:04|00030|bridge|INFO|destroyed port tap13a6499d-a7 on bridge br-int Dec 26 16:47:04|00031|bridge|WARN|bridge
[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release
** Changed in: python-neutronclient Importance: Undecided => Critical ** Changed in: python-neutronclient Assignee: (unassigned) => Mark McClain (markmcclain) ** Changed in: python-neutronclient Status: New => In Progress ** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1280941 Title: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release Status in Python client library for Neutron: In Progress Status in tripleo - openstack on openstack: In Progress Bug description: So we need a new release - this is fixed in: commit 02baef46968b816ac544b037297273ff6a4e8e1b but until a new release is done, anyone running trunk Neutron will have the metadata agent fail. And neutron itself is missing a versioned dep on the fixed client (but obviously that has to wait for the client release to be done) To manage notifications about this bug go to: https://bugs.launchpad.net/python-neutronclient/+bug/1280941/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release
** Also affects: neutron Importance: Undecided Status: New ** Changed in: neutron Status: New => Confirmed ** Changed in: neutron Status: Confirmed => In Progress ** Changed in: neutron Importance: Undecided => Critical ** Changed in: neutron Assignee: (unassigned) => Mark McClain (markmcclain) ** Changed in: neutron Milestone: None => icehouse-3 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1280941 Title: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release Status in OpenStack Neutron (virtual network service): In Progress Status in Python client library for Neutron: In Progress Status in tripleo - openstack on openstack: In Progress Bug description: So we need a new release - this is fixed in: commit 02baef46968b816ac544b037297273ff6a4e8e1b but until a new release is done, anyone running trunk Neutron will have the metadata agent fail. And neutron itself is missing a versioned dep on the fixed client (but obviously that has to wait for the client release to be done) To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1280941/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1265495] Re: Error reading SSH protocol banner
Looking at test failures this looks to be related to kernel faults we have been seeing. For havana/stable we should disable key injection by back porting the changes to devstack stable/havana and possibly nova. ** Changed in: neutron Assignee: Nachi Ueno (nati-ueno) => Mark McClain (markmcclain) ** Also affects: devstack Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1265495 Title: Error reading SSH protocol banner Status in devstack - openstack dev environments: New Status in OpenStack Neutron (virtual network service): Confirmed Status in Tempest: Confirmed Bug description: This appears similar to bug 1210664 which is now marked as released. Once parallel testing is enabled (running with the patches for blueprint neutron-parallel-testing), this error appears frequently. One example here: http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/40bee04/console.html.gz Please note that the manifestation of this error is similar to bug 1253896, but the error is different, and possibly the root cause as well, as it seems the connection on port 22 is established successfully. To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1265495/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1251784] Re: nova+neutron scheduling error: Connection to neutron failed: Maximum attempts reached
Removing Neutron since this bug has not shown up for 30 days. http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiQ29ubmVjdGlvbiB0byBuZXV0cm9uIGZhaWxlZDogTWF4aW11bSBhdHRlbXB0cyByZWFjaGVkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1zY2gudHh0XCIiLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJmcm9tIjoiMjAxMy0xMS0yM1QxOTo0ODoxNyswMDowMCIsInRvIjoiMjAxMy0xMi0yM1QxOTo0ODoxNyswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxMzg3ODMxMzA5NDQ1fQ== ** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1251784 Title: nova+neutron scheduling error: Connection to neutron failed: Maximum attempts reached Status in OpenStack Compute (Nova): Fix Released Status in tripleo - openstack on openstack: Fix Released Bug description: VMs are failing to schedule with the following error 2013-11-15 20:50:21.405 ERROR nova.scheduler.filter_scheduler [req- d2c26348-53e6-448a-8975-4f22f4e89782 demo demo] [instance: c8069c13 -593f-48fb-aae9-198961097eb2] Error from last host: devstack-precise- hpcloud-az3-662002 (node devstack-precise-hpcloud-az3-662002): [u'Traceback (most recent call last):\n', u' File /opt/stack/new/nova/nova/compute/manager.py, line 1030, in _build_instance\nset_access_ip=set_access_ip)\n', u' File /opt/stack/new/nova/nova/compute/manager.py, line 1439, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u' File /opt/stack/new/nova/nova/compute/manager.py, line 1436, in _spawn\nblock_device_info)\n', u' File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2100, in spawn\nadmin_pass=admin_password)\n', u' File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2451, in _create_image\ncontent=files, extra_md=extra_md, network_info=network_info)\n', u' File /opt/stack/new/nova/nova/api/metadata/base.py, line 165, in __init__\n 
ec2utils.get_ip_info_for_instance_from_nw_info(network_info)\n', u' File /opt/stack/new/nova/nova/api/ec2/ec2utils.py, line 149, in get_ip_info_for_instance_from_nw_info\nfixed_ips = nw_info.fixed_ips()\n', u' File /opt/stack/new/nova/nova/network/model.py, line 368, in _sync_wrapper\nself.wait()\n', u' File /opt/stack/new/nova/nova/network/model.py, line 400, in wait\n self[:] = self._gt.wait()\n', u' File /usr/local/lib/python2.7/dist- packages/eventlet/greenthread.py, line 168, in wait\nreturn self._exit_event.wait()\n', u' File /usr/local/lib/python2.7/dist- packages/eventlet/event.py, line 120, in wait\n current.throw(*self._exc)\n', u' File /usr/local/lib/python2.7/dist- packages/eventlet/greenthread.py, line 194, in main\nresult = function(*args, **kwargs)\n', u' File /opt/stack/new/nova/nova/compute/manager.py, line 1220, in _allocate_network_async\ndhcp_options=dhcp_options)\n', u' File /opt/stack/new/nova/nova/network/neutronv2/api.py, line 359, in allocate_for_instance\nnw_info = self._get_instance_nw_info(context, instance, networks=nets)\n', u' File /opt/stack/new/nova/nova/network/api.py, line 49, in wrapper\n res = f(self, context, *args, **kwargs)\n', u' File /opt/stack/new/nova/nova/network/neutronv2/api.py, line 458, in _get_instance_nw_info\nnw_info = self._build_network_info_model(context, instance, networks)\n', u' File /opt/stack/new/nova/nova/network/neutronv2/api.py, line 1022, in _build_network_info_model\nsubnets = self._nw_info_get_subnets(context, port, network_IPs)\n', u' File /opt/stack/new/nova/nova/network/neutronv2/api.py, line 924, in _nw_info_get_subnets\nsubnets = self._get_subnets_from_port(context, port)\n', u' File /opt/stack/new/nova/nova/network/neutronv2/api.py, line 1066, in _get_subnets_from_port\ndata = neutronv2.get_client(context).list_ports(**search_opts)\n', u' File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 111, in with_params\nret = self.function(instance, *args, **kwargs)\n', u' File 
/opt/stack/new/python- neutronclient/neutronclient/v2_0/client.py, line 306, in list_ports\n **_params)\n', u' File /opt/stack/new/python- neutronclient/neutronclient/v2_0/client.py, line 1250, in list\n for r in self._pagination(collection, path, **params):\n', u' File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 1263, in _pagination\nres = self.get(path, params=params)\n', u' File /opt/stack/new/python- neutronclient/neutronclient/v2_0/client.py, line 1236, in get\n headers=headers, params=params)\n', u' File /opt/stack/new/python- neutronclient/neutronclient/v2_0/client.py, line 1228, in retry_request\nraise exceptions.ConnectionFailed(reason=_(Maximum attempts reached))\n', u'ConnectionFailed: Connection to neutron failed: Maximum attempts
[Yahoo-eng-team] [Bug 1254890] Re: Timed out waiting for thing causes tempest-dsvm-neutron-* failures
** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1254890 Title: Timed out waiting for thing causes tempest-dsvm-neutron-* failures Status in OpenStack Neutron (virtual network service): Invalid Status in OpenStack Compute (Nova): Triaged Status in Tempest: Confirmed Bug description: Separate out bug from: https://bugs.launchpad.net/neutron/+bug/1250168/comments/23 Logstash query: message:"Details: Timed out waiting for thing" AND build_name:gate-tempest-devstack-vm-neutron-large-ops http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIgQU5EIGJ1aWxkX25hbWU6Z2F0ZS10ZW1wZXN0LWRldnN0YWNrLXZtLW5ldXRyb24tbGFyZ2Utb3BzIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODU0MDQ5Mzg5MjZ9 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1254890/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1236617] Re: Many files still have quantum in their names
The existing files that refer to quantum exist for backwards compatibility and are expected. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1236617 Title: Many files still have quantum in their names Status in OpenStack Neutron (virtual network service): Invalid Bug description: Here is the list of files and their packages (if exist): # for i in `find / 2>/dev/null | grep -i quantum` ; do echo *** $i ; rpm -qf $i ; done *** /etc/selinux/targeted/modules/active/modules/openstack-selinux-quantum.pp file /etc/selinux/targeted/modules/active/modules/openstack-selinux-quantum.pp is not owned by any package *** /etc/selinux/targeted/modules/active/modules/quantum.pp file /etc/selinux/targeted/modules/active/modules/quantum.pp is not owned by any package *** /usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/xenapi/contrib/rpmbuild/SPECS/openstack-quantum-xen-plugins.spec openstack-neutron-openvswitch-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/api python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/api/__init__.py python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/api/__init__.pyc python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/api/__init__.pyo python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/__init__.py python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/__init__.pyc python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/auth.py python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/auth.pyo
python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/__init__.pyo python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/lib/python2.6/site-packages/quantum/auth.pyc python-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/share/selinux/packages/openstack-selinux-quantum.pp.bz2 openstack-selinux-0.1.2-11.el6ost.noarch *** /usr/share/selinux/devel/include/services/quantum.if selinux-policy-3.7.19-217.el6.noarch *** /usr/share/selinux/devel/include/services/openstack-selinux-quantum.if openstack-selinux-0.1.2-11.el6ost.noarch *** /usr/share/selinux/targeted/quantum.pp.bz2 selinux-policy-targeted-3.7.19-217.el6.noarch *** /usr/bin/quantum-ns-metadata-proxy openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-usage-audit openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-rootwrap-xen-dom0 openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-ovs-cleanup openstack-neutron-openvswitch-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-server openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-debug openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-db-manage openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-rootwrap openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-l3-agent openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-dhcp-agent openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-metadata-agent openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-netns-cleanup openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-lbaas-agent openstack-neutron-2013.2-0.3.3.b3.el6ost.noarch *** /usr/bin/quantum-openvswitch-agent openstack-neutron-openvswitch-2013.2-0.3.3.b3.el6ost.noarch *** /var/log/neutron/quantum-server.log file /var/log/neutron/quantum-server.log is not owned by any package *** 
/sys/devices/pci:00/:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/queue/iosched/quantum file /sys/devices/pci:00/:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/queue/iosched/quantum is not owned by any package To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1236617/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1236616] Re: neutron linux user's description still contains the term Quantum
This bug should be filed with Red Hat, as user creation is outside the scope of the upstream project. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1236616 Title: neutron linux user's description still contains the term Quantum Status in OpenStack Neutron (virtual network service): Invalid Bug description: Tested on havana on RHEL. # grep -i quantum /etc/passwd neutron:x:164:164:OpenStack Quantum Daemons:/var/lib/neutron:/sbin/nologin The Quantum should be changed to Neutron. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1236616/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1237820] Re: Neutron CLI help is not friendly intuitive.
** Also affects: python-neutronclient Importance: Undecided Status: New ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1237820 Title: Neutron CLI help is not friendly intuitive. Status in OpenStack Neutron (virtual network service): Invalid Status in Python client library for Neutron: New Bug description: Neutron CLI help usage is not friendly / intuitive. There are a few issues with the CLI help: 1. too much output, like: - output formatters - positional arguments - shell formatter, which doesn't exist in other components ( nova ... ) 2. There is no explanation of the expected values: --admin-state-down - no explanation about ( True | False ) The user guide doesn't have the options http://docs.openstack.org/user-guide/content/neutron_client_commands.html only the API https://wiki.openstack.org/wiki/Neutron/APIv2-specification#Network 3. additional info which relates to the extension is not added, like --provider:network_type vlan --provider:physical_network phys-net-name --provider:segmentation_id VID; only in the API doc http://docs.openstack.org/trunk/openstack-network/admin/content/provider_attributes.html compare between nova help boot and neutron help net-create To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1237820/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1240874] Re: LinuxBridge plugin fails with multiple VM NICs when gateway configs differ
** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1240874 Title: LinuxBridge plugin fails with multiple VM NICs when gateway configs differ Status in OpenStack Neutron (virtual network service): Invalid Bug description: Specifically, the linux bridge plugin fails when spawning a VM with multiple networks, where those networks have different gateway enabled / disabled settings. Homogeneous network gateway settings for multiple networks works fine, however. In other words:
- Spawning a VM with two NICs, both of which have a Gateway IP configured, results in an Active instance (good)
- Spawning a VM with two NICs, both of which have the Gateway disabled, results in an Active instance (good)
- Spawning a VM with two NICs, one of which has a Gateway IP configured, and the other of which has the gateway disabled, results in a Failed instance (bad).
I have attached the stack trace for the plugin error it produced. The error seems to indicate a failure to add the tap device to the bridge, because the tap device doesn't exist. However, it does appear that the interface is being created - if only briefly before being removed. In fact I was able to run the brctl show command as this was happening and see each tap device get put on a bridge before disappearing 1-2 seconds afterwards. This output is also in the attachment. The one other observation I would note concerning the brctl output was that all the transient tap devices were being put on the same bridge: the one associated with VLAN 3 (eth1.3). This should only have been the case for NIC1 (first NIC). The other two should have each been on their own bridges / VLANs (eth1.6 and eth1.66).
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1240874/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1171373] Re: iptables rules for external access are not correct
Looks like this is specific to CentOS 6.4. ** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1171373 Title: iptables rules for external access are not correct Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: The iptables POSTROUTING configuration is not correct, which prevents VMs from accessing the external network. Here is an example of this bug: https://answers.launchpad.net/quantum/+question/227120 I got it to work by adding a static entry in the POSTROUTING chain to give my VMs internet access To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1171373/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
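[Editor's note] The report does not include the static POSTROUTING entry the reporter added, but a workaround of the kind described usually looks like the following sketch. The tenant subnet (10.0.0.0/24) and external interface (eth0) here are assumptions for illustration, not values from the report; on a correctly configured deployment the L3 agent installs an equivalent SNAT rule itself.

```shell
# Illustrative only: masquerade traffic leaving the tenant subnet via
# the external interface (subnet and interface names are assumptions).
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
```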
[Yahoo-eng-team] [Bug 1177654] Re: quantum agent-list status different across nodes.
** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1177654 Title: quantum agent-list status different across nodes. Status in OpenStack Neutron (virtual network service): Invalid Bug description: Hi, this issue has only shown up since I added the other 3 nodes; a single-node install doesn't seem to have the issues below. I have 4 x Quantum network nodes: vcr01, vcr02, vcr03, vcr04. I have OVS and DHCP agents installed on each. I have also copied the quantum.conf etc. files so they are identical on each node. On vcr01 I run quantum agent-list:

+--------------------------------------+--------------------+-------------------------------+-------+----------------+
| id                                   | agent_type         | host                          | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------------------+-------+----------------+
| 0630d2b3-7696-4b3d-9203-f7e162426707 | Open vSwitch agent | vcr01.bne01.cloud.labs.local  | xxx   | True           |
| 37f76ec5-2581-4e6d-9df7-cd7c7302af2a | Open vSwitch agent | vcr04.bne01.cloud.labs.local  | xxx   | True           |
| 3a1e22da-d4ba-408f-8e84-0698b9d26c43 | DHCP agent         | vcr02.bne01.cloud.labs.local  | xxx   | True           |
| 458401b2-821d-45b1-be91-b64222ba32d0 | Open vSwitch agent | vcr03.bne01.cloud.labs.local  | xxx   | True           |
| 7ce64466-619c-462f-9dc6-71ebaed4bf4f | DHCP agent         | vcr04.bne01.cloud.labs.local  | xxx   | True           |
| 8e18207f-cd13-4044-9e00-293f06ce9ac1 | DHCP agent         | vcr01.bne01.cloud.labs.local  | :-)   | True           |
| c18efffd-ecb5-4866-983a-d3ee4e9e115d | Open vSwitch agent | vcr02.bne01.cloud.labs.local  | :-)   | True           |
| d5dd0afa-b3ef-430a-892a-a3093b5b0a02 | DHCP agent         | vcr03.bne01.cloud.labs.local  | xxx   | True           |
| f9f97da8-b905-4400-b920-7c7883b25fd6 | Open vSwitch agent | pvmh01.bne01.cloud.labs.local | xxx   | True           |
+--------------------------------------+--------------------+-------------------------------+-------+----------------+

On vcr02 I run quantum agent-list:

+--------------------------------------+--------------------+-------------------------------+-------+----------------+
| id                                   | agent_type         | host                          | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------------------+-------+----------------+
| 0630d2b3-7696-4b3d-9203-f7e162426707 | Open vSwitch agent | vcr01.bne01.cloud.labs.local  | :-)   | True           |
| 37f76ec5-2581-4e6d-9df7-cd7c7302af2a | Open vSwitch agent | vcr04.bne01.cloud.labs.local  | :-)   | True           |
| 3a1e22da-d4ba-408f-8e84-0698b9d26c43 | DHCP agent         | vcr02.bne01.cloud.labs.local  | :-)   | True           |
| 458401b2-821d-45b1-be91-b64222ba32d0 | Open vSwitch agent | vcr03.bne01.cloud.labs.local  | :-)   | True           |
| 7ce64466-619c-462f-9dc6-71ebaed4bf4f | DHCP agent         | vcr04.bne01.cloud.labs.local  | :-)   | True           |
| 8e18207f-cd13-4044-9e00-293f06ce9ac1 | DHCP agent         | vcr01.bne01.cloud.labs.local  | :-)   | True           |
| c18efffd-ecb5-4866-983a-d3ee4e9e115d | Open vSwitch agent | vcr02.bne01.cloud.labs.local  | :-)   | True           |
| d5dd0afa-b3ef-430a-892a-a3093b5b0a02 | DHCP agent         | vcr03.bne01.cloud.labs.local  | :-)   | True           |
| f9f97da8-b905-4400-b920-7c7883b25fd6 | Open vSwitch agent | pvmh01.bne01.cloud.labs.local | :-)   | True           |
+--------------------------------------+--------------------+-------------------------------+-------+----------------+

On vcr03 I get the same result as vcr02; on vcr04 I get a similar result to vcr01. Strangely enough, the agent-list result pretty much changes inconstantly on vcr01 and vcr04, i.e. the xxx and :-) statuses change a lot, as per http://pastie.org/pastes/7816343/text Log files seem ok; I don't see any errors, though my eyes might be painted on. I tried turning off the VMs, dumping the quantum database and recreating it, then powered on the VMs and they all came up as :-) ; however, after a reboot of one of the nodes the behavior above returned. I have keystone set up with UUID tokens as well. quantum.conf http://pastie.org/pastes/7816357/text api-paste.ini http://pastie.org/pastes/7816369/text?key=shmkvpqhvkq6dnvpneajpq dhcp_agent.ini http://pastie.org/pastes/7816377/text?key=lfiqbqwuaxvnjiiqowdmq plugins/openvswitch/ovs_quantum_plugin.ini http://pastie.org/pastes/7816381/text?key=mk1gq4c3xqymeud49zgw - Other thoughts: might it be rabbitmq? I'm using mirrored queues with the rabbitmq repo.
[Yahoo-eng-team] [Bug 1186161] Re: Tempest tests sometimes get: [Fail] Couldn't ping server
*** This bug is a duplicate of bug 1224001 *** https://bugs.launchpad.net/bugs/1224001 This is a duplicate of multiple other gate bugs. ** This bug has been marked a duplicate of bug 1224001 test_network_basic_ops fails waiting for network to become available -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1186161 Title: Tempest tests sometimes get: [Fail] Couldn't ping server Status in OpenStack Neutron (virtual network service): New Status in OpenStack Compute (Nova): New Bug description: When I have seen this, usually both volumes and floating_ips tests fail at the same time. Looks like the instance is actually active, but network connectivity is somehow missing. See here for an example of this happening: http://logs.openstack.org/29429/1/gate/gate-tempest-devstack-vm-quantum/26568/ Not sure if this is the same as bug 1101142. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1186161/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1247758] Re: Unauthorized command when using neutron-rootwrap for dhcp-agent
Looks like you're using the Grizzly version of the rootwrap config. Make sure you have the latest version of etc/neutron/rootwrap.d/dhcp.filters ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1247758 Title: Unauthorized command when using neutron-rootwrap for dhcp-agent Status in OpenStack Neutron (virtual network service): Invalid Bug description: Hi list, I'm working under CentOS + Havana. When I try to start neutron-dhcp-agent, I get the following error: 2013-11-01 13:47:05.110 21349 TRACE neutron.agent.dhcp_agent Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qdhcp-e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74', 'env', 'NEUTRON_NETWORK_ID=e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74', 'dnsmasq', '--no-hosts', '--no-resolv', '--strict-order', '--bind-interfaces', '--interface=ns-a66f8745-aa', '--except-interface=lo', '--pid-file=/var/lib/neutron/dhcp/e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74/pid', '--dhcp-hostsfile=/var/lib/neutron/dhcp/e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74/host', '--dhcp-optsfile=/var/lib/neutron/dhcp/e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74/opts', '--leasefile-ro', '--dhcp-range=tag0,10.1.0.0,static,120s', '--dhcp-lease-max=65536', '--conf-file=', '--domain=openstacklocal'] 2013-11-01 13:47:05.110 21349 TRACE neutron.agent.dhcp_agent Exit code: 99 2013-11-01 13:47:05.110 21349 TRACE neutron.agent.dhcp_agent Stdout: '' 2013-11-01 13:47:05.110 21349 TRACE neutron.agent.dhcp_agent Stderr: 'WARNING:root:Skipping unknown filter class (DnsmasqFilter) specified in filter definitions\nWARNING:root:Skipping unknown filter class (DnsmasqNetnsFilter) specified in filter definitions\nWARNING:root:Skipping unknown filter class (DnsmasqFilter) specified in filter definitions\n/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec qdhcp-e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74 env
NEUTRON_NETWORK_ID=e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-a66f8745-aa --except-interface=lo --pid-file=/var/lib/neutron/dhcp/e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74/host --dhcp-optsfile=/var/lib/neutron/dhcp/e3fffbcf-7e75-4187-b5e9-daa1a5e3bd74/opts --leasefile-ro --dhcp-range=tag0,10.1.0.0,static,120s --dhcp-lease-max=65536 --conf-file= --domain=openstacklocal (no filter matched)\n' 2013-11-01 13:47:05.110 21349 TRACE neutron.agent.dhcp_agent This issue can be worked around by setting “root_helper=sudo”. But I’m still really curious about why this happens and how to solve it, because we know “sudo” is not safe. Anyone have ideas? Thanks. -chen To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1247758/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
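[Editor's note] The "Skipping unknown filter class" warnings in the trace mean the installed dhcp.filters file still names the Grizzly-era DnsmasqFilter/DnsmasqNetnsFilter classes, which the Havana rootwrap no longer ships, so no filter matches the dnsmasq command. A current dhcp.filters declares dnsmasq with a generic filter class instead. The entry below is an illustrative sketch of the format, not the exact line from any release; compare it against the dhcp.filters file shipped with your neutron packages.

```ini
[Filters]
# Illustrative EnvFilter entry: permit running dnsmasq as root with the
# NEUTRON_NETWORK_ID environment variable set (exact shipped line may differ).
dnsmasq: EnvFilter, dnsmasq, root, NEUTRON_NETWORK_ID=
```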
[Yahoo-eng-team] [Bug 1246811] Re: ssh connections to guest time out
*** This bug is a duplicate of bug 1224001 *** https://bugs.launchpad.net/bugs/1224001 ** This bug has been marked a duplicate of bug 1224001 test_network_basic_ops fails waiting for network to become available -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1246811 Title: ssh connections to guest time out Status in OpenStack Neutron (virtual network service): New Bug description: During the tempest scenario test test_network_basic_ops, the step where the test tries to log in to the guest fails with an SSH timeout. With this traceback: Traceback (most recent call last): File tempest/scenario/test_network_basic_ops.py, line 269, in test_network_basic_ops self._check_public_network_connectivity() File tempest/scenario/test_network_basic_ops.py, line 258, in _check_public_network_connectivity raise exc SSHTimeout: Connection to the 172.24.4.232 via SSH timed out. User: cirros, Password: None Logs for the failed run are here: http://logs.openstack.org/88/52988/9/gate/gate-tempest-dsvm-neutron-pg-isolated/2bf5a98/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1246811/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1206925] Re: Creation of the qr interface fails sometimes
This is likely related to the L3 message ordering bug present in Grizzly. This was addressed in the Havana release. If this shows up, we can reopen this bug. ** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1206925 Title: Creation of the qr interface fails sometimes Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: Version: Grizzly 2013.1.1 I have a 3 nodes Grizzly setup with one Controller, Network and Compute(KVM) node. I am using VLAN mode. My use case require me to create network, create instances, terminate instances and terminate networks. So I do a lot of network creation and deletion. It works fine most of the time. But occasionally when an instance is launched, the instance is unable to reach the external network. When this happens I see that the 'qr-x' interface is not created as shown below: root@openstack-dev-network# ip netns exec qrouter-81c474cb-acad-42b4-bc78-875992fa33f6 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric RefUse Iface 0.0.0.0 10.5.60.1 0.0.0.0 UG0 00 qg-81e2a77d-71 1.1.1.0 0.0.0.0 255.255.252.0 U 0 00 qg-81e2a77d-71 root@openstack-dev-network # I see the following error in /var/log/quantum/openvswitch-agent.log Stderr: 'ovs-vsctl: no row qr-fb582b82-f7 in table Interface\n' 2013-07-29 06:14:18ERROR [quantum.agent.linux.ovs_lib] Unable to execute ['ovs-vsctl', '--timeout=2', 'get', 'Interface', 'qr-a4f24e08-aa', 'external_ids']. Exception: Command: ['sudo', '/usr/bin/quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'qr-a4f24e08-aa', 'external_ids'] Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: no row qr-a4f24e08-aa in table Interface\n' And on restarting the quantum services, the interface gets created and the instance is able to reach the external network.
root@openstack-dev-network:~# cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done quantum-dhcp-agent stop/waiting quantum-dhcp-agent start/running, process 20872 quantum-l3-agent stop/waiting quantum-l3-agent start/running, process 20885 quantum-metadata-agent stop/waiting quantum-metadata-agent start/running, process 20894 quantum-plugin-openvswitch-agent stop/waiting quantum-plugin-openvswitch-agent start/running, process 20903 root@openstack-dev-network:/etc/init.d# root@openstack-dev-network# ip netns exec qrouter-81c474cb-acad-42b4-bc78-875992fa33f6 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric RefUse Iface 0.0.0.0 10.5.60.1 0.0.0.0 UG0 00 qg-81e2a77d-71 1.1.1.0 0.0.0.0 255.255.252.0 U 0 00 qg-81e2a77d-71 192.168.3.0 0.0.0.0 255.255.255.0 U 0 00 qr-a4f24e08-aa root@openstack-dev-network # I do have the root_helper in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini. Regards, Balu To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1206925/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1212834] Re: Cisco plugin (grizzly) device connection errors
** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1212834 Title: Cisco plugin (grizzly) device connection errors Status in OpenStack Neutron (virtual network service): Won't Fix Status in neutron grizzly series: Fix Released Bug description: The Cisco plugin in grizzly does not reuse device connections, so it drops XML queries when its limit of 8 connections is reached. Need to reuse netconf connections in the Nexus driver for queries. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1212834/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214132] Re: no config for multiple neutron-server workers
** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1214132 Title: no config for multiple neutron-server workers Status in OpenStack Neutron (virtual network service): Invalid Bug description: Neutron-server processes access the database directly. Since the database connection driver is typically implemented in a library beyond the purview of eventlet’s monkeypatching (i.e., a native python extension like _mysql.so), blocking database calls will block all eventlet coroutines. Since much of what neutron-server does is access the database, a neutron-server process’s handling of requests is effectively serial. To make running multiple neutron-server processes on the same host straightforward, there should be a workers=N option in the [DEFAULT] section of neutron.conf -- just like the osapi_compute_workers=N flag in the [DEFAULT] section of nova.conf. When the option is in effect, N server processes should handle HTTP and RPC requests made to neutron-server. I'm going to submit a patch. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1214132/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
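[Editor's note] The multi-worker idea this bug proposes is the standard pre-fork pattern: run N independent processes so a blocking database call in one cannot stall request handling in the others. The sketch below is illustrative only, not neutron code; it uses Python's multiprocessing to stand in for N server workers, with a simple doubling task standing in for request handling.

```python
# Pre-fork worker sketch (illustrative, not neutron code): a parent
# process starts N workers; each would own its database connection in
# a real server, so one worker blocking on the DB cannot stall the rest.
import multiprocessing


def worker(task_queue, result_queue):
    # Consume tasks until a None sentinel arrives; each task here
    # stands in for one HTTP/RPC request being handled.
    for task in iter(task_queue.get, None):
        result_queue.put(task * 2)


def serve(num_workers, tasks):
    task_q = multiprocessing.Queue()
    result_q = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(task_q, result_q))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    for t in tasks:
        task_q.put(t)
    for _ in procs:
        task_q.put(None)  # one shutdown sentinel per worker
    # Drain results before joining so the queue feeder threads can exit.
    results = sorted(result_q.get() for _ in tasks)
    for p in procs:
        p.join()
    return results


if __name__ == "__main__":
    print(serve(4, [1, 2, 3, 4, 5]))  # [2, 4, 6, 8, 10]
```

The configuration surface the bug asks for is just a workers-style integer in the [DEFAULT] section of neutron.conf, mirroring nova's osapi_compute_workers; the option name that was eventually merged may differ from the bug's proposal.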
[Yahoo-eng-team] [Bug 1224001] Re: test_network_basic_ops fails waiting for network to become available
** Changed in: neutron Status: Fix Released => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1224001 Title: test_network_basic_ops fails waiting for network to become available Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Object Storage (Swift): Invalid Status in Tempest: Invalid Bug description: See http://logs.openstack.org/25/43125/4/gate/gate-tempest-devstack-vm-neutron/40f3725/console.html 2013-09-11 16:20:48.981 | == 2013-09-11 16:20:48.982 | FAIL: tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops[gate,smoke] 2013-09-11 16:20:48.982 | tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops[gate,smoke] 2013-09-11 16:20:48.982 | -- 2013-09-11 16:20:48.982 | _StringException: Empty attachments: 2013-09-11 16:20:48.982 | stderr 2013-09-11 16:20:48.982 | stdout 2013-09-11 16:20:48.982 | 2013-09-11 16:20:48.983 | pythonlogging:'': {{{ 2013-09-11 16:20:48.983 | 2013-09-11 16:05:44,209 Starting new HTTP connection (1): 127.0.0.1 2013-09-11 16:20:48.983 | 2013-09-11 16:05:44,309 Starting new HTTP connection (1): 127.0.0.1 2013-09-11 16:20:48.983 | 2013-09-11 16:05:52,724 Tenant networks not configured to be reachable. 
2013-09-11 16:20:48.983 | }}} 2013-09-11 16:20:48.983 | 2013-09-11 16:20:48.983 | Traceback (most recent call last): 2013-09-11 16:20:48.984 | File tempest/scenario/test_network_basic_ops.py, line 262, in test_network_basic_ops 2013-09-11 16:20:48.984 | self._check_public_network_connectivity() 2013-09-11 16:20:48.984 | File tempest/scenario/test_network_basic_ops.py, line 251, in _check_public_network_connectivity 2013-09-11 16:20:48.984 | self._check_vm_connectivity(ip_address, ssh_login, private_key) 2013-09-11 16:20:48.984 | File tempest/scenario/manager.py, line 579, in _check_vm_connectivity 2013-09-11 16:20:48.984 | timeout=self.config.compute.ssh_timeout), 2013-09-11 16:20:48.984 | File tempest/scenario/manager.py, line 569, in _is_reachable_via_ssh 2013-09-11 16:20:48.985 | return ssh_client.test_connection_auth() 2013-09-11 16:20:48.985 | File tempest/common/ssh.py, line 148, in test_connection_auth 2013-09-11 16:20:48.985 | connection = self._get_ssh_connection() 2013-09-11 16:20:48.985 | File tempest/common/ssh.py, line 76, in _get_ssh_connection 2013-09-11 16:20:48.985 | password=self.password) 2013-09-11 16:20:48.985 | SSHTimeout: Connection to the 172.24.4.229 via SSH timed out. 2013-09-11 16:20:48.986 | User: cirros, Password: None To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1224001/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1238293] Re: Admin for tenant can view ports belonging to other tenants upon executing quantum port-list
The v2 API is well established for over 3 releases now. We should not change this behavior until we decide it is time to offer a v3 API. ** Changed in: neutron Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1238293 Title: Admin for tenant can view ports belonging to other tenants upon executing quantum port-list Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: Currently, if we create two networks, say net1 and net2, for two different tenants, tenant1 and tenant2 respectively, and add ports to these networks, quantum port-list run by an admin user of tenant1 is able to view ports belonging to tenant2. This is not expected behavior. An admin user of tenant1 should be able to view all ports within that tenant, but not those belonging to another tenant. It looks like quantum isn't correctly using the scoped and non-scoped tokens that are passed to it when retrieving port/network info from the quantum database. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1238293/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
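For context, the v2 behavior follows from Neutron's policy engine rather than from keystone token scoping: an admin context matches context_is_admin regardless of tenant, so port-list is global for admins by design. A sketch of the relevant rules as they appear in a typical policy.json (exact rules vary by release):

```json
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
    "get_port": "rule:admin_or_owner"
}
```

Tenant-scoping admins would require changing context_is_admin or the get_port rule, which is the behavior change the team declined above.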
[Yahoo-eng-team] [Bug 1233264] Re: stable branch patches failing in check queue due to missing 'find_resourceid_by_name_or_id'
** Also affects: python-neutronclient Importance: Undecided Status: New ** Changed in: neutron Status: In Progress => Invalid ** Changed in: python-neutronclient Status: New => Fix Released ** Changed in: python-neutronclient Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1233264 Title: stable branch patches failing in check queue due to missing 'find_resourceid_by_name_or_id' Status in OpenStack Neutron (virtual network service): Invalid Status in Python client library for Neutron: Fix Released Bug description: Patches on the stable branches (at least stable/grizzly) are failing in Jenkins because this change removed the 'find_resourceid_by_name_or_id' method from the quantumclient: https://github.com/openstack/python-neutronclient/commit/cbb83121c09f95b00720f494ab5f424612ac207d Here is a test report failure: http://logs.openstack.org/00/48300/1/check/gate-nova-python27/e47c623/console.html It's a static method which looks like it just needs to be proxied from python-neutronclient. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1233264/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1234455] Re: Tenant delete does not clean up the networks or VMs
This is a known issue with all services in OpenStack. Due to the loose coupling between services, there's no way to know a tenant has been removed. This is a cross-project issue and would need a blueprint and design discussion in the keystone project. ** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1234455 Title: Tenant delete does not clean up the networks or VMs Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: If a tenant has networks and VM instances created under it and tenant-delete is issued, the delete should either be rejected because there are objects under the tenant, or those objects should be automatically cleaned up. Neither takes place. To reproduce this bug: - Create a tenant (keystone tenant-create) - Create a network for this tenant (neutron net-create) - Instantiate a VM for this tenant (nova boot) - Now delete the tenant (keystone tenant-delete) This operation succeeds. However, now you are left with a situation where you do not have a tenant, but the networks and VM belonging to it remain active. This whole thing can be done from Horizon as well. The worst part about doing this from Horizon is that you do not see the tenant and networks on Horizon after the tenant is deleted. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1234455/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1211778] Re: NoSuchOptError: no such option: amqp_auto_delete
** Changed in: neutron Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1211778 Title: NoSuchOptError: no such option: amqp_auto_delete Status in OpenStack Neutron (virtual network service): Invalid Bug description: As of 334e646170d2fae302bf3132033706daf66020a6 all my Neutron jobs are failing with the following stack trace: == /var/log/neutron/server.log == 2013-08-13 13:27:36.749 23634 ERROR neutron.service [-] In serve_wsgi() 2013-08-13 13:27:36.749 23634 TRACE neutron.service Traceback (most recent call last): 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/neutron/service.py, line 96, in serve_wsgi 2013-08-13 13:27:36.749 23634 TRACE neutron.service service.start() 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/neutron/service.py, line 65, in start 2013-08-13 13:27:36.749 23634 TRACE neutron.service self.wsgi_app = _run_wsgi(self.app_name) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/neutron/service.py, line 109, in _run_wsgi 2013-08-13 13:27:36.749 23634 TRACE neutron.service app = config.load_paste_app(app_name) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/neutron/common/config.py, line 144, in load_paste_app 2013-08-13 13:27:36.749 23634 TRACE neutron.service app = deploy.loadapp(config:%s % config_path, name=app_name) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 247, in loadapp 2013-08-13 13:27:36.749 23634 TRACE neutron.service return loadobj(APP, uri, name=name, **kw) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 272, in loadobj 2013-08-13 
13:27:36.749 23634 TRACE neutron.service return context.create() 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 710, in create 2013-08-13 13:27:36.749 23634 TRACE neutron.service return self.object_type.invoke(self) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 144, in invoke 2013-08-13 13:27:36.749 23634 TRACE neutron.service **context.local_conf) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/util.py, line 56, in fix_call 2013-08-13 13:27:36.749 23634 TRACE neutron.service val = callable(*args, **kw) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/paste/urlmap.py, line 25, in urlmap_factory 2013-08-13 13:27:36.749 23634 TRACE neutron.service app = loader.get_app(app_name, global_conf=global_conf) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 350, in get_app 2013-08-13 13:27:36.749 23634 TRACE neutron.service name=name, global_conf=global_conf).create() 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 710, in create 2013-08-13 13:27:36.749 23634 TRACE neutron.service return self.object_type.invoke(self) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 144, in invoke 2013-08-13 13:27:36.749 23634 TRACE neutron.service **context.local_conf) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/util.py, line 56, in fix_call 2013-08-13 13:27:36.749 23634 TRACE 
neutron.service val = callable(*args, **kw) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/neutron/auth.py, line 59, in pipeline_factory 2013-08-13 13:27:36.749 23634 TRACE neutron.service app = loader.get_app(pipeline[-1]) 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 350, in get_app 2013-08-13 13:27:36.749 23634 TRACE neutron.service name=name, global_conf=global_conf).create() 2013-08-13 13:27:36.749 23634 TRACE neutron.service File /usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py, line 710, in create 2013-08-13 13:27:36.749 23634 TRACE neutron.service return
[Yahoo-eng-team] [Bug 1099099] Re: OVS plugin: when admin status is set to False, port status is still ACTIVE
** Changed in: neutron/folsom Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1099099 Title: OVS plugin: when admin status is set to False, port status is still ACTIVE Status in OpenStack Neutron (virtual network service): Fix Released Status in neutron folsom series: Won't Fix Bug description: Updating the admin status should set the port STATUS accordingly. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1099099/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1083944] Re: RPC exchange name defaults to 'openstack'
** Changed in: neutron/folsom Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1083944 Title: RPC exchange name defaults to 'openstack' Status in OpenStack Neutron (virtual network service): Fix Released Status in neutron folsom series: Won't Fix Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) folsom series: Fix Released Status in OpenStack Manuals: Fix Released Bug description: Nova on stable/folsom is using 'openstack' as its exchange name. Openstack-common was updated on stable/folsom with this change: https://review.openstack.org/#/c/16532/ Which brought in this change that removed the default 'nova' exchange config option: https://review.openstack.org/#/c/12876/ Make projects define 'control_exchange'. The 'control_exchange' option needs to have a project-specific default value. Just don't register this option and expect it to be registered by the project using this code, at least for now. ** IMPORTANT NOTE WHEN IMPORTING THIS CHANGE ** If you are importing this change into a project that uses rpc, you must add the control_exchange option in your code! *** Change-Id: Ida5a8637c419e709bbf22fcad57b0f11c31bb959 But stable/folsom nova.conf was never updated. The control_exchange option has been removed from nova.conf on master too now: https://review.openstack.org/#/c/15940/ Found this debugging a situation where a child cell couldn't receive messages from a parent cell. Strangely, setting control_exchange='openstack' fixed our problem. We're still trying to work out *why* there was a mismatch in exchange names at all, given that all cells were running the same nova code and nearly identical configs. 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1083944/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
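The workaround the reporter describes is to pin the exchange name explicitly rather than rely on a per-project default. A sketch of what that looks like in a service's config file (illustrative fragment; the option lives in the [DEFAULT] section of each service's conf, and 'openstack' matches the common default mentioned above):

```ini
[DEFAULT]
# Pin the RPC exchange name so that every service -- and every cell --
# agrees on it, regardless of what default its code registers.
control_exchange = openstack
```

All producers and consumers on the same message bus must use the same value, or messages are published to an exchange nobody consumes from, which is exactly the cells symptom described in the bug.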
[Yahoo-eng-team] [Bug 1225232] Re: add qpid-python to neutron's requirements.txt
qpid is an optional dependency of Neutron (and really OpenStack in general). If you want to enable it, these instructions must be followed: https://wiki.openstack.org/wiki/QpidSupport. ** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1225232 Title: add qpid-python to neutron's requirements.txt Status in OpenStack Neutron (virtual network service): Won't Fix Bug description: If one tries using qpid instead of rabbitmq with heat, when neutron-server starts up it will complain that the qpid.messaging library is missing. To fix this, qpid-python should be included in requirements.txt. 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config File /opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_qpid.py, line 753, in create_connection 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config rpc_amqp.get_connection_pool(conf, Connection)) 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config File /opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/openstack/common/rpc/amqp.py, line 522, in create_connection 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config return ConnectionContext(conf, connection_pool, pooled=not new) 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config File /opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/openstack/common/rpc/amqp.py, line 126, in __init__ 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config server_params=server_params) 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config File /opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_qpid.py, line 432, in __init__ 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config raise ImportError("Failed to import qpid.messaging") 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config ImportError: Failed to 
import qpid.messaging 2013-09-13 23:29:56.576 8875 TRACE neutron.common.config To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1225232/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
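The ImportError above is the result of treating qpid as an optional backend: qpid.messaging is only imported when the qpid RPC driver is actually selected, which is why qpid-python is not in requirements.txt. A hedged sketch of that optional-import pattern (load_optional_backend is an illustrative name, not the oslo/Neutron API):

```python
import importlib

def load_optional_backend(module_name):
    """Return the module for an optional dependency, or None if absent.

    The import is attempted lazily, so deployments that never select
    this backend need not install its library.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None
```

A caller would raise a descriptive error only when the backend is both configured and missing, mirroring the impl_qpid traceback quoted above.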
[Yahoo-eng-team] [Bug 1223568] Re: pep8 requirements failing
Closing this, as it has resolved itself. This issue was mostly related to an upstream library. ** Changed in: neutron Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1223568 Title: pep8 requirements failing Status in OpenStack Neutron (virtual network service): Invalid Bug description: Requirements seem to be failing on the version check for six 2013-09-10 20:24:53.377 | using tox.ini: /home/jenkins/workspace/gate-neutron-pep8/tox.ini 2013-09-10 20:24:53.377 | using tox-1.6.1 from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc 2013-09-10 20:24:53.378 | GLOB sdist-make: /home/jenkins/workspace/gate-neutron-pep8/setup.py 2013-09-10 20:24:53.388 | /home/jenkins/workspace/gate-neutron-pep8$ /usr/bin/python /home/jenkins/workspace/gate-neutron-pep8/setup.py sdist --formats=zip --dist-dir /home/jenkins/workspace/gate-neutron-pep8/.tox/dist /home/jenkins/workspace/gate-neutron-pep8/.tox/log/tox-0.log 2013-09-10 20:24:56.957 | pep8 create: /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8 2013-09-10 20:24:56.968 | /home/jenkins/workspace/gate-neutron-pep8/.tox$ /usr/bin/python /usr/local/lib/python2.7/dist-packages/virtualenv.py --setuptools --python /usr/bin/python pep8 /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/log/pep8-0.log 2013-09-10 20:24:58.578 | pep8 installdeps: -r/home/jenkins/workspace/gate-neutron-pep8/requirements.txt, -r/home/jenkins/workspace/gate-neutron-pep8/test-requirements.txt, setuptools_git=0.4 2013-09-10 20:24:58.586 | /home/jenkins/workspace/gate-neutron-pep8$ /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/bin/pip install --pre -r/home/jenkins/workspace/gate-neutron-pep8/requirements.txt -r/home/jenkins/workspace/gate-neutron-pep8/test-requirements.txt setuptools_git=0.4 /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/log/pep8-1.log 2013-09-10 20:25:52.038 | pep8 inst: 
/home/jenkins/workspace/gate-neutron-pep8/.tox/dist/neutron-2013.2.a552.gaa68a3d.zip 2013-09-10 20:25:52.058 | /home/jenkins/workspace/gate-neutron-pep8$ /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/bin/pip install --pre /home/jenkins/workspace/gate-neutron-pep8/.tox/dist/neutron-2013.2.a552.gaa68a3d.zip /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/log/pep8-2.log 2013-09-10 20:25:57.902 | pep8 runtests: commands[0] | flake8 2013-09-10 20:25:57.907 | /home/jenkins/workspace/gate-neutron-pep8$ /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/bin/flake8 2013-09-10 20:25:58.112 | Traceback (most recent call last): 2013-09-10 20:25:58.112 | File .tox/pep8/bin/flake8, line 9, in module 2013-09-10 20:25:58.112 | load_entry_point('flake8==2.0', 'console_scripts', 'flake8')() 2013-09-10 20:25:58.112 | File /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/local/lib/python2.7/site-packages/flake8/main.py, line 21, in main 2013-09-10 20:25:58.113 | flake8_style = get_style_guide(parse_argv=True, config_file=DEFAULT_CONFIG) 2013-09-10 20:25:58.113 | File /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/local/lib/python2.7/site-packages/flake8/engine.py, line 75, in get_style_guide 2013-09-10 20:25:58.114 | kwargs['parser'], options_hooks = get_parser() 2013-09-10 20:25:58.114 | File /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/local/lib/python2.7/site-packages/flake8/engine.py, line 38, in get_parser 2013-09-10 20:25:58.114 | (extensions, parser_hooks, options_hooks) = _register_extensions() 2013-09-10 20:25:58.115 | File /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/local/lib/python2.7/site-packages/flake8/engine.py, line 24, in _register_extensions 2013-09-10 20:25:58.115 | checker = entry.load() 2013-09-10 20:25:58.115 | File /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/local/lib/python2.7/site-packages/pkg_resources.py, line 2259, in load 2013-09-10 20:25:58.116 | if require: self.require(env, installer) 2013-09-10 20:25:58.116 | 
File /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/local/lib/python2.7/site-packages/pkg_resources.py, line 2272, in require 2013-09-10 20:25:58.117 | working_set.resolve(self.dist.requires(self.extras),env,installer))) 2013-09-10 20:25:58.117 | File /home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/local/lib/python2.7/site-packages/pkg_resources.py, line 630, in resolve 2013-09-10 20:25:58.118 | raise VersionConflict(dist,req) # XXX put more info here 2013-09-10 20:25:58.119 | pkg_resources.VersionConflict: (six 1.4.1 (/home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/lib/python2.7/site-packages), Requirement.parse('six1.4.0')) 2013-09-10 20:25:58.131 | ERROR: InvocationError: '/home/jenkins/workspace/gate-neutron-pep8/.tox/pep8/bin/flake8' To manage notifications about this bug go to:
[Yahoo-eng-team] [Bug 1209011] Re: L3 agent can't handle updates that change floating ip id
** Changed in: neutron Status: Fix Released => In Progress ** Changed in: neutron Milestone: havana-3 => havana-rc1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1209011 Title: L3 agent can't handle updates that change floating ip id Status in OpenStack Neutron (virtual network service): In Progress Bug description: The problem occurs when a network update comes along where a new floating ip id carries the same (reused) IP address as an old floating IP. In short, same address, different floating ip id. We've seen this occur in testing where the floating ip free pool has gotten small and creates/deletes come quickly. What happens is the agent skips calling ip addr add for the address since the address already appears. It then calls ip addr del to remove the address from the qrouter's gateway interface. It shouldn't have done this and the floating ip is left in a non-working state. Later, when the floating ip is disassociated from the port, the agent attempts to remove the address from the device which results in an exception which is caught above. The exception prevents the iptables code from removing the DNAT address for the floating ip. 
2013-07-23 09:20:06.094 3109 DEBUG quantum.agent.linux.utils [-] Running command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-2b75022a-3721-443f-af99-ec648819d080', 'ip', '-4', 'addr', 'del', '15.184.103.155/32', 'dev', 'qg-c847c5a7-62'] execute /usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py:42 2013-07-23 09:20:06.179 3109 DEBUG quantum.agent.linux.utils [-] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-2b75022a-3721-443f-af99-ec648819d080', 'ip', '-4', 'addr', 'del', '15.184.103.155/32', 'dev', 'qg-c847c5a7-62'] Exit code: 2 Stdout: '' Stderr: 'RTNETLINK answers: Cannot assign requested address\n' execute /usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py:59 The DNAT entries in the iptables stay in a bad state from this point on sometimes preventing other floating ip addresses from being attached to the same instance. I have a fix for this that is currently in testing. Will submit for review when it is ready. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1209011/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
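The failure mode described above comes from reconciling floating IPs per-event, so a reused address is first skipped (already present) and then deleted when the stale entry is processed. Reconciling the desired address set against the configured one in a single pass avoids the churn, because a reused address appears in both sets and is left untouched. An illustrative sketch of that set-difference approach (not the actual l3-agent code):

```python
def reconcile_floating_ips(configured, desired):
    """Compute (to_add, to_remove) for the qg- gateway device.

    configured: addresses currently present on the interface
    desired: addresses the current floating IP list says should exist
    A floating ip id change that reuses the same address leaves that
    address in both sets, so it is neither re-added nor deleted.
    """
    configured, desired = set(configured), set(desired)
    return desired - configured, configured - desired
```

The agent would then run `ip addr add` for to_add and `ip addr del` for to_remove, and rebuild the DNAT rules from the desired set alone.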
[Yahoo-eng-team] [Bug 1220505] Re: IP will be allocated automate even it is a floating IP
If the network is shared or owned by the tenant and you boot without specifying a network, the vm will be given ports on all available networks. (This is the default behavior in Grizzly.) ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1220505 Title: IP will be allocated automate even it is a floating IP Status in OpenStack Neutron (virtual network service): Invalid Bug description: I'm working under Centos 6.4 + Grizzly. I have created two networks, one for the instances' private network, and another one for the public network (for floating IPs). Everything works fine. But, if I create an instance without pointing out the private network id, such as: nova boot --flavor m1.tiny --image c4302a6f-196d-4d3e-be64-c9413e8d1f71 test1 The instance will start with both networks: | d99fd089-5afe-4397-b51b-767485b43383 | test1 | ACTIVE | public=192.168.14.29; private=10.1.0.243 | The network works fine, but I don't want the instance to have the public IP. And I think, because I already assigned this public network to a router, it is clear that it is not an auto-assigned IP. Also, if it can be auto-assigned to an instance, it should be a floating IP, but not like what it is now. Any ideas? Thanks. -chen To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1220505/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1182616] Re: Sudoers / rootwrap - no tty present and no askpass program specified
Since the fix is outside Neutron, closing this ticket. ** Changed in: neutron Status: Fix Committed => Invalid ** Changed in: neutron Milestone: havana-3 => None -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1182616 Title: Sudoers / rootwrap - no tty present and no askpass program specified Status in OpenStack Neutron (virtual network service): Invalid Bug description: 2013-05-21 19:59:54 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'ip', 'netns', 'exec', 'qdhcp-1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5', 'quantum-ns-metadata-proxy', '--pid_file=/var/lib/quantum/external/pids/1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5.pid', '--network_id=1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5', '--state_path=/var/lib/quantum', '--metadata_port=80', '--debug', '--verbose', '--log-file=quantum-ns-metadata-proxy1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5.log', '--log-dir=/var/log/quantum'] Exit code: 1 Stdout: '' Stderr: 'sudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: 3 incorrect password attempts\n' 2013-05-21 19:59:54 ERROR [quantum.openstack.common.rpc.amqp] Exception during message handling Traceback (most recent call last): File /usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py, line 430, in _process_data rval = self.proxy.dispatch(ctxt, version, method, **args) File /usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/dispatcher.py, line 133, in dispatch return getattr(proxyobj, method)(ctxt, **kwargs) File /usr/lib/python2.7/dist-packages/quantum/openstack/common/lockutils.py, line 242, in inner retval = f(*args, **kwargs) File /usr/lib/python2.7/dist-packages/quantum/agent/dhcp_agent.py, line 234, in network_create_end self.enable_dhcp_helper(network_id) File 
/usr/lib/python2.7/dist-packages/quantum/agent/dhcp_agent.py, line 188, in enable_dhcp_helper self.enable_isolated_metadata_proxy(network) File /usr/lib/python2.7/dist-packages/quantum/agent/dhcp_agent.py, line 329, in enable_isolated_metadata_proxy pm.enable(callback) File /usr/lib/python2.7/dist-packages/quantum/agent/linux/external_process.py, line 55, in enable ip_wrapper.netns.execute(cmd) File /usr/lib/python2.7/dist-packages/quantum/agent/linux/ip_lib.py, line 407, in execute check_exit_code=check_exit_code) File /usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py, line 61, in execute raise RuntimeError(m) RuntimeError: Command: ['sudo', 'ip', 'netns', 'exec', 'qdhcp-1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5', 'quantum-ns-metadata-proxy', '--pid_file=/var/lib/quantum/external/pids/1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5.pid', '--network_id=1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5', '--state_path=/var/lib/quantum', '--metadata_port=80', '--debug', '--verbose', '--log-file=quantum-ns-metadata-proxy1f93a3a9-a4fa-473a-a1b6-23aee3a92ca5.log', '--log-dir=/var/log/quantum'] Exit code: 1 Stdout: '' Stderr: 'sudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: 3 incorrect password attempts\n' To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1182616/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1209042] Re: Neutron cannot load extension
If the plugin does not support the extension it will never be loaded. The supported extensions are declared in supported_extension_aliases. Rather than develop your own OVS QoS plugin, I'd recommend you contact the assignee of the h3 blueprint who is building an extension for the community. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1209042 Title: Neutron cannot load extension Status in OpenStack Neutron (virtual network service): Invalid Bug description: I need to develop QoS for the OVS plugin. So, I put ‘Qos.py’ in '/neutron/plugins/openvswitch/extensions' and set 'api_extensions_path = /neutron/plugins/openvswitch/extensions' in 'api-paste.ini'. But it could not be loaded. I traced the loading process and found the path could not be recognized by 'os.path.exists()'. In file 'neutron/api/extensions.py', I find the code: def get_extensions_path(): paths = ':'.join(neutron.extensions.__path__) if cfg.CONF.api_extensions_path: paths = ':'.join([cfg.CONF.api_extensions_path, paths]) return paths According to the code, I must configure an absolute path, such as api_extensions_path = /usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/extensions. However, I think it's complicated. I have changed the code to add the prefix path to 'api_extensions_path': def get_extensions_path(): #spch paths = ':'.join(quantum.extensions.__path__) #get the prefix path prefix = "/".join(quantum.__path__[0].split("/")[:-1]) if cfg.CONF.api_extensions_path: #split the api_extensions_path by ":" ext_paths = cfg.CONF.api_extensions_path.split(":") #add prefix for each path for i in range(len(ext_paths)): ext_paths[i] = prefix + ext_paths[i] ext_paths.append(paths) paths = ":".join(ext_paths) return paths I do not know whether it's a bug or I do not understand the question. So, please help me. 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1209042/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
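The reporter's change above can be sketched as a self-contained function (hedged reconstruction: the parameters are illustrative stand-ins -- real Neutron derives the defaults and prefix from the installed package rather than taking them as arguments):

```python
import os

def get_extensions_path(configured, default_paths, prefix):
    """Prefix each relative api_extensions_path entry with the package
    install prefix, so operators need not hard-code absolute
    site-packages paths in api-paste.ini.

    configured: colon-separated api_extensions_path config value
    default_paths: the built-in extension directories
    prefix: directory containing the installed neutron package
    """
    paths = list(default_paths)
    if configured:
        paths = [os.path.join(prefix, p.lstrip('/'))
                 for p in configured.split(':')] + paths
    return ':'.join(paths)
```

With configured='/neutron/plugins/openvswitch/extensions' and prefix='/usr/lib/python2.7/dist-packages', the relative entry resolves to the absolute directory os.path.exists() can actually find.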
[Yahoo-eng-team] [Bug 1207196] Re: Documentation tells that to create subnet and port (using xml formatted body), no request body is required while without request body subnet or port can't be created
** Changed in: openstack-api-site Status: New => Confirmed ** Changed in: neutron Status: New => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1207196 Title: Documentation tells that to create subnet and port (using xml formatted body), no request body is required while without request body subnet or port can't be created Status in OpenStack Neutron (virtual network service): Opinion Status in OpenStack API documentation site: Confirmed Bug description: On the page http://api.openstack.org/api-ref.html#netconn-api the Subnet and Port sections say that subnet/port creation does not require a request body, while in fact no subnet/port can be created without one. When I tried subnet creation without a body, a 400 (bad request) response with a 'body required' error was displayed. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1207196/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp