[Yahoo-eng-team] [Bug 1793389] Re: Upgrade to Ocata: Keystone Intermittent Missing 'options' Key
** Also affects: keystone
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1793389

Title:
  Upgrade to Ocata: Keystone Intermittent Missing 'options' Key

Status in OpenStack Identity (keystone):
  New
Status in openstack-ansible:
  Fix Released

Bug description:
  During upgrades of Newton-EOL AIOs to Ocata, Keystone installation
  fails at the "Ensure service tenant" play of the os-keystone_install.
  This occurs using the provided run-upgrade.sh script. Keystone logs
  are thus:

  INFO keystone.common.wsgi [req-11844ac2-f2d5-46b6-986d-05019432f264 - - - - -] HEAD http://aio1-keystone-container-14a3e1ad:5000/
  DEBUG keystone.middleware.auth [req-6523488f-be1a-4ba7-a264-6b6b8ca4c936 - - - - -] There is either no auth token in the request or the certificate issuer is not trusted. No auth context will be set. fill_context /openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/middleware/auth.py:188
  INFO keystone.common.wsgi [req-6523488f-be1a-4ba7-a264-6b6b8ca4c936 - - - - -] POST http://172.29.236.66:35357/v3/auth/tokens
  ERROR keystone.common.wsgi [req-6523488f-be1a-4ba7-a264-6b6b8ca4c936 - - - - -] 'options'
  ERROR keystone.common.wsgi Traceback (most recent call last):
  ERROR keystone.common.wsgi   File "/openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in __call__
  ERROR keystone.common.wsgi     result = method(req, **params)
  ERROR keystone.common.wsgi   File "/openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/auth/controllers.py", line 132, in authenticate_for_token
  ERROR keystone.common.wsgi     auth_context['user_id'], method_names_set):
  ERROR keystone.common.wsgi   File "/openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/auth/core.py", line 377, in check_auth_methods_against_rules
  ERROR keystone.common.wsgi     mfa_rules = user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
  ERROR keystone.common.wsgi KeyError: 'options'

  It appears that the sql identity backend ensures an 'options' key
  should exist with .../keystone/identity/backends/sql_schema.py:225,
  but obviously that code is not being hit. It should be noted that
  rerunning the install process causes it to be successful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1793389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
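The failure above is a plain KeyError on user_ref['options'] when a migrated user record comes back without that key. The sketch below (a hypothetical illustration of defensive access with dict.get(), not keystone's actual fix; the option-name constant is a stand-in for ro.MFA_RULES_OPT.option_name) shows how the lookup can tolerate the missing key:

```python
# Hypothetical sketch, not keystone's code: MFA_RULES_OPT_NAME stands in
# for ro.MFA_RULES_OPT.option_name.
MFA_RULES_OPT_NAME = 'multi_factor_auth_rules'


def get_mfa_rules(user_ref):
    """Return the user's MFA rules, tolerating a missing 'options' key.

    During the Newton -> Ocata upgrade the identity backend can return
    a user_ref without the 'options' key, so user_ref['options'] raises
    KeyError.  Chained dict.get() calls treat that case as "no MFA
    rules configured" instead of crashing token issuance.
    """
    return user_ref.get('options', {}).get(MFA_RULES_OPT_NAME, [])


# A user record as it might look before the options data is populated:
legacy_user = {'id': 'abc123', 'name': 'admin'}
print(get_mfa_rules(legacy_user))  # -> []
```

This only papers over the symptom; the bug report's point is that the backend is supposed to guarantee the key exists in the first place.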
[Yahoo-eng-team] [Bug 1670419] Re: placement_database config option help is wrong
** Also affects: openstack-ansible
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670419

Title:
  placement_database config option help is wrong

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in openstack-ansible:
  In Progress

Bug description:
  The help on the [placement_database] config options is wrong: it
  mentions Ocata 14.0.0, but 14.0.0 is actually Newton; Ocata was
  15.0.0:

  "# The *Placement API Database* is a separate database which is used
  for the new # placement-api service. In Ocata release (14.0.0) this
  database is optional:"

  It also has some scary words about configuring it with a separate
  database so you don't have to deal with data migration issues later to
  migrate data from the nova_api database to a separate placement
  database, but the placement_database options are not actually used in
  code. They will be when this blueprint is complete:

  https://blueprints.launchpad.net/nova/+spec/optional-placement-database

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1670419/+subscriptions
[Yahoo-eng-team] [Bug 1718356] Re: Include default config files in python wheel
@Matt I'll be patching both neutron and glance to include more files or
to optimise the implementation. I will be adding more projects as I go
through them - I ended up getting pulled into something else yesterday
before completing this.

** Also affects: cinder
   Importance: Undecided
       Status: New

** Also affects: keystone
   Importance: Undecided
       Status: New

** Changed in: cinder
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: keystone
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: barbican
   Importance: Undecided
       Status: New

** Changed in: barbican
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: designate
   Importance: Undecided
       Status: New

** Changed in: designate
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: heat
   Importance: Undecided
       Status: New

** Changed in: heat
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: ironic
   Importance: Undecided
       Status: New

** Changed in: ironic
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: octavia
   Importance: Undecided
       Status: New

** Changed in: octavia
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: magnum
   Importance: Undecided
       Status: New

** Also affects: sahara
   Importance: Undecided
       Status: New

** Also affects: trove
   Importance: Undecided
       Status: New

** Changed in: magnum
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: trove
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: sahara
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title:
  Include default config files in python wheel

Status in Barbican:
  New
Status in Cinder:
  New
Status in Designate:
  New
Status in Glance:
  New
Status in OpenStack Heat:
  New
Status in Ironic:
  New
Status in OpenStack Identity (keystone):
  New
Status in Magnum:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in octavia:
  New
Status in openstack-ansible:
  New
Status in Sahara:
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  The projects which deploy OpenStack from source or using python wheels
  currently have to either carry templates for api-paste, policy and
  rootwrap files or need to source them from git during deployment.
  This results in some rather complex mechanisms which could be
  radically simplified by simply ensuring that all the same files are
  included in the built wheel.

  A precedent for this has already been set in neutron [1] and glance
  [2] through the use of the data_files option in the files section of
  setup.cfg.

  [1] https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
  [2] https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21

  This bug will be used for a cross-project implementation of patches to
  normalise the implementation across the OpenStack projects. Hopefully
  the result will be a consistent implementation across all the major
  projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1718356/+subscriptions
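The data_files approach the bug proposes looks roughly like the fragment below, modelled on the neutron and glance setup.cfg examples linked above; the project name and file paths here are illustrative, not any project's actual layout:

```ini
# Illustrative setup.cfg fragment (example paths): data_files ships the
# default configuration templates inside the built wheel, so deployment
# tooling no longer has to fetch them from git.
[files]
packages =
    exampleproject
data_files =
    etc/exampleproject =
        etc/api-paste.ini
        etc/policy.json
    etc/exampleproject/rootwrap.d =
        etc/rootwrap.d/*
```

Each `target = source` pair installs the listed source files into the target directory relative to the installation prefix when the wheel is installed.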
[Yahoo-eng-team] [Bug 1718356] [NEW] Include default config files in python wheel
Public bug reported:

The projects which deploy OpenStack from source or using python wheels
currently have to either carry templates for api-paste, policy and
rootwrap files or need to source them from git during deployment. This
results in some rather complex mechanisms which could be radically
simplified by simply ensuring that all the same files are included in
the built wheel.

A precedent for this has already been set in neutron [1] and glance [2]
through the use of the data_files option in the files section of
setup.cfg.

[1] https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
[2] https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21

This bug will be used for a cross-project implementation of patches to
normalise the implementation across the OpenStack projects. Hopefully
the result will be a consistent implementation across all the major
projects.

** Affects: glance
   Importance: Undecided
     Assignee: Jesse Pretorius (jesse-pretorius)
       Status: New

** Affects: neutron
   Importance: Undecided
     Assignee: Jesse Pretorius (jesse-pretorius)
       Status: New

** Affects: nova
   Importance: Undecided
     Assignee: Jesse Pretorius (jesse-pretorius)
       Status: New

** Affects: openstack-ansible
   Importance: Undecided
     Assignee: Jesse Pretorius (jesse-pretorius)
       Status: New

** Also affects: neutron
   Importance: Undecided
       Status: New

** Also affects: glance
   Importance: Undecided
       Status: New

** Changed in: neutron
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: glance
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: nova
   Importance: Undecided
       Status: New

** Changed in: nova
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title:
  Include default config files in python wheel

Status in Glance:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in openstack-ansible:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1718356/+subscriptions
[Yahoo-eng-team] [Bug 1698900] [NEW] DB check appears to not be working right
Public bug reported:

Using current master of keystone, executing keystone-manage db_sync
--check seems to always return an RC of 2 regardless of the steps
previously performed. This happens until the contract is done, then it
returns 0.

Steps to reproduce:

root@keystone1:/# mysqladmin drop keystone; mysql create keystone
root@keystone1:/# keystone-manage db_sync --check; echo $?
2
root@keystone1:/# keystone-manage db_sync --expand; echo $?
0
root@keystone1:/# keystone-manage db_sync --check; echo $?
2
root@keystone1:/# keystone-manage db_sync --migrate; echo $?
0
root@keystone1:/# keystone-manage db_sync --check; echo $?
2
root@keystone1:/# keystone-manage db_sync --contract; echo $?
0
root@keystone1:/# keystone-manage db_sync --check; echo $?
0

Not getting the right return codes or advice from the check can spell
disaster for automation that uses it, or for humans following the
documented migration process.

** Affects: keystone
   Importance: High
       Status: Confirmed

** Tags: sql

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1698900

Title:
  DB check appears to not be working right

Status in OpenStack Identity (keystone):
  Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1698900/+subscriptions
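The automation this bug worries about dispatches on those return codes. A minimal sketch, assuming the exit-code convention described in keystone's upgrade documentation (0 = up to date, 2 = expand needed, 3 = migrate needed, 4 = contract needed; those meanings are an assumption here, and the bug above is precisely that --check kept returning 2 for every intermediate state):

```python
# Map `keystone-manage db_sync --check` exit codes to the next action.
# Code meanings are assumed from keystone's upgrade docs: 0 = up to
# date, 2 = expand needed, 3 = migrate needed, 4 = contract needed.
NEXT_STEP = {
    0: None,  # nothing left to do
    2: ['db_sync', '--expand'],
    3: ['db_sync', '--migrate'],
    4: ['db_sync', '--contract'],
}


def next_step(rc):
    """Return the keystone-manage arguments to run next, or None."""
    try:
        return NEXT_STEP[rc]
    except KeyError:
        # An unmapped code is exactly the ambiguity the bug complains
        # about: automation cannot safely guess what to do.
        raise RuntimeError('unexpected db_sync --check exit code: %d' % rc)


# Real usage (requires keystone installed):
#   rc = subprocess.call(['keystone-manage', 'db_sync', '--check'])
#   step = next_step(rc)
```

If --check always returns 2, this kind of dispatcher would re-run --expand forever, which is why distinguishable codes matter.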
[Yahoo-eng-team] [Bug 1690756] [NEW] cache 'backend' argument description is ambiguous
Public bug reported:

The oslo.cache backend argument description currently states:

  "Dogpile.cache backend module. It is recommended that Memcache or
  Redis (dogpile.cache.redis) be used in production deployments. For
  eventlet-based or highly threaded servers, Memcache with pooling
  (oslo_cache.memcache_pool) is recommended. For low thread servers,
  dogpile.cache.memcached is recommended. Test environments with a
  single instance of the server can use the dogpile.cache.memory
  backend."

So the dogpile.cache.memcached/dogpile.cache.redis backends should be
used for production deployments, but the dogpile cache is recommended
for low thread servers and the oslo_cache.memcache_pool should be used
for high thread servers. I don't understand what the actual
recommendation is here.

For a production deployment of a service using uwsgi and a web server,
what is the recommendation? For a production deployment of a service
using uwsgi and no web server, what is the recommendation? For a
production deployment of a service using eventlet, what is the
recommendation?

Using keystone as an example, the example config file has the same
content which does not really help to clarify anything:
https://github.com/openstack/keystone/blob/b7bd6e301964d393ac6835111a08bbf15ba73bc0/etc/keystone.conf.sample#L514-L520

** Affects: keystone
   Importance: Undecided
       Status: New

** Affects: oslo.cache
   Importance: Undecided
       Status: New

** Also affects: keystone
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1690756

Title:
  cache 'backend' argument description is ambiguous

Status in OpenStack Identity (keystone):
  New
Status in oslo.cache:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1690756/+subscriptions
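For context, the option being discussed is set in the service's [cache] section. An illustrative keystone.conf fragment selecting the pooled memcache backend that the help text recommends for eventlet-based or highly threaded servers (server addresses are placeholders):

```ini
# Illustrative keystone.conf fragment; server addresses are examples.
[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.10:11211,192.0.2.11:11211
```

The bug's complaint is that nothing in the help text says which of the interchangeable `backend` values belongs to which deployment model.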
[Yahoo-eng-team] [Bug 1642212] [NEW] RFE: keystone-manage db_sync --check
Public bug reported:

In the automation of deployments and upgrades it would be useful to be
able to check whether there are any database actions outstanding so
that the action can be determined and executed. Effectively I'm
thinking something along the lines of this experience:

Operator (or automation tool) executes: keystone-manage db_sync --check

The tool checks the db state and returns whether there are any
migrations to execute (ie --expand), whether there is a --migrate
outstanding, whether there is a --contract outstanding, or any
combination of the above. If there is nothing left to do, that should
be reported too.

Ideally the output should take two forms:

1 - stdout messages... obviously this is useful when executing this by
    hand
2 - return codes... this is very useful when executing via automation
    tooling

The return codes need to be actionable - ie I must know which actions
are required based on the return code with no ambiguity.

** Affects: keystone
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1642212

Title:
  RFE: keystone-manage db_sync --check

Status in OpenStack Identity (keystone):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1642212/+subscriptions
[Yahoo-eng-team] [Bug 1361235] Re: visit horizon failure because of import module failure
** Also affects: openstack-ansible
   Importance: Undecided
       Status: New

** Changed in: openstack-ansible
       Status: New => In Progress

** Changed in: openstack-ansible
     Assignee: (unassigned) => Donovan Francesco (donovan-francesco)

** Changed in: openstack-ansible
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361235

Title:
  visit horizon failure because of import module failure

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in openstack-ansible:
  In Progress
Status in osprofiler:
  In Progress
Status in python-mistralclient:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  1. Use TripleO to deploy both the undercloud and the overcloud, and
     enable horizon when building images.
  2. Visiting the horizon portal always fails, with the following
     errors in horizon_error.log:

  [Wed Aug 20 01:45:58.441221 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] mod_wsgi (pid=5035): Exception occurred processing WSGI script '/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/django.wsgi'.
[Wed Aug 20 01:45:58.441273 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] Traceback (most recent call last): [Wed Aug 20 01:45:58.441294 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 187, in __call__ [Wed Aug 20 01:45:58.449979 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] self.load_middleware() [Wed Aug 20 01:45:58.45 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/base.py", line 44, in load_middleware [Wed Aug 20 01:45:58.450556 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] for middleware_path in settings.MIDDLEWARE_CLASSES: [Wed Aug 20 01:45:58.450576 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__ [Wed Aug 20 01:45:58.454248 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] self._setup(name) [Wed Aug 20 01:45:58.454269 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py", line 49, in _setup [Wed Aug 20 01:45:58.454305 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] self._wrapped = Settings(settings_module) [Wed Aug 20 01:45:58.454319 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init __.py", line 128, in __init__ [Wed Aug 20 01:45:58.454338 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] mod = importlib.import_module(self.SETTINGS_MODULE) [Wed Aug 20 01:45:58.454350 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module [Wed Aug 20 01:45:58.462806 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] __import__(name) [Wed Aug 20 01:45:58.462826 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py", line 28, in [Wed Aug 20 01:45:58.467136 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] from openstack_dashboard import exceptions [Wed Aug 20 01:45:58.467156 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/exceptions.py", line 22, in [Wed Aug 20 01:45:58.467667 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] from keystoneclient import exceptions as keystoneclient [Wed Aug 20 01:45:58.467685 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/__init__.py", line 28, in [Wed Aug 20 01:45:58.472968 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] from keystoneclient import client [Wed Aug 20 01:45:58.472989 2014] [:error] [pid 5035:tid 3038755648] [remote 10.74.104.27:54198] File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/client.py", line 13, in [Wed Aug 20 01:45:58.473833 2014] [:error] [pid 5035:tid 3038755648]
[Yahoo-eng-team] [Bug 1640319] Re: AttributeError: 'module' object has no attribute 'convert_to_boolean'
** Changed in: openstack-ansible
       Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640319

Title:
  AttributeError: 'module' object has no attribute 'convert_to_boolean'

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress
Status in openstack-ansible:
  Fix Released
Status in vmware-nsx:
  New

Bug description:
  With the latest neutron master code, the neutron service q-svc could
  not start due to the following error:

  2016-11-08 21:54:39.435 DEBUG oslo_concurrency.lockutils [-] Lock "manager" released by "neutron.manager._create_instance" :: held 1.467s from (pid=18534) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
  2016-11-08 21:54:39.435 ERROR neutron.service [-] Unrecoverable error: please check log for details.
  2016-11-08 21:54:39.435 TRACE neutron.service Traceback (most recent call last):
  2016-11-08 21:54:39.435 TRACE neutron.service   File "/opt/stack/neutron/neutron/service.py", line 87, in serve_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service     service.start()
  2016-11-08 21:54:39.435 TRACE neutron.service   File "/opt/stack/neutron/neutron/service.py", line 63, in start
  2016-11-08 21:54:39.435 TRACE neutron.service     self.wsgi_app = _run_wsgi(self.app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File "/opt/stack/neutron/neutron/service.py", line 289, in _run_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service     app = config.load_paste_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File "/opt/stack/neutron/neutron/common/config.py", line 125, in load_paste_app
  2016-11-08 21:54:39.435 TRACE neutron.service     app = loader.load_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File "/usr/local/lib/python2.7/dist-packages/oslo_service/wsgi.py", line 353, in load_app
  2016-11-08 21:54:39.435 TRACE neutron.service     return deploy.loadapp("config:%s" % self.config_path,
name=name) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp 2016-11-08 21:54:39.435 TRACE neutron.service return loadobj(APP, uri, name=name, **kw) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in loadobj 2016-11-08 21:54:39.435 TRACE neutron.service return context.create() 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create 2016-11-08 21:54:39.435 TRACE neutron.service return self.object_type.invoke(self) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke 2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call 2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 31, in urlmap_factory 2016-11-08 21:54:39.435 TRACE neutron.service app = loader.get_app(app_name, global_conf=global_conf) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app 2016-11-08 21:54:39.435 TRACE neutron.service name=name, global_conf=global_conf).create() 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create 2016-11-08 21:54:39.435 TRACE neutron.service return self.object_type.invoke(self) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke 2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf) 2016-11-08 
21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call 2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw) 2016-11-08 21:54:39.435 TRACE neutron.service File "/opt/stack/neutron/neutron/auth.py", line 71, in pipeline_factory 2016-11-08 21:54:39.435 TRACE neutron.service app = loader.get_app(pipeline[-1]) 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app 2016-11-08 21:54:39.435 TRACE neutron.service name=name, global_conf=global_conf).create() 2016-11-08 21:54:39.435 TRACE neutron.service File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
[Yahoo-eng-team] [Bug 1279611] Re: urlparse is incompatible for python 3
** No longer affects: openstack-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279611

Title:
  urlparse is incompatible for python 3

Status in Astara:
  Fix Committed
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in gce-api:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-doc-tools:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-cinderclient:
  Fix Committed
Status in python-neutronclient:
  Fix Released
Status in RACK:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Solar:
  Invalid
Status in storyboard:
  Fix Committed
Status in surveil:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in swift-bench:
  Fix Committed
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in vmware-nsx:
  Fix Committed
Status in zaqar:
  Fix Released
Status in Zuul:
  Fix Committed

Bug description:
  import urlparse

  should be changed to:

  import six.moves.urllib.parse as urlparse

  for Python 3 compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1279611/+subscriptions
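Put together, the six.moves import the bug proposes looks like the sketch below. The try/except fallback to the Python 3 stdlib name is an addition here so the snippet also runs where six is not installed; the bug's actual fix is the six.moves line alone:

```python
# Python 2/3-compatible URL parsing via six.moves, with a stdlib
# fallback (the fallback is this sketch's addition, not the bug's fix).
try:
    import six.moves.urllib.parse as urlparse
except ImportError:
    import urllib.parse as urlparse  # Python 3 stdlib location

url = 'https://bugs.launchpad.net/bugs/1279611?field=status'
parts = urlparse.urlparse(url)
print(parts.netloc)                      # -> bugs.launchpad.net
print(urlparse.parse_qs(parts.query))    # -> {'field': ['status']}
```

On Python 2, six.moves maps this to the old top-level urlparse module; on Python 3 it maps to urllib.parse, which is why the single import line fixes the incompatibility.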
[Yahoo-eng-team] [Bug 1612959] Re: neutron DB sync fails: ImportError: No module named tests
** Changed in: openstack-ansible
       Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612959

Title:
  neutron DB sync fails: ImportError: No module named tests

Status in neutron:
  Fix Released
Status in openstack-ansible:
  Fix Released

Bug description:
  neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head

  Traceback (most recent call last):
    File "/usr/bin/neutron-db-manage", line 10, in <module>
      sys.exit(main())
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 686, in main
      return_val |= bool(CONF.command.func(config, CONF.command.name))
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 205, in do_upgrade
      run_sanity_checks(config, revision)
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 670, in run_sanity_checks
      script_dir.run_env()
    File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
      util.load_python_file(self.dir, 'env.py')
    File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file
      module = load_module_py(module_id, path)
    File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
      mod = imp.load_source(module_id, path, fp)
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py", line 23, in <module>
      from neutron.db.migration.models import head  # noqa
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/models/head.py", line 66, in <module>
      from neutron.tests import tools
  ImportError: No module named tests

  The issue seems to be that in commit we started using code from
  neutron.tests outside of the testing code. Specifically commit
  7c0f189309789ebcbd5c20c5a86835576ffb5db3 now causes it to get used
  during DB sync.
Given that some distribution packages don't package up the 'tests' code tree I think we shouldn't be using this code. See also: grep -lir neutron.tests * | grep -v tests cmd/sanity/checks.py db/migration/models/head.py hacking/checks.py To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1612959/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
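The grep shown in the report can be reproduced programmatically. A minimal sketch (the file paths and contents below are hypothetical stand-ins for a real checkout) that flags production modules importing from the `neutron.tests` package:

```python
import re

# Matches "from neutron.tests import ..." or "import neutron.tests..."
TEST_IMPORT = re.compile(r"^\s*(from|import)\s+neutron\.tests\b", re.MULTILINE)

def files_importing_tests(sources):
    """Return paths whose source text imports from neutron.tests.

    sources: mapping of path -> file contents, standing in for walking
    a real source tree the way `grep -lir neutron.tests` does above.
    """
    return sorted(path for path, text in sources.items()
                  if TEST_IMPORT.search(text))
```

Running this over a tree before packaging would catch regressions like the head.py import before they reach a DB sync on a system without the tests tree installed.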
[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population
** No longer affects: openstack-ansible
** No longer affects: openstack-ansible/trunk
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1523031

Title: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population
Status in neutron: New

Bug description:
Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor table (ip neigh show) on the compute node lacks an entry for the router IP address. For example, using a router with 172.16.1.1 and an instance with 172.16.1.4:

On the node with the L3 agent containing the router:

# ip neigh show
169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

# ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
...

On the node with the instance:

# ip neigh show
172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

172.16.1.2 and 172.16.1.3 belong to DHCP agents. I can access the instance from within both DHCP agent namespaces.

On the node with the instance, I manually add a neighbor entry for the router:

# ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466 nud permanent

On the node with the L3 agent containing the router:

# ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions
-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
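The manual workaround in the report can be scripted. A minimal sketch, assuming the router's IP, MAC, and VXLAN device are already known (the helper name and the use of subprocess are illustrative, not part of any Neutron tooling):

```python
import subprocess

def neigh_replace(ip, lladdr, dev, run=subprocess.check_call):
    """Run the same `ip neigh replace` command used manually above.

    `run` is injectable so the command can be inspected without
    touching the kernel neighbor table.
    """
    cmd = ["ip", "neigh", "replace", ip, "lladdr", lladdr,
           "dev", dev, "nud", "permanent"]
    run(cmd)
    return cmd
```

For the example in the report this would issue `ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466 nud permanent` on the compute node.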
[Yahoo-eng-team] [Bug 1605742] Re: Paramiko 2.0 is incompatible with Mitaka
** Changed in: openstack-ansible Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1605742 Title: Paramiko 2.0 is incompatible with Mitaka Status in OpenStack Compute (nova): Invalid Status in openstack-ansible: Fix Released Bug description: Unexpected API Error. TypeError. Code: 500. os-keypairs v2.1 nova (stable/mitaka , 98b38df57bfed3802ce60ee52e4450871fccdbfa) Tempest tests (for example TestMinimumBasicScenario:test_minimum_basic_scenario) are failed on gate job for project openstack-ansible with such error (please find full logs [1]) : - 2016-07-22 18:46:07.399604 | 2016-07-22 18:46:07.399618 | Captured pythonlogging: 2016-07-22 18:46:07.399632 | ~~~ 2016-07-22 18:46:07.399733 | 2016-07-22 18:45:47,861 2312 DEBUG [tempest.scenario.manager] paths: img: /opt/images/cirros-0.3.4-x86_64-disk.img, container_fomat: bare, disk_format: qcow2, properties: None, ami: /opt/images/cirros-0.3.4-x86_64-blank.img, ari: /opt/images/cirros-0.3.4-x86_64-initrd, aki: /opt/images/cirros-0.3.4-x86_64-vmlinuz 2016-07-22 18:46:07.399799 | 2016-07-22 18:45:48,513 2312 INFO [tempest.lib.common.rest_client] Request (TestMinimumBasicScenario:test_minimum_basic_scenario): 201 POST http://172.29.236.100:9292/v1/images 0.651s 2016-07-22 18:46:07.399889 | 2016-07-22 18:45:48,513 2312 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'x-image-meta-name': 'tempest-scenario-img--306818818', 'x-image-meta-container_format': 'bare', 'X-Auth-Token': '', 'x-image-meta-disk_format': 'qcow2', 'x-image-meta-is_public': 'False'} 2016-07-22 18:46:07.399907 | Body: None 2016-07-22 18:46:07.400027 | Response - Headers: {'status': '201', 'content-length': '481', 'content-location': 'http://172.29.236.100:9292/v1/images', 'connection': 'close', 'location': 'http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe', 'date': 
'Fri, 22 Jul 2016 18:45:48 GMT', 'content-type': 'application/json', 'x-openstack-request-id': 'req-6b3c6218-b3e6-4884-bb3c-b88c70733d0c'} 2016-07-22 18:46:07.400183 | Body: {"image": {"status": "queued", "deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": "2016-07-22T18:45:48.00", "owner": "1fbbcc542db344f394b4f1565a7e48fd", "min_disk": 0, "is_public": false, "deleted_at": null, "id": "5c390277-ec8d-4d82-b8d8-b8978473ecbe", "size": 0, "virtual_size": null, "name": "tempest-scenario-img--306818818", "checksum": null, "created_at": "2016-07-22T18:45:48.00", "disk_format": "qcow2", "properties": {}, "protected": false}} 2016-07-22 18:46:07.400241 | 2016-07-22 18:45:48,517 2312 INFO [tempest.common.glance_http] Request: PUT http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe 2016-07-22 18:46:07.400359 | 2016-07-22 18:45:48,517 2312 INFO [tempest.common.glance_http] Request Headers: {'Transfer-Encoding': 'chunked', 'User-Agent': 'tempest', 'Content-Type': 'application/octet-stream', 'X-Auth-Token': 'gABXkmnbJaM7C2EMxfEELQEWlU27v4pCt_9tF_XGlYrgEu-eXvDcEclzZc2OyFnVy79Dfz_pH2gGvKveSTihW-hzV6ucHyF1JrdqwOYr6Z7ZoUe_0BQ4gOdxKZoqzSaqQKfdfrZnojq9OE9Dy11frFI59qqkk0303j3fWlFIUeV6NtrzX-s'} 2016-07-22 18:46:07.400403 | 2016-07-22 18:45:48,517 2312 DEBUG [tempest.common.glance_http] Actual Path: /v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe 2016-07-22 18:46:07.400440 | 2016-07-22 18:45:50,721 2312 INFO [tempest.common.glance_http] Response Status: 200 2016-07-22 18:46:07.400555 | 2016-07-22 18:45:50,722 2312 INFO [tempest.common.glance_http] Response Headers: [('date', 'Fri, 22 Jul 2016 18:45:50 GMT'), ('content-length', '518'), ('etag', 'ee1eca47dc88f4879d8a229cc70a07c6'), ('content-type', 'application/json'), ('x-openstack-request-id', 'req-2e385c60-1755-4221-8325-caa98da1f760')] 2016-07-22 18:46:07.400597 | 2016-07-22 18:45:50,723 2312 DEBUG [tempest.scenario.manager] image:5c390277-ec8d-4d82-b8d8-b8978473ecbe 2016-07-22 
18:46:07.400669 | 2016-07-22 18:45:52,416 2312 INFO [tempest.lib.common.rest_client] Request (TestMinimumBasicScenario:test_minimum_basic_scenario): 500 POST http://172.29.236.100:8774/v2.1/1fbbcc542db344f394b4f1565a7e48fd/os-keypairs 1.689s 2016-07-22 18:46:07.400778 | 2016-07-22 18:45:52,416 2312 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''} 2016-07-22 18:46:07.400813 | Body: {"keypair": {"name": "tempest-TestMinimumBasicScenario-1803650811"}} 2016-07-22 18:46:07.400940 | Response - Headers: {'status': '500', 'content-length': '193', 'content-location':
[Yahoo-eng-team] [Bug 1613299] Re: Unknown column 'r.project_id' in FWaaS migrations
** Also affects: openstack-ansible Importance: Undecided Status: New
** Changed in: openstack-ansible Milestone: None => newton-3
** Changed in: openstack-ansible Importance: Undecided => High
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1613299

Title: Unknown column 'r.project_id' in FWaaS migrations
Status in neutron: New
Status in openstack-ansible: New

Bug description:
Running the FWaaS router insertion migration fails with:
http://logs.openstack.org/01/354101/4/gate/gate-openstack-ansible-os_nova-ansible-func-ubuntu-trusty/4b14021/console.html#_2016-08-15_13_44_02_455515

Specific issue: "oslo_db.exception.DBError: (pymysql.err.InternalError) (1054, u"Unknown column 'r.project_id' in 'where clause'") [SQL: u'insert into firewall_router_associations select f.id as fw_id, r.id as router_id from firewalls f, routers r where f.tenant_id=r.project_id']"

The issue occurs when installing Neutron master from source using OpenStack-Ansible on Ubuntu 14.04.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1613299/+subscriptions
-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
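The failing INSERT assumes the routers table has already gone through the tenant_id-to-project_id rename. A defensive sketch (a hypothetical helper, not the actual FWaaS migration code) would pick whichever column the schema actually has before building the join:

```python
def pick_owner_column(router_columns):
    """Choose the routers column to join firewalls.tenant_id against.

    router_columns: iterable of column names from the routers table,
    e.g. gathered via SQLAlchemy's schema inspector. Schemas migrated
    past the tenant_id -> project_id rename have 'project_id'; older
    ones, like the one in the gate failure above, still have 'tenant_id'.
    """
    cols = set(router_columns)
    if "project_id" in cols:
        return "r.project_id"
    if "tenant_id" in cols:
        return "r.tenant_id"
    raise LookupError("routers table has neither project_id nor tenant_id")
```

The chosen name would then be interpolated into the `where f.tenant_id=...` clause instead of hard-coding `r.project_id`.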
[Yahoo-eng-team] [Bug 1433172] Re: L3 HA routers master state flapping between nodes after router updates or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)
** Also affects: openstack-ansible Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1433172 Title: L3 HA routers master state flapping between nodes after router updates or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6) Status in neutron: Triaged Status in openstack-ansible: New Bug description: keepalived 1.2.14 introduced a regression when running it in no-preempt mode. More details here in a thread I started on the keepalived-devel list: http://sourceforge.net/p/keepalived/mailman/message/33604497/ A fix was backported to 1.2.15-6, and is present in 1.2.16. Current status (Updated on the 30th of April, 2015): Fedora 20, 21 and 22 have 1.2.16. CentOS and RHEL are on 1.2.13 Ubuntu is using 1.2.10 or older. Debian is using 1.2.13. In summary, as long as you're not using 1.2.14 or 1.2.15 (Excluding 1.2.15-6), you're OK, which should be the case if you're using the latest keepalived packaged for your distro. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1433172/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
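The affected-version rule above (1.2.14 and 1.2.15 are broken, except the 1.2.15-6 and later package revisions; 1.2.16 carries the fix) can be encoded as a quick check. A sketch assuming Debian-style `<version>-<revision>` strings:

```python
def keepalived_affected(version):
    """True if this keepalived version has the no-preempt regression."""
    base, _, revision = version.partition("-")
    parts = tuple(int(p) for p in base.split("."))
    if parts == (1, 2, 14):
        return True
    if parts == (1, 2, 15):
        # The fix was backported starting with the 1.2.15-6 revision.
        return not revision or int(revision) < 6
    return False
```

Per the distro summary in the report, CentOS/RHEL (1.2.13), Ubuntu (1.2.10 or older), and Debian (1.2.13) all pass this check, while a plain 1.2.15 build does not.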
[Yahoo-eng-team] [Bug 1497272] Re: L3 HA: Unstable rescheduling time for keepalived v1.2.7
** Also affects: openstack-ansible Importance: Undecided Status: New
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1497272

Title: L3 HA: Unstable rescheduling time for keepalived v1.2.7
Status in neutron: Triaged
Status in openstack-ansible: New
Status in openstack-manuals: New

Bug description:
I have tested L3 HA on an environment with 3 controllers and 1 compute node (Kilo) with this simple scenario:
1) ping a VM by floating IP
2) disable the master l3-agent (the one whose ha_state is active)
3) wait for pings to continue and for another agent to become active
4) count the packets that were lost

My results are the following:
1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
2) When max_l3_agents_per_router=3 or 0 (meaning the router is scheduled on every agent), 10 to 70 packets were lost.

I should mention that in both cases there was only one HA router. It is expected that fewer packets would be lost with max_l3_agents_per_router=3 (or 0).

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions
-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
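With one ICMP echo per second (ping's default interval), the packet counts in the report translate directly into an approximate failover window. A trivial sketch of that conversion:

```python
def estimated_downtime(lost_packets, interval=1.0):
    """Approximate failover window in seconds.

    With one ICMP echo every `interval` seconds, the outage is roughly
    the number of unanswered pings times the interval; 3-4 lost packets
    means a few seconds of downtime, 70 means over a minute.
    """
    return lost_packets * interval
```

This is only an estimate: the true window is bounded by (lost_packets - 1) and (lost_packets + 1) intervals, since the outage need not align with ping transmissions.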
[Yahoo-eng-team] [Bug 1509312] Re: unable to use tenant network after kilo to liberty update due to port security extension
** Changed in: openstack-ansible/trunk Status: Confirmed => Won't Fix
** Changed in: openstack-ansible/trunk Status: Won't Fix => Fix Released
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1509312

Title: unable to use tenant network after kilo to liberty update due to port security extension
Status in neutron: Fix Released
Status in openstack-ansible: Confirmed
Status in openstack-ansible liberty series: Fix Released
Status in openstack-ansible trunk series: Fix Released

Bug description:
After updating from Kilo to Liberty, all networks created in the Kilo release are unusable in Liberty. If I try to spawn a new instance with a port on a network created in Kilo, I get the following error in nova-compute.log:

BadRequest: Port does not have port security binding.

I guess this has to do with the new port_security extension in the ml2 plugin. Using neutron DVR on Ubuntu 14.04.3! This is my first bug report, so sorry in advance for any mistakes.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1509312/+subscriptions
-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
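One way to gauge the scope of the problem is to list the networks that predate the extension and therefore lack a port-security binding row. A minimal sketch (the helper and its inputs are illustrative, not Neutron's actual repair code, which backfills the binding table):

```python
def networks_missing_binding(network_ids, bound_network_ids):
    """Return network IDs with no port-security binding.

    Networks created before the port_security extension was enabled
    have no row in the binding table; creating a port on them fails
    with "BadRequest: Port does not have port security binding".
    """
    return sorted(set(network_ids) - set(bound_network_ids))
```

In practice the two inputs would come from the networks table and the port-security binding table of the Neutron database after the Kilo-to-Liberty upgrade.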
[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population
** Changed in: openstack-ansible/trunk Milestone: 13.0.0 => newton-1 ** No longer affects: openstack-ansible/liberty -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1523031 Title: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population Status in neutron: New Status in openstack-ansible: Confirmed Status in openstack-ansible trunk series: Confirmed Bug description: Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor table (ip neigh show) on the compute node lacks an entry for the router IP address. For example, using a router with 172.16.1.1 and instance with 172.16.1.4: On the node with the L3 agent containing the router: # ip neigh show 169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT 10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE 10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE 172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT 10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE 10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE 10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE 10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY 172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4 PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data. ... On the node with the instance: # ip neigh show 172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT 10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY 172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT 10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE 10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE 10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE 10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE 10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE 10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE 172.16.1.2 and 172.16.1.3 belong to DHCP agents. 
I can access the instance from within both DHCP agent namespaces. On the node with the instance, I manually add a neighbor entry for the router: # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466 nud permanent On the node with the L3 agent containing the router: # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4 64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms 64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms 64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller
As this patch has been included in Mitaka, I'm marking it as resolved for the OpenStack-Ansible 13.0.0 release.
** Changed in: openstack-ansible/trunk Status: Confirmed => Fix Released
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1443421

Title: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller
Status in neutron: Fix Released
Status in openstack-ansible: Fix Released
Status in openstack-ansible trunk series: Fix Released

Bug description:
Using multiple api_workers, for the "nova live-migration" command:
a) tunnel flows and tunnel ports are always removed from the old host
b) other hosts sometimes do not get the notification about the port delete from the old host, so on those hosts the tunnel ports and flood flows (except the unicast flow for the port) for the old host still remain.

The root cause and fix are explained in comments 12 and 13.

According to the bug reporter, this bug can also be reproduced as follows.
Setup: Neutron server HA (3 nodes). Hypervisor: ESX with OVSvApp. L2 pop is on for the network node and off for OVSvApp.
Condition: enable L2 pop on the OVS agent, api_workers=10 in the controller.
On the network node, the VXLAN tunnel is created with ESX2, and the tunnel with ESX1 is not removed after migrating the VM from ESX1 to ESX2. Attaching the logs of servers and agent logs.

stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
662d03fb-c784-498e-927c-410aa6788455
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-6447007a"
            Interface "vxlan-6447007a"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"}
                (This should have been deleted after MIGRATION.)
        Port "vxlan-64470082"
            Interface "vxlan-64470082"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-6447002a"
            Interface "vxlan-6447002a"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap9515e5b3-ec"
            tag: 11
            Interface "tap9515e5b3-ec"
                type: internal
    ovs_version: "2.0.2"

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions
-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
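Stale tunnels like vxlan-6447007a above can be spotted by comparing each port's remote_ip against the set of hosts that still have workloads on the network. A sketch operating on data already parsed out of `ovs-vsctl show` (the parsing step itself is assumed, and the helper is illustrative rather than part of the L2 pop fix):

```python
def stale_vxlan_ports(vxlan_ports, live_remote_ips):
    """Return tunnel ports whose remote endpoint no longer hosts the VM.

    vxlan_ports: mapping of port name -> remote_ip (from ovs-vsctl show).
    live_remote_ips: set of remote IPs that should still have tunnels,
    i.e. hosts with at least one port on the network after migration.
    """
    return sorted(name for name, ip in vxlan_ports.items()
                  if ip not in live_remote_ips)
```

For the output above, after the VM moved from ESX1 (100.71.0.122) to ESX2, only the tunnel to 100.71.0.122 should be flagged.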
[Yahoo-eng-team] [Bug 1560993] Re: keystone_service returns ignore_other_regions error in liberty
This doesn't appear to relate to any code used in OpenStack-Ansible as a project. It does appear to be Ansible of some sort, and perhaps relates to the Ansible modules. If that is so then the bug should be raised against the Ansible project I guess. ** Changed in: openstack-ansible Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1560993 Title: keystone_service returns ignore_other_regions error in liberty Status in OpenStack Identity (keystone): Invalid Status in openstack-ansible: Invalid Bug description: I am trying to port swiftacular from Havana to liberty. The following line to create the service endpoint using keystone_service returns an error : - name: create keystone identity point keystone_service: insecure=yes name=keystone type=identity description="Keystone Identity Service" publicurl="https://{{ keystone_server }}:5000/v2.0" internalurl="https://{{ keystone_server }}:5000/v2.0" adminurl="https://{{ keystone_server }}:35357/v2.0" region={{ keystone_region }} token={{ keystone_admin_token }} endpoint="https://127.0.0.1:35357/v2.0; returns the following error TASK [authentication : create keystone identity point] * fatal: [swift-keystone-01]: FAILED! => {"changed": false, "failed": true, "msg": "value of ignore_other_regions must be one of: yes,on,1,true,1,True,no,off,0,false,0,False, got: False"} to retry, use: --limit @site.retry The same task worked without a hitch with havana. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1560993/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
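The error message suggests the module compared only the string spellings of a boolean, so a genuine Python False (e.g. a parameter default) fell through to the "must be one of" failure. A tolerant coercion, sketched here for illustration (this is not the actual Ansible module code):

```python
TRUTHY = {"yes", "on", "1", "true"}
FALSY = {"no", "off", "0", "false"}

def coerce_bool(value):
    """Accept real booleans as well as the usual string spellings.

    The failing keystone_service module matched only string literals,
    so a Python False raised the error quoted in the bug report.
    """
    if isinstance(value, bool):
        return value
    text = str(value).strip().lower()
    if text in TRUTHY:
        return True
    if text in FALSY:
        return False
    raise ValueError("not a boolean: %r" % (value,))
```

Checking `isinstance(value, bool)` before string matching is what the module in the traceback appears to be missing.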
[Yahoo-eng-team] [Bug 1561947] [NEW] Glance Store fails to authenticate to Swift with Keystone v3 API
Public bug reported: In the Mitaka current RC for Glance, using HEAD of "stable/mitaka" as of 23.03.2016 (SHA ab0562550c8c568dcdc7da68afdcac5f58d20e69), glance_store fails to authenticate via the Keystone v3 API to Swift. Configuration and the error are available here: https://gist.github.com/odyssey4me/79a1e8d7dea35ddf818c It appears that this may be a regression (this worked just fine in Liberty) introduced by https://github.com/openstack/glance_store/commit/1b782cee8552ec02f7303ee6f9ba9d1f2c180d07 ** Affects: glance Importance: Undecided Status: New ** Affects: openstack-ansible Importance: Critical Status: Confirmed ** Tags: mitaka-rc-potential ** Also affects: openstack-ansible Importance: Undecided Status: New ** Changed in: openstack-ansible Milestone: None => 13.0.0 ** Changed in: openstack-ansible Importance: Undecided => Critical ** Changed in: openstack-ansible Status: New => Confirmed ** Tags added: mitaka-rc-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1561947 Title: Glance Store fails to authenticate to Swift with Keystone v3 API Status in Glance: New Status in openstack-ansible: Confirmed Bug description: In the Mitaka current RC for Glance, using HEAD of "stable/mitaka" as of 23.03.2016 (SHA ab0562550c8c568dcdc7da68afdcac5f58d20e69), glance_store fails to authenticate via the Keystone v3 API to Swift. 
Configuration and the error are available here: https://gist.github.com/odyssey4me/79a1e8d7dea35ddf818c It appears that this may be a regression (this worked just fine in Liberty) introduced by https://github.com/openstack/glance_store/commit/1b782cee8552ec02f7303ee6f9ba9d1f2c180d07 To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1561947/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1509312] Re: unable to use tenant network after kilo to liberty update due to port security extension
** Changed in: neutron Status: Expired => Confirmed ** Also affects: openstack-ansible Importance: Undecided Status: New ** Changed in: openstack-ansible Status: New => Confirmed ** Changed in: openstack-ansible Importance: Undecided => High ** Changed in: openstack-ansible Assignee: (unassigned) => Nolan Brubaker (nolan-brubaker) ** Also affects: openstack-ansible/liberty Importance: Undecided Status: New ** Also affects: openstack-ansible/trunk Importance: High Assignee: Nolan Brubaker (nolan-brubaker) Status: Confirmed ** Changed in: openstack-ansible/liberty Importance: Undecided => High ** Changed in: openstack-ansible/liberty Status: New => Confirmed ** Changed in: openstack-ansible/liberty Assignee: (unassigned) => Nolan Brubaker (nolan-brubaker) ** Changed in: openstack-ansible/liberty Milestone: None => 12.1.0 ** Changed in: openstack-ansible/trunk Milestone: None => 13.0.0 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1509312 Title: unable to use tenant network after kilo to liberty update due to port security extension Status in neutron: Confirmed Status in openstack-ansible: Confirmed Status in openstack-ansible liberty series: Confirmed Status in openstack-ansible trunk series: Confirmed Bug description: After updating to liberty from kilo all networks created in kilo release are useless in liberty. If i try to spawn a new isntance with a port on a network created in kilo i get the following error in nova-compute.log : BadRequest: Port does not have port security binding. I guess this has to do with the new extension in ml2 plugin port_security. Using neutron DVR on Ubuntu 14.04.3! This is my first bug report so sry in advance for any mistakes. 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1509312/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1509312] Re: unable to use tenant network after kilo to liberty update due to port security extension
** Changed in: openstack-ansible/liberty Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1509312 Title: unable to use tenant network after kilo to liberty update due to port security extension Status in neutron: In Progress Status in openstack-ansible: Confirmed Status in openstack-ansible liberty series: Fix Released Status in openstack-ansible trunk series: Confirmed Bug description: After updating to liberty from kilo all networks created in kilo release are useless in liberty. If i try to spawn a new isntance with a port on a network created in kilo i get the following error in nova-compute.log : BadRequest: Port does not have port security binding. I guess this has to do with the new extension in ml2 plugin port_security. Using neutron DVR on Ubuntu 14.04.3! This is my first bug report so sry in advance for any mistakes. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1509312/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller
** Changed in: openstack-ansible/trunk Milestone: mitaka-2 => mitaka-3 ** No longer affects: openstack-ansible/liberty -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1443421 Title: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller Status in neutron: In Progress Status in openstack-ansible: Confirmed Status in openstack-ansible trunk series: Confirmed Bug description: Using multiple api_workers, for "nova live-migration" command, a) tunnel flows and tunnel ports are always removed from old host b) and other hosts(sometimes) not getting notification about port delete from old host. So in other hosts, tunnel ports and flood flows(except unicast flow about port) for old host still remain. Root cause and fix is explained in comments 12 and 13. According to bug reporter, this bug can also be reproducible like below. Setup : Neutron server HA (3 nodes). Hypervisor – ESX with OVsvapp l2 POP is on Network node and off on Ovsvapp. Condition: Make L2 pop on OVs agent, api workers =10 in the controller. On network node,the VXLAN tunnel is created with ESX2 and the Tunnel with ESX1 is not removed after migrating VM from ESX1 to ESX2. Attaching the logs of servers and agent logs. stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show 662d03fb-c784-498e-927c-410aa6788455 Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth2" Interface "eth2" Port br-ex Interface br-ex type: internal Bridge br-tun Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port "vxlan-6447007a" Interface "vxlan-6447007a" type: vxlan options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} This should have been deleted after MIGRATION. 
Port "vxlan-64470082" Interface "vxlan-64470082" type: vxlan options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"} Port br-tun Interface br-tun type: internal Port "vxlan-6447002a" Interface "vxlan-6447002a" type: vxlan options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"} Bridge "br-eth1" Port "br-eth1" Interface "br-eth1" type: internal Port "phy-br-eth1" Interface "phy-br-eth1" type: patch options: {peer="int-br-eth1"} Bridge br-int fail_mode: secure Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port "int-br-eth1" Interface "int-br-eth1" type: patch options: {peer="phy-br-eth1"} Port br-int Interface br-int type: internal Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port "tap9515e5b3-ec" tag: 11 Interface "tap9515e5b3-ec" type: internal ovs_version: "2.0.2" To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521793] Re: l3ha with L2pop disabled breaks neutron
** Changed in: openstack-ansible/liberty Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1521793 Title: l3ha with L2pop disabled breaks neutron Status in neutron: New Status in openstack-ansible: Fix Released Status in openstack-ansible liberty series: Fix Released Status in openstack-ansible trunk series: Fix Released Bug description: when using l3ha the system will fail to build a vm if L2 population is disabled under most circumstances. To resolve this issue the variable `neutron_l2_population` should be set to "true" by default. The current train of thought was that we'd use L3HA by default however due to current differences in the neutron linux bridge agent it seems that is impossible and will require additional upstream work within neutron. In the near term we should re-enable l2 pop by default and effectively disable the built in L3HA. This issue was reported in the channel by @Ville Vuorinen (IRC: kysse), see http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2015-12-01.log.html from 18:47 onwards. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1521793/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population
** Changed in: openstack-ansible
   Status: New => Confirmed

** Changed in: openstack-ansible
   Importance: Undecided => Medium

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: Medium
   Status: Confirmed

** Changed in: openstack-ansible/liberty
   Status: New => Confirmed

** Changed in: openstack-ansible/liberty
   Importance: Undecided => Medium

** Changed in: openstack-ansible/liberty
   Milestone: None => 12.1.0

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523031

Title:
  Neighbor table entry for router missing with Linux bridge + L3HA + L2
  population

Status in neutron: New
Status in openstack-ansible: Confirmed
Status in openstack-ansible liberty series: Confirmed
Status in openstack-ansible trunk series: Confirmed

Bug description:
  Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor
  table (ip neigh show) on the compute node lacks an entry for the
  router IP address. For example, using a router with 172.16.1.1 and
  instance with 172.16.1.4:

  On the node with the L3 agent containing the router:

  # ip neigh show
  169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
  10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
  10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
  ...
  On the node with the instance:

  # ip neigh show
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
  10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
  10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

  172.16.1.2 and 172.16.1.3 belong to DHCP agents. I can access the
  instance from within both DHCP agent namespaces.

  On the node with the instance, I manually add a neighbor entry for the
  router:

  # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466 nud permanent

  On the node with the L3 agent containing the router:

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
  64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
  64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
  64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions
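[Editor's note] The diagnosis above boils down to scanning `ip neigh show` output for a PERMANENT entry matching the router IP. A small sketch of that check; `find_neigh_entry` is a hypothetical helper, not part of neutron, and the field layout is assumed from the output quoted above:

```python
def find_neigh_entry(neigh_output, ip):
    """Return the parsed `ip neigh show` entry for `ip`, or None if absent.

    Each line looks like:
        172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
    """
    for line in neigh_output.splitlines():
        fields = line.split()
        if fields and fields[0] == ip:
            entry = {"ip": fields[0]}
            rest = fields[1:]
            # The trailing state keyword (PERMANENT, STALE, ...) leaves an
            # odd number of remaining fields; the rest are key/value pairs.
            if len(rest) % 2 == 1:
                entry["state"] = rest.pop()
            entry.update(zip(rest[::2], rest[1::2]))
            return entry
    return None
```

On the affected compute node, `find_neigh_entry(output, "172.16.1.1")` would return None, which is exactly the missing-entry condition the manual `ip neigh replace` workaround papers over.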
[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller
** Changed in: openstack-ansible Status: New => Confirmed ** Changed in: openstack-ansible Importance: Undecided => High ** Changed in: openstack-ansible Milestone: None => mitaka-2 ** Also affects: openstack-ansible/liberty Importance: Undecided Status: New ** Also affects: openstack-ansible/trunk Importance: High Status: Confirmed ** Changed in: openstack-ansible/liberty Importance: Undecided => High ** Changed in: openstack-ansible/liberty Status: New => Confirmed ** Changed in: openstack-ansible/liberty Milestone: None => 12.1.0 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1443421 Title: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller Status in neutron: In Progress Status in openstack-ansible: Confirmed Status in openstack-ansible liberty series: Confirmed Status in openstack-ansible trunk series: Confirmed Bug description: Setup : Neutron server HA (3 nodes). Hypervisor – ESX with OVsvapp l2 POP is on Network node and off on Ovsvapp. Condition: Make L2 pop on OVs agent, api workers =10 in the controller. On network node,the VXLAN tunnel is created with ESX2 and the Tunnel with ESX1 is not removed after migrating VM from ESX1 to ESX2. Attaching the logs of servers and agent logs. stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show 662d03fb-c784-498e-927c-410aa6788455 Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth2" Interface "eth2" Port br-ex Interface br-ex type: internal Bridge br-tun Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port "vxlan-6447007a" Interface "vxlan-6447007a" type: vxlan options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} This should have been deleted after MIGRATION. 
    Port "vxlan-64470082"
        Interface "vxlan-64470082"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
    Port br-tun
        Interface br-tun
            type: internal
    Port "vxlan-6447002a"
        Interface "vxlan-6447002a"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
Bridge "br-eth1"
    Port "br-eth1"
        Interface "br-eth1"
            type: internal
    Port "phy-br-eth1"
        Interface "phy-br-eth1"
            type: patch
            options: {peer="int-br-eth1"}
Bridge br-int
    fail_mode: secure
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "int-br-eth1"
        Interface "int-br-eth1"
            type: patch
            options: {peer="phy-br-eth1"}
    Port br-int
        Interface br-int
            type: internal
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
    Port "tap9515e5b3-ec"
        tag: 11
        Interface "tap9515e5b3-ec"
            type: internal
ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions
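[Editor's note] The stale-tunnel symptom above can be spotted mechanically by extracting the `remote_ip` of every vxlan port from `ovs-vsctl show` output and comparing it against the peers that should still exist. A sketch with hypothetical helpers; in practice the expected peer list would come from the deployment inventory, which is an assumption not stated in the bug:

```python
import re


def vxlan_remote_ips(ovs_show_output):
    """Collect the remote_ip of every vxlan port in `ovs-vsctl show` output."""
    return set(re.findall(r'remote_ip="([\d.]+)"', ovs_show_output))


def stale_tunnels(ovs_show_output, expected_peers):
    """Tunnel endpoints present on the node but no longer expected.

    After the VM migrates off ESX1, its endpoint should drop out of
    `expected_peers`; any remote_ip still configured is a leftover tunnel.
    """
    return vxlan_remote_ips(ovs_show_output) - set(expected_peers)
```

Against the dump above, the tunnel to 100.71.0.122 (ESX1) would show up as stale once only 100.71.0.130 and 100.71.0.42 remain expected.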
[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
   Milestone: None => mitaka-2

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523031

Title:
  Neighbor table entry for router missing with Linux bridge + L3HA + L2
  population

Status in neutron: New
Status in openstack-ansible: New

Bug description:
  Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor
  table (ip neigh show) on the compute node lacks an entry for the
  router IP address. For example, using a router with 172.16.1.1 and
  instance with 172.16.1.4:

  On the node with the L3 agent containing the router:

  # ip neigh show
  169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
  10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
  10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
  ...

  On the node with the instance:

  # ip neigh show
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
  10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
  10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

  172.16.1.2 and 172.16.1.3 belong to DHCP agents.
  I can access the instance from within both DHCP agent namespaces.

  On the node with the instance, I manually add a neighbor entry for the
  router:

  # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466 nud permanent

  On the node with the L3 agent containing the router:

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
  64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
  64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
  64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions
[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller
** Also affects: openstack-ansible Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1443421 Title: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller Status in neutron: In Progress Status in openstack-ansible: New Bug description: Setup : Neutron server HA (3 nodes). Hypervisor – ESX with OVsvapp l2 POP is on Network node and off on Ovsvapp. Condition: Make L2 pop on OVs agent, api workers =10 in the controller. On network node,the VXLAN tunnel is created with ESX2 and the Tunnel with ESX1 is not removed after migrating VM from ESX1 to ESX2. Attaching the logs of servers and agent logs. stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show 662d03fb-c784-498e-927c-410aa6788455 Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth2" Interface "eth2" Port br-ex Interface br-ex type: internal Bridge br-tun Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port "vxlan-6447007a" Interface "vxlan-6447007a" type: vxlan options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} This should have been deleted after MIGRATION. 
    Port "vxlan-64470082"
        Interface "vxlan-64470082"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
    Port br-tun
        Interface br-tun
            type: internal
    Port "vxlan-6447002a"
        Interface "vxlan-6447002a"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
Bridge "br-eth1"
    Port "br-eth1"
        Interface "br-eth1"
            type: internal
    Port "phy-br-eth1"
        Interface "phy-br-eth1"
            type: patch
            options: {peer="int-br-eth1"}
Bridge br-int
    fail_mode: secure
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "int-br-eth1"
        Interface "int-br-eth1"
            type: patch
            options: {peer="phy-br-eth1"}
    Port br-int
        Interface br-int
            type: internal
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
    Port "tap9515e5b3-ec"
        tag: 11
        Interface "tap9515e5b3-ec"
            type: internal
ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions
[Yahoo-eng-team] [Bug 1521793] Re: Master/Liberty w/ L2pop disabled breaks neutron
** Also affects: neutron Importance: Undecided Status: New ** Summary changed: - Master/Liberty w/ L2pop disabled breaks neutron + l3ha with L2pop disabled breaks neutron ** Description changed: when using l3ha the system will fail to build a vm if L2 population is disabled under most circumstances. To resolve this issue the variable `neutron_l2_population` should be set to "true" by default. The current train of thought was that we'd use L3HA by default however due to current differences in the neutron linux bridge agent it seems that is impossible and will require additional upstream work within neutron. In the near term we should re-enable l2 pop by default and effectively disable the built in L3HA. - This issue was reported in the channel by @Ville Vuorinen (IRC: kysse) + This issue was reported in the channel by @Ville Vuorinen (IRC: kysse), + see http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2015-12-01.log.html from 18:47 onwards. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1521793 Title: l3ha with L2pop disabled breaks neutron Status in neutron: New Status in openstack-ansible: In Progress Status in openstack-ansible liberty series: Triaged Status in openstack-ansible trunk series: In Progress Bug description: when using l3ha the system will fail to build a vm if L2 population is disabled under most circumstances. To resolve this issue the variable `neutron_l2_population` should be set to "true" by default. The current train of thought was that we'd use L3HA by default however due to current differences in the neutron linux bridge agent it seems that is impossible and will require additional upstream work within neutron. In the near term we should re-enable l2 pop by default and effectively disable the built in L3HA. 
This issue was reported in the channel by @Ville Vuorinen (IRC: kysse), see http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2015-12-01.log.html from 18:47 onwards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521793/+subscriptions
[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade
** Changed in: openstack-ansible/liberty Status: Fix Committed => Fix Released ** Changed in: openstack-ansible/trunk Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1476770 Title: _translate_from_glance fails with "AttributeError: id" in grenade Status in Glance: Invalid Status in keystonemiddleware: Fix Released Status in openstack-ansible: Fix Released Status in openstack-ansible kilo series: Fix Released Status in openstack-ansible liberty series: Fix Released Status in openstack-ansible trunk series: Fix Released Status in OpenStack-Gate: Fix Committed Status in oslo.vmware: Fix Released Status in python-glanceclient: In Progress Bug description: http://logs.openstack.org/28/204128/2/check/gate-grenade- dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE 2015-07-21 17:05:37.447 ERROR nova.api.openstack [req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 tempest-ServersTestJSON-745803609] Caught error: id 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent call last): 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return req.get_response(self.application) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, catch_exc_info=False) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in call_application 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = application(self.environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 634, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._call_app(env, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 554, in _call_app 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._app(env, _fake_start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = self.app(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = self.call_func(req, *args, **self.kwargs) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self.func(req, *args, **kwargs) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, body, accept) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = self.dispatch(meth, request, action_args) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 911, in dispatch 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return method(req=request, **action_args)
[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0
** Changed in: openstack-ansible/liberty
   Status: Fix Committed => Fix Released

** Changed in: openstack-ansible/trunk
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1505326

Title:
  Unit tests failing with requests 2.8.0

Status in OpenStack Identity (keystone): Invalid
Status in openstack-ansible: Fix Released
Status in openstack-ansible kilo series: Fix Released
Status in openstack-ansible liberty series: Fix Released
Status in openstack-ansible trunk series: Fix Released

Bug description:
  When the tests are run, a bunch of them fail:

  pkg_resources.ContextualVersionConflict: (requests 2.8.0
  (/home/jenkins/workspace/gate-keystone-python27/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy']))

  global-requirements has requests!=2.8.0, but something must be pulling
  in that version of requests!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions
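[Editor's note] The version conflict above can be reproduced directly with pkg_resources, the same machinery that raises ContextualVersionConflict in the failing tests. A minimal sketch using the oslo.policy constraint quoted in the traceback:

```python
from pkg_resources import Requirement

# oslo.policy's constraint, as shown in the ContextualVersionConflict above.
req = Requirement.parse("requests!=2.8.0,>=2.5.2")

# requests 2.8.0 is explicitly excluded, so having it installed is exactly
# the conflict the unit tests trip over at import time.
print("2.8.0" in req)  # False
print("2.7.0" in req)  # True
```

This is why pinning alone in global-requirements is not enough: whatever pulls requests 2.8.0 into the tox virtualenv still violates the installed constraint.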
[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade
** Changed in: openstack-ansible/kilo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1476770 Title: _translate_from_glance fails with "AttributeError: id" in grenade Status in Glance: Invalid Status in keystonemiddleware: Fix Released Status in openstack-ansible: Fix Committed Status in openstack-ansible kilo series: Fix Released Status in openstack-ansible liberty series: Fix Committed Status in openstack-ansible trunk series: Fix Committed Status in OpenStack-Gate: Fix Committed Status in oslo.vmware: Fix Released Status in python-glanceclient: In Progress Bug description: http://logs.openstack.org/28/204128/2/check/gate-grenade- dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE 2015-07-21 17:05:37.447 ERROR nova.api.openstack [req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 tempest-ServersTestJSON-745803609] Caught error: id 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent call last): 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return req.get_response(self.application) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, catch_exc_info=False) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in call_application 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = application(self.environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE 
nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 634, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._call_app(env, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 554, in _call_app 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._app(env, _fake_start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = self.app(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = self.call_func(req, *args, **self.kwargs) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self.func(req, *args, **kwargs) 
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, body, accept) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = self.dispatch(meth, request, action_args) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 911, in dispatch 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return method(req=request, **action_args) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File
[Yahoo-eng-team] [Bug 1515485] Re: Heat CFN signals do not pass authorization
** Changed in: openstack-ansible/kilo
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1515485

Title:
  Heat CFN signals do not pass authorization

Status in OpenStack Identity (keystone): Invalid
Status in OpenStack Identity (keystone) kilo series: Fix Committed
Status in openstack-ansible: Invalid
Status in openstack-ansible kilo series: Fix Released
Status in openstack-ansible liberty series: Invalid
Status in openstack-ansible trunk series: Invalid

Bug description:
  Note that this bug applies to the Kilo release. Master does not appear
  to have this problem. I did not test liberty yet.

  Heat templates that rely on CFN signals time out because the API calls
  that execute these signals return 403 errors. Heat signals, on the
  other hand, do work.

  The problem was reported to me by Alex Cantu. I have verified it on
  his multinode lab and have also reproduced it on my own single-node
  system hosted on a public cloud server.

  I suspect liberty/master avoided the problem after Jesse and I
  reworked the Heat configuration to use Keystone v3 the last day before
  the L release.

  Example template, which can be executed in an AIO after running the
  tempest playbook:

  heat_template_version: 2013-05-23
  resources:
    wait_condition:
      type: AWS::CloudFormation::WaitCondition
      properties:
        Handle: { get_resource: wait_handle }
        Count: 1
        Timeout: 600
    wait_handle:
      type: AWS::CloudFormation::WaitConditionHandle
    my_instance:
      type: OS::Nova::Server
      properties:
        image: cirros
        flavor: m1.tiny
        networks:
          - network: "private"
        user_data_format: RAW
        user_data:
          str_replace:
            template: |
              #!/bin/sh
              echo "wc_notify"
              curl -H "Content-Type:" -X PUT wc_notify --data-binary '{"status": "SUCCESS"}'
            params:
              wc_notify: { get_resource: wait_handle }

  This template should finish very quickly, as it starts a cirros
  instance that just sends a signal back to heat.
But instead, it timeouts. The user data script dumps the signal URL to the console log, if you then try to send the signal manually you will get a 403. The original 403 can also be seen in the heat-api-cfn.log file. Here is the log snippet: 2015-11-12 05:13:34.491 1862 INFO heat.api.aws.ec2token [-] Checking AWS credentials.. 2015-11-12 05:13:34.492 1862 INFO heat.api.aws.ec2token [-] AWS credentials found, checking against keystone. 2015-11-12 05:13:34.493 1862 INFO heat.api.aws.ec2token [-] Authenticating with http://172.29.236.100:5000/v3/ec2tokens 2015-11-12 05:13:34.533 1862 INFO heat.api.aws.ec2token [-] AWS authentication failure. 2015-11-12 05:13:34.534 1862 INFO eventlet.wsgi.server [-] 10.0.3.181,172.29.236.100 - - [12/Nov/2015 05:13:34] "PUT /v1/waitcondition/arn%3Aopenstack%3Aheat%3A%3A683acadf4d04489f8e991b44014e6fc1%3Astacks%2Fwc1%2Faa4083b6-ce6c-411f-9df9-d059abacf40c%2Fresources%2Fwait_handle?Timestamp=2015-11-12T05%3A12%3A27Z=HmacSHA256=65657d1021e24e49ba4fb6f217ca4a22=2=aCG%2FO04MNLzSlf5gIBGw1hMcC7bQzB3pZXVKzXLLNSo%3D HTTP/1.1" 403 301 0.043961 For reference, the curl command to trigger the signal is: curl -H "Content-Type:" -X PUT "
[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade
With keystonemiddleware 1.5.3 tagged, this will be included automatically with the next tagged releases of OpenStack-Ansible. Verified in Kilo with a recent build result: http://logs.openstack.org/57/248557/2/gate/gate-openstack-ansible-dsvm-commit/de13bfd/console.html#_2015-11-26_15_30_45_573 ** Also affects: openstack-ansible/kilo Importance: Undecided Status: New ** Also affects: openstack-ansible/liberty Importance: Undecided Status: New ** Also affects: openstack-ansible/trunk Importance: High Assignee: Jesse Pretorius (jesse-pretorius) Status: In Progress ** Changed in: openstack-ansible/trunk Milestone: 12.1.0 => mitaka-1 ** Changed in: openstack-ansible/liberty Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible/kilo Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible/kilo Milestone: None => 11.2.6 ** Changed in: openstack-ansible/kilo Status: New => Fix Committed ** Changed in: openstack-ansible/kilo Importance: Undecided => High ** Changed in: openstack-ansible/liberty Importance: Undecided => High ** Changed in: openstack-ansible/liberty Milestone: None => 12.0.2 ** Changed in: openstack-ansible/liberty Status: New => Fix Committed ** Changed in: openstack-ansible/trunk Status: In Progress => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. 
https://bugs.launchpad.net/bugs/1476770 Title: _translate_from_glance fails with "AttributeError: id" in grenade Status in Glance: Invalid Status in keystonemiddleware: Fix Released Status in openstack-ansible: Fix Committed Status in openstack-ansible kilo series: Fix Committed Status in openstack-ansible liberty series: Fix Committed Status in openstack-ansible trunk series: Fix Committed Status in OpenStack-Gate: Fix Committed Status in oslo.vmware: Fix Released Status in python-glanceclient: In Progress Bug description: http://logs.openstack.org/28/204128/2/check/gate-grenade- dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE 2015-07-21 17:05:37.447 ERROR nova.api.openstack [req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 tempest-ServersTestJSON-745803609] Caught error: id 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent call last): 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return req.get_response(self.application) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, catch_exc_info=False) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in call_application 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = application(self.environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", 
line 634, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._call_app(env, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 554, in _call_app 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._app(env, _fake_start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = self.app(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib
[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0
** Also affects: openstack-ansible/liberty Importance: Undecided Status: New ** Also affects: openstack-ansible/kilo Importance: Undecided Status: New ** Also affects: openstack-ansible/trunk Importance: High Assignee: Jesse Pretorius (jesse-pretorius) Status: In Progress ** Changed in: openstack-ansible/trunk Milestone: 12.1.0 => mitaka-1 ** Changed in: openstack-ansible/liberty Milestone: None => 12.0.2 ** Changed in: openstack-ansible/kilo Milestone: None => 11.2.6 ** Changed in: openstack-ansible/liberty Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible/kilo Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible/liberty Importance: Undecided => High ** Changed in: openstack-ansible/kilo Importance: Undecided => High ** Changed in: openstack-ansible/liberty Status: New => In Progress ** Changed in: openstack-ansible/kilo Status: New => Fix Committed ** Changed in: openstack-ansible/trunk Status: In Progress => Fix Committed ** Changed in: openstack-ansible/liberty Status: In Progress => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1505326 Title: Unit tests failing with requests 2.8.0 Status in OpenStack Identity (keystone): Invalid Status in openstack-ansible: Fix Committed Status in openstack-ansible kilo series: Fix Committed Status in openstack-ansible liberty series: Fix Committed Status in openstack-ansible trunk series: Fix Committed Bug description: When the tests are run, a bunch of them fail: pkg_resources.ContextualVersionConflict: (requests 2.8.0 (/home/jenkins/workspace/gate-keystone- python27/.tox/py27/lib/python2.7/site-packages), Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy'])) global-requirements has requests!=2.8.0 , but something must be pulling in that version of requests! 
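The ContextualVersionConflict above means oslo.policy declares requests!=2.8.0,>=2.5.2 while requests 2.8.0 is what actually got installed. A toy illustration of how such a specifier excludes exactly that release (this is a deliberately simplified stand-in, not pkg_resources itself, and supports only != and >= clauses):

```python
# Simplified specifier check; real tools use pkg_resources / packaging.
def satisfies(version, spec):
    """Check a dotted version against comma-separated !=/>= clauses."""
    def key(v):
        return tuple(int(part) for part in v.split("."))
    for clause in spec.split(","):
        op, pinned = clause[:2], clause[2:]
        if op == "!=" and key(version) == key(pinned):
            return False
        if op == ">=" and key(version) < key(pinned):
            return False
    return True

print(satisfies("2.8.0", "!=2.8.0,>=2.5.2"))  # False: explicitly excluded
print(satisfies("2.8.1", "!=2.8.0,>=2.5.2"))  # True
```

The conflict in the bug is therefore not a code failure in keystone at all: the installed version simply falls in the excluded set of a dependency's requirement.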
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1505295] Re: Tox tests failing with AttributeError
** Changed in: openstack-ansible Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1505295 Title: Tox tests failing with AttributeError Status in Cinder: Fix Committed Status in Designate: Fix Committed Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in openstack-ansible: Fix Released Bug description: Currently all tests run in Jenkins python27 and python34 are failing with an AttributeError, saying that "'str' has no attribute 'DEALER'", as well as an AssertionError on assert TRANSPORT is not None in cinder/rpc.py. An example of the full traceback of the failure can be found here: http://paste.openstack.org/show/476040/ To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1505295/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
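The quoted message is the generic shape of accessing an attribute on a string that was passed where a module or driver object was expected; a minimal reproduction of the error shape (illustrative only, not the actual oslo.messaging code path):

```python
# Something passed the string "zmq" where an object exposing a DEALER
# attribute (e.g. the pyzmq module) was expected. The names here are
# hypothetical; only the resulting error message matches the bug.
transport_backend = "zmq"
try:
    transport_backend.DEALER
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'DEALER'
```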
[Yahoo-eng-team] [Bug 1515485] Re: Heat CFN signals do not pass authorization
** Also affects: openstack-ansible/trunk Importance: Medium Status: Triaged ** Also affects: openstack-ansible/kilo Importance: Undecided Status: New ** Also affects: openstack-ansible/liberty Importance: Undecided Status: New ** Changed in: openstack-ansible/trunk Status: Triaged => Invalid ** Changed in: openstack-ansible/liberty Status: New => Invalid ** Changed in: openstack-ansible/kilo Status: New => In Progress ** Changed in: openstack-ansible/kilo Status: In Progress => Triaged ** Changed in: openstack-ansible/kilo Milestone: None => 11.2.5 ** Changed in: openstack-ansible/kilo Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible/trunk Milestone: 11.2.5 => None ** Changed in: openstack-ansible/kilo Milestone: 11.2.5 => 11.2.6 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1515485 Title: Heat CFN signals do not pass authorization Status in OpenStack Identity (keystone): Invalid Status in OpenStack Identity (keystone) kilo series: Incomplete Status in openstack-ansible: Invalid Status in openstack-ansible kilo series: Triaged Status in openstack-ansible liberty series: Invalid Status in openstack-ansible trunk series: Invalid Bug description: Note that this bug applies to the Kilo release. Master does not appear to have this problem. I did not test liberty yet. Heat templates that rely on CFN signals timeout because the API calls that execute these signals return 403 errors. Heat signals, on the other side, do work. The problem was reported to me by Alex Cantu. I have verified it on his multinode lab and have also reproduced on my own single-node system hosted on a public cloud server. I suspect liberty/master avoided the problem after Jesse and I reworked the Heat configuration to use Keystone v3 the last day before the L release. 
Example template, which can be executed in an AIO after running the tempest playbook:

heat_template_version: 2013-05-23
resources:
  wait_condition:
    type: AWS::CloudFormation::WaitCondition
    properties:
      Handle: { get_resource: wait_handle }
      Count: 1
      Timeout: 600
  wait_handle:
    type: AWS::CloudFormation::WaitConditionHandle
  my_instance:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - network: "private"
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            echo "wc_notify"
            curl -H "Content-Type:" -X PUT wc_notify --data-binary '{"status": "SUCCESS"}'
          params:
            wc_notify: { get_resource: wait_handle }

This template should finish very quickly, as it starts a cirros instance that just sends a signal back to Heat. Instead, it times out. The user data script dumps the signal URL to the console log; if you then try to send the signal manually you will get a 403. The original 403 can also be seen in the heat-api-cfn.log file. Here is the log snippet:

2015-11-12 05:13:34.491 1862 INFO heat.api.aws.ec2token [-] Checking AWS credentials..
2015-11-12 05:13:34.492 1862 INFO heat.api.aws.ec2token [-] AWS credentials found, checking against keystone.
2015-11-12 05:13:34.493 1862 INFO heat.api.aws.ec2token [-] Authenticating with http://172.29.236.100:5000/v3/ec2tokens
2015-11-12 05:13:34.533 1862 INFO heat.api.aws.ec2token [-] AWS authentication failure.
2015-11-12 05:13:34.534 1862 INFO eventlet.wsgi.server [-] 10.0.3.181,172.29.236.100 - - [12/Nov/2015 05:13:34] "PUT /v1/waitcondition/arn%3Aopenstack%3Aheat%3A%3A683acadf4d04489f8e991b44014e6fc1%3Astacks%2Fwc1%2Faa4083b6-ce6c-411f-9df9-d059abacf40c%2Fresources%2Fwait_handle?Timestamp=2015-11-12T05%3A12%3A27Z=HmacSHA256=65657d1021e24e49ba4fb6f217ca4a22=2=aCG%2FO04MNLzSlf5gIBGw1hMcC7bQzB3pZXVKzXLLNSo%3D HTTP/1.1" 403 301 0.043961

For reference, the curl command to trigger the signal is: curl -H "Content-Type:" -X PUT "<cfn-signal-url>".
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1515485/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1505153] Re: gates broken by WebOb 1.5 release
** Changed in: openstack-ansible Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1505153 Title: gates broken by WebOb 1.5 release Status in Cinder: Fix Released Status in Manila: Fix Released Status in OpenStack Compute (nova): Fix Released Status in openstack-ansible: Fix Released Bug description: Hi, WebOb 1.5 was released yesterday. test_misc in Cinder started failing with this release. I wrote this simple fix, which should be enough to repair it: https://review.openstack.org/233528 "Fix test_misc for WebOb 1.5"

 class ConvertedException(webob.exc.WSGIHTTPException):
-    def __init__(self, code=0, title="", explanation=""):
+    def __init__(self, code=500, title="", explanation=""):

Victor To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1505153/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
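The fix changes the default status code from the sentinel 0 to a real HTTP status, 500. A plausible reading (a hypothetical sketch, not WebOb's actual implementation) is that WebOb 1.5 began validating status codes at construction time, so any exception built without an explicit code hit the invalid default:

```python
# Hypothetical stand-in for the stricter validation introduced in WebOb 1.5;
# the class and the range check are illustrative, not WebOb's real code.
class StrictHTTPException(Exception):
    def __init__(self, code=500, title="", explanation=""):
        # A status outside the valid HTTP range is rejected up front.
        if not 100 <= code <= 599:
            raise ValueError("unsupported HTTP status code: %r" % code)
        self.code, self.title, self.explanation = code, title, explanation
        super().__init__(title or "HTTP %d" % code)

exc = StrictHTTPException()        # default of 500 is accepted
try:
    StrictHTTPException(code=0)    # the old default of 0 now blows up
except ValueError:
    pass
```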
[Yahoo-eng-team] [Bug 1505677] Re: oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-conductor log
** Changed in: openstack-ansible Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1505677 Title: oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova- conductor log Status in OpenStack Compute (nova): Fix Released Status in openstack-ansible: Fix Released Status in oslo.versionedobjects: Fix Released Bug description: In nova-conductor we're seeing the following error for stable/liberty: 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last): 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher executor_callback)) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher executor_callback) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, in object_class_action_versions 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, objname, objmethod, object_versions, args, kwargs) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 477, in object_class_action_versions 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher if 
isinstance(result, nova_object.NovaObject) else result) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 535, in obj_to_primitive 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher version_manifest) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 507, in obj_make_compatible_from_manifest 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher return self.obj_make_compatible(primitive, target_version) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/objects/instance.py", line 1325, in obj_make_compatible 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher target_version) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/objects/base.py", line 262, in obj_make_compatible 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher rel_versions = self.obj_relationships['objects'] 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher KeyError: 'objects' 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher More details here: http://logs.openstack.org/56/233756/8/check/gate-openstack-ansible-dsvm-commit/879f745/logs/aio1_nova_conductor_container-5ec67682/nova-conductor.log To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1505677/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
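The traceback bottoms out in a plain dict lookup: rel_versions = self.obj_relationships['objects'] raises KeyError when the relationship map has no 'objects' entry. A minimal reproduction of the failing lookup (the dict contents here are made up; only the access pattern matches the trace):

```python
# obj_relationships normally maps child-field names to a version history;
# this sample deliberately omits the 'objects' key to trigger the error.
obj_relationships = {"fault": [("1.0", "1.0")]}

try:
    rel_versions = obj_relationships["objects"]   # what the traceback shows
except KeyError as exc:
    print(exc)  # 'objects' -- matches the logged error

# A defensive variant that tolerates a missing entry:
rel_versions = obj_relationships.get("objects", [])
```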
[Yahoo-eng-team] [Bug 1505677] [NEW] oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-conductor log
Public bug reported: In nova-conductor we're seeing the following error for stable/liberty: 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last): 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher executor_callback)) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher executor_callback) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, in object_class_action_versions 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, objname, objmethod, object_versions, args, kwargs) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 477, in object_class_action_versions 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher if isinstance(result, nova_object.NovaObject) else result) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 535, in obj_to_primitive 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher version_manifest) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 507, in 
obj_make_compatible_from_manifest 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher return self.obj_make_compatible(primitive, target_version) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/objects/instance.py", line 1325, in obj_make_compatible 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher target_version) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/objects/base.py", line 262, in obj_make_compatible 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher rel_versions = self.obj_relationships['objects'] 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher KeyError: 'objects' 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher More details here: http://logs.openstack.org/56/233756/8/check/gate-openstack-ansible-dsvm-commit/879f745/logs/aio1_nova_conductor_container-5ec67682/nova-conductor.log ** Affects: nova Importance: Undecided Status: New ** Affects: openstack-ansible Importance: Critical Assignee: Jesse Pretorius (jesse-pretorius) Status: Confirmed ** Affects: oslo.versionedobjects Importance: Undecided Status: New ** Also affects: nova Importance: Undecided Status: New ** Also affects: oslo.versionedobjects Importance: Undecided Status: New ** Description changed: In nova-conductor we're seeing the following error for stable/liberty: 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last): 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher executor_callback)) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch 
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher executor_callback) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args) 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, in object_class_action_versions 2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, objname, objmethod, object_versions, args, kwargs)
[Yahoo-eng-team] [Bug 1505295] Re: Tox tests failing with AttributeError
** Also affects: openstack-ansible Importance: Undecided Status: New ** Changed in: openstack-ansible Milestone: None => 12.0.0 ** Changed in: openstack-ansible Importance: Undecided => High ** Changed in: openstack-ansible Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible Status: New => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1505295 Title: Tox tests failing with AttributeError Status in Cinder: New Status in neutron: New Status in openstack-ansible: In Progress Status in oslo.messaging: New Bug description: Currently all tests run in Jenkins python27 and python34 are failing with an AttributeError, saying that "'str' has no attribute 'DEALER'", as well as an AssertionError on assert TRANSPORT is not None in cinder/rpc.py. An example of the full traceback of the failure can be found here: http://paste.openstack.org/show/476040/ To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1505295/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade
** Also affects: openstack-ansible Importance: Undecided Status: New ** Changed in: openstack-ansible Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible Importance: Undecided => High ** Changed in: openstack-ansible Status: New => In Progress ** Changed in: openstack-ansible Milestone: None => 12.0.0 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1476770 Title: _translate_from_glance fails with "AttributeError: id" in grenade Status in Glance: Invalid Status in openstack-ansible: In Progress Status in OpenStack-Gate: Fix Committed Status in oslo.vmware: Fix Released Status in python-glanceclient: New Bug description: http://logs.openstack.org/28/204128/2/check/gate-grenade- dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE 2015-07-21 17:05:37.447 ERROR nova.api.openstack [req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 tempest-ServersTestJSON-745803609] Caught error: id 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent call last): 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return req.get_response(self.application) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, catch_exc_info=False) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in call_application 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = application(self.environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 634, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._call_app(env, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 554, in _call_app 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return self._app(env, _fake_start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = self.app(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, start_response) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = self.call_func(req, *args, **self.kwargs) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 
return self.func(req, *args, **kwargs) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__ 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, body, accept) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = self.dispatch(meth, request, action_args) 2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack File "/opt/stack/old/nova/nova/a
[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0
** Also affects: openstack-ansible Importance: Undecided Status: New ** Changed in: openstack-ansible Status: New => In Progress ** Changed in: openstack-ansible Importance: Undecided => High ** Changed in: openstack-ansible Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible Milestone: None => 12.0.0 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1505326 Title: Unit tests failing with requests 2.8.0 Status in Keystone: Confirmed Status in openstack-ansible: In Progress Bug description: When the tests are run, a bunch of them fail: pkg_resources.ContextualVersionConflict: (requests 2.8.0 (/home/jenkins/workspace/gate-keystone- python27/.tox/py27/lib/python2.7/site-packages), Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy'])) global-requirements has requests!=2.8.0 , but something must be pulling in that version of requests! To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1505153] Re: gates broken by WebOb 1.5 release
** Also affects: openstack-ansible Importance: Undecided Status: New ** Changed in: openstack-ansible Status: New => In Progress ** Changed in: openstack-ansible Importance: Undecided => High ** Changed in: openstack-ansible Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius) ** Changed in: openstack-ansible Milestone: None => 12.0.0 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1505153 Title: gates broken by WebOb 1.5 release Status in Cinder: Fix Committed Status in Manila: In Progress Status in OpenStack Compute (nova): In Progress Status in openstack-ansible: In Progress Bug description: Hi, WebOb 1.5 was released yesterday. test_misc in Cinder started failing with this release. I wrote this simple fix, which should be enough to repair it: https://review.openstack.org/233528 "Fix test_misc for WebOb 1.5"

 class ConvertedException(webob.exc.WSGIHTTPException):
-    def __init__(self, code=0, title="", explanation=""):
+    def __init__(self, code=500, title="", explanation=""):

Victor To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1505153/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1440762] Re: Rebuild an instance with attached volume fails
** No longer affects: openstack-ansible -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1440762 Title: Rebuild an instance with attached volume fails Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) juno series: In Progress Status in OpenStack Compute (nova) kilo series: In Progress Bug description: When trying to rebuild an instance with attached volume, it fails with the errors: 2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher libvirtError: Failed to terminate process 22913 with SIGKILL: Device or resource busy 2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher 180Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host stats, it is trying to get disk info for instance-0003, but the backing volume block device was removed by concurrent operations such as resize. Error: No volume Block Device Mapping at path: /dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1 182Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event) The full log of rebuild process is here: http://paste.openstack.org/show/166892/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1440762/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1470635] Re: endpoints added with v3 are not visible with v2
I can confirm that this is a problem, and I agree that endpoints created using the v3 api really should be available via the v2 api. ** Changed in: keystone Status: New => Confirmed ** Also affects: openstack-ansible Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1470635 Title: endpoints added with v3 are not visible with v2 Status in OpenStack Identity (Keystone): Confirmed Status in Ansible playbooks for deploying OpenStack: New Status in Puppet module for Keystone: Confirmed Bug description: Create an endpoint with v3::

    # openstack --os-identity-api-version 3 [--admin credentials] endpoint create

try to list endpoints with v2::

    # openstack --os-identity-api-version 2 [--admin credentials] endpoint list

nothing. We are in the process of trying to convert puppet-keystone to v3 with the goal of maintaining backwards compatibility. That means, we want admins/operators not to have to change any existing workflow. This bug causes openstack endpoint list to return nothing, which breaks existing workflows and backwards compatibility. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1470635/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1471289] [NEW] Fernet tokens and Federated Identities result in token scope failures
Public bug reported: When keystone is configured to use fernet tokens and is also configured to be a SP for an external IDP, then the token data received by nova and other services appears to not contain the right information, resulting in errors from nova-api-os-compute such as:

Returning 400 to user: Malformed request URL: URL's project_id '69f5cff441e04554b285d7772630dec1' doesn't match Context's project_id 'None'

When keystone is switched to use uuid tokens, then everything works as expected. Further debugging of the request to the nova api shows:

'HTTP_X_USER_DOMAIN_NAME': None, 'HTTP_X_DOMAIN_ID': None, 'HTTP_X_PROJECT_DOMAIN_ID': None, 'HTTP_X_ROLES': '', 'HTTP_X_TENANT_ID': None, 'HTTP_X_PROJECT_DOMAIN_NAME': None, 'HTTP_X_TENANT': None, 'HTTP_X_USER': u'S-1-5-21-2917001131-1385516553-613696311-1108', 'HTTP_X_USER_DOMAIN_ID': None, 'HTTP_X_AUTH_PROJECT_ID': '69f5cff441e04554b285d7772630dec1', 'HTTP_X_DOMAIN_NAME': None, 'HTTP_X_PROJECT_NAME': None, 'HTTP_X_PROJECT_ID': None, 'HTTP_X_USER_NAME': u'S-1-5-21-2917001131-1385516553-613696311-1108'

Comparing the interaction of nova-api-os-compute with keystone for the token validation between an internal user and a federated user, the following is seen:

### federated user ###

2015-07-03 14:43:05.229 8103 DEBUG keystoneclient.session [-] REQ: curl -g -i --insecure -X GET https://sp.testenvironment.local:5000/v3/auth/tokens -H "X-Subject-Token: {SHA1}acff9b5962270fec270e693eacb4c987c335f5c5" -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a6a8a70ae39c533379eccd51b6d253f264d59f14" _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:193
2015-07-03 14:43:05.265 8103 DEBUG keystoneclient.session [-] RESP: [200] content-length: 402 x-subject-token: {SHA1}acff9b5962270fec270e693eacb4c987c335f5c5 vary: X-Auth-Token keep-alive: timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) connection: Keep-Alive date: Fri, 03 Jul 2015 14:43:05 GMT content-type: application/json x-openstack-request-id: req-df3dce71-3174-4753-b883-11eb31a67d7c
RESP BODY: {"token": {"methods": ["token"], "expires_at": "2015-07-04T02:43:04.00Z", "extras": {}, "user": {"OS-FEDERATION": {"identity_provider": {"id": "adfs-idp"}, "protocol": {"id": "saml2"}, "groups": []}, "id": "S-1-5-21-2917001131-1385516553-613696311-1108", "name": "S-1-5-21-2917001131-1385516553-613696311-1108"}, "audit_ids": ["_a6BbQ6mSoGAY2u9NN0tFA"], "issued_at": "2015-07-03T14:43:04.00Z"}}

### internal user ###

2015-07-03 14:28:31.875 8103 DEBUG keystoneclient.session [-] REQ: curl -g -i --insecure -X GET https://sp.testenvironment.local:5000/v3/auth/tokens -H "X-Subject-Token: {SHA1}b9c6748d65a0492faa9862fabf0a56fd5fdd255d" -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a6a8a70ae39c533379eccd51b6d253f264d59f14" _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:193
2015-07-03 14:28:31.949 8103 DEBUG keystoneclient.session [-] RESP: [200] content-length: 6691 x-subject-token: {SHA1}b9c6748d65a0492faa9862fabf0a56fd5fdd255d vary: X-Auth-Token keep-alive: timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) connection: Keep-Alive date: Fri, 03 Jul 2015 14:28:31 GMT content-type: application/json x-openstack-request-id: req-6e0ed9f4-46c3-4c79-b444-f72963fc9503
RESP BODY: {"token": {"methods": ["password"], "roles": [{"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "expires_at": "2015-07-04T02:28:31.00Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "0f491c8551c04cdc804a479af0bf13ec", "name": "demo"}, "catalog": removed, "extras": {}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "76c8c3017c954d88a6ad69ee4cb656d6", "name": "test"}, "audit_ids": ["aAN_V0c6SLSI0Rm1hoScCg"], "issued_at": "2015-07-03T14:28:31.00Z"}}

The data structures that come back from keystone are clearly quite different.
### configuration environment ###

Ubuntu 14.04 OS

nova==12.0.0.0a1.dev51 # commit a4f4be370be06cfc9aa3ed30d2445277e832376f from master branch
keystone==8.0.0.0a1.dev12 # commit a7ca13b687dd284f0980d768b11a3d1b52b4106e from master branch
python-keystoneclient==1.6.1.dev19 # commit d238cc9af4927d1092de207db978536d712af129 from master branch
python-openstackclient==1.5.1.dev11 # commit 2d6bc8f4c38dbf997e3e71119f13f0328b4a8669 from master branch
python-novaclient==2.26.1.dev25 # commit 3c2ff0faad8c84777ffe7d9946a1bc4486116084 from master branch
keystonemiddleware==2.0.0
oslo.concurrency==2.1.0
oslo.config==1.12.1
oslo.context==0.4.0
oslo.db==1.12.0
oslo.i18n==2.0.0
oslo.log==1.5.0
oslo.messaging==1.15.0
oslo.middleware==2.3.0
oslo.policy==0.6.0
oslo.serialization==1.6.0
oslo.utils==1.6.0

Keystone is configured as a Shibboleth SP with a trust relationship between it and an ADFS IdP. The mapping rules are set up as follows - note that the user's default_project_id was added in an attempt to see whether it helped. It does seem to be reflected in the HTTP_X_AUTH_PROJECT_ID at the nova api as shown above. [ { "local": [
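The missing project scope is visible directly in the two RESP BODY payloads quoted above. As a rough illustration (this is not keystonemiddleware's actual code, and the helper name is made up), the service-side header ends up as None simply because the federated fernet token body has no "project" section for the middleware to copy from:

```python
# Sketch only: mirrors the shape of the two RESP BODY dumps in this bug report.

def project_id_from_token(token_body):
    """Return the scoped project id, or None for an unscoped token."""
    project = token_body.get("token", {}).get("project")
    return project["id"] if project else None

# Token bodies abbreviated from the federated and internal responses above.
federated = {"token": {"methods": ["token"],
                       "user": {"id": "S-1-5-21-2917001131-1385516553-613696311-1108"}}}
internal = {"token": {"methods": ["password"],
                      "project": {"id": "0f491c8551c04cdc804a479af0bf13ec",
                                  "name": "demo"},
                      "user": {"id": "76c8c3017c954d88a6ad69ee4cb656d6"}}}

print(project_id_from_token(federated))  # -> None, hence HTTP_X_PROJECT_ID: None
print(project_id_from_token(internal))   # -> 0f491c8551c04cdc804a479af0bf13ec
```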
[Yahoo-eng-team] [Bug 1438543] Re: wrong package name 'XStatic-Angular-Irdragndrop' in horizon/requirements.txt
** Also affects: xstatic-angular-irdragndrop Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1438543

Title: wrong package name 'XStatic-Angular-Irdragndrop' in horizon/requirements.txt

Status in OpenStack Dashboard (Horizon): Confirmed
Status in Xstatic Angular IrDragNDrop: New

Bug description: There's a wrong package name 'XStatic-Angular-Irdragndrop' in horizon/requirements.txt. It should be 'XStatic-Angular-lrdragndrop': a lowercase 'l' (lrdragndrop) instead of an uppercase 'I' (Irdragndrop). This causes devstack to fail because there's no such package on pypi.

--
2015-03-31 05:40:50.388 | Could not find any downloads that satisfy the requirement XStatic-Angular-Irdragndrop>=1.0.2.1 (from horizon==2015.1.dev110)
2015-03-31 05:40:50.388 | Some externally hosted files were ignored as access to them may be unreliable (use --allow-external XStatic-Angular-Irdragndrop to allow).
2015-03-31 05:40:50.704 | No distributions at all found for XStatic-Angular-Irdragndrop>=1.0.2.1 (from horizon==2015.1.dev110)
--

and this bug is also logged at redhat https://bugzilla.redhat.com/show_bug.cgi?id=1196957

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1438543/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
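The mixup is easy to miss because uppercase 'I' (U+0049) and lowercase 'l' (U+006C) render almost identically in many fonts. A small illustrative check (hypothetical helper, not part of horizon) shows the two names differ in exactly one character:

```python
# The wrong name uses uppercase 'I'; the right one uses lowercase 'l'.
WRONG = "XStatic-Angular-Irdragndrop"
RIGHT = "XStatic-Angular-lrdragndrop"

def char_diffs(a, b):
    """Positions and characters where two equal-length strings differ."""
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

print(char_diffs(WRONG, RIGHT))  # -> [(16, 'I', 'l')]: a single lookalike character
```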
[Yahoo-eng-team] [Bug 1407685] Re: New eventlet library breaks nova-manage
** Also affects: openstack-ansible Importance: Undecided Status: New
** Also affects: openstack-ansible/juno Importance: Undecided Status: New
** Also affects: openstack-ansible/icehouse Importance: Undecided Status: New
** Also affects: openstack-ansible/trunk Importance: Undecided Status: New
** Changed in: openstack-ansible/icehouse Importance: Undecided => High
** Changed in: openstack-ansible/juno Importance: Undecided => High
** Changed in: openstack-ansible/trunk Importance: Undecided => High
** Changed in: openstack-ansible/icehouse Milestone: None => next
** Changed in: openstack-ansible/juno Milestone: None => 10.1.2

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1407685

Title: New eventlet library breaks nova-manage

Status in OpenStack Compute (Nova): Fix Released
Status in OpenStack Compute (nova) juno series: Fix Released
Status in Ansible playbooks for deploying OpenStack: New
Status in openstack-ansible icehouse series: New
Status in openstack-ansible juno series: New
Status in openstack-ansible trunk series: New

Bug description: This only affects stable/juno and stable/icehouse, which still use the deprecated eventlet.util module:

~# nova-manage service list
2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] Could not load 'file': cannot import name util
2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] cannot import name util
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension Traceback (most recent call last):
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension File "/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py", line 162, in _load_plugins
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension verify_requirements,
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension File "/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py", line 178, in
_load_one_plugin
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension plugin = ep.load(require=verify_requirements)
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension File "/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2306, in load
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension return self._load()
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension File "/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2309, in _load
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension module = __import__(self.module_name, fromlist=['__name__'], level=0)
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension File "/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/image/download/file.py", line 23, in <module>
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension import nova.virt.libvirt.utils as lv_utils
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension File "/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py", line 15, in <module>
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from nova.virt.libvirt import driver
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension File "/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 59, in <module>
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from eventlet import util as eventlet_util
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension ImportError: cannot import name util
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1407685/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
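The underlying failure is an import of a module that newer eventlet releases removed. The actual nova fix was to stop importing `eventlet.util`, but as a general pattern, a guarded import tolerates a module disappearing between library versions. The sketch below is illustrative only (`optional_import` is a hypothetical helper, not nova's patch):

```python
# Sketch: import lazily and tolerate ImportError when a dependency drops a module.
import importlib

def optional_import(name):
    """Return the named module, or None if it cannot be imported."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

# A module present in any Python install, and a name that does not exist.
print(optional_import("json") is not None)          # -> True
print(optional_import("no_such_module_xyz123"))     # -> None
```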
[Yahoo-eng-team] [Bug 1276639] [NEW] block live migration does not work when a volume is attached
Public bug reported: Environment:
- Two compute nodes, running Ubuntu 12.04 LTS
- KVM Hypervisor
- Ceph (dumpling) back-end for Cinder
- Grizzly-level Openstack

Steps to reproduce:
1) Create instance and volume
2) Attach volume to instance
3) Attempt a block migration between compute nodes - eg: nova live-migration --block-migrate 9b85b983-dced-4574-b14c-c72e4d92982a

Packages:
ii ceph 0.67.5-1precise
ii ceph-common 0.67.5-1precise
ii ceph-fs-common 0.67.5-1precise
ii ceph-fuse 0.67.5-1precise
ii ceph-mds 0.67.5-1precise
ii curl 7.29.0-1precise.ceph
ii kvm 1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu14.13
ii kvm-ipxe 1.0.0+git-3.55f6c88-0ubuntu1
ii libcephfs1 0.67.5-1precise
ii libcurl3 7.29.0-1precise.ceph
ii libcurl3-gnutls 7.29.0-1precise.ceph
ii libleveldb1 1.12.0-1precise.ceph
ii nova-common 1:2013.1.4-0ubuntu1~cloud0
ii nova-compute 1:2013.1.4-0ubuntu1~cloud0
ii nova-compute-kvm 1:2013.1.4-0ubuntu1~cloud0
ii python-ceph 0.67.5-1precise
ii python-cinderclient 1:1.0.3-0ubuntu1~cloud0
ii python-nova 1:2013.1.4-0ubuntu1~cloud0
ii python-novaclient 1:2.13.0-0ubuntu1~cloud0
ii qemu-common 1.0+noroms-0ubuntu14.13
ii qemu-kvm 1.0+noroms-0ubuntu14.13
ii qemu-utils 1.0+noroms-0ubuntu14.13
ii libvirt-bin 1.0.2-0ubuntu11.13.04.5~cloud1
ii libvirt0 1.0.2-0ubuntu11.13.04.5~cloud1
ii python-libvirt 1.0.2-0ubuntu11.13.04.5~cloud1

/var/log/nova/nova-compute on source:

2014-02-05 16:36:46.014 998 INFO nova.compute.manager [-] Lifecycle event 2 on VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:36:46.233 998 INFO nova.compute.manager [-] [instance: 9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has a pending task. Skip.
2014-02-05 16:36:46.234 998 INFO nova.compute.manager [-] Lifecycle event 2 on VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:36:46.468 998 INFO nova.compute.manager [-] [instance: 9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has a pending task. Skip.
2014-02-05 16:41:09.029 998 INFO nova.compute.manager [-] Lifecycle event 1 on VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:41:09.265 998 INFO nova.compute.manager [-] [instance: 9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has a pending task. Skip.
2014-02-05 16:41:09.640 998 ERROR nova.virt.libvirt.driver [-] [instance: 9b85b983-dced-4574-b14c-c72e4d92982a] Live Migration failure: Unable to read from monitor: Connection reset by peer
2014-02-05 16:41:12.165 998 WARNING nova.compute.manager [-] [instance: 9b85b983-dced-4574-b14c-c72e4d92982a] Instance shutdown by itself. Calling the stop API.
2014-02-05 16:41:12.398 998 INFO nova.virt.libvirt.driver [-] [instance: 9b85b983-dced-4574-b14c-c72e4d92982a] Instance destroyed successfully.

/var/log/libvirt/libvirtd.log on source:

2014-02-05 14:41:07.607+0000: 3437: error : qemuMonitorIORead:502 : Unable to read from monitor: Connection reset by peer
2014-02-05 14:41:09.633+0000: 3441: error : virNetClientProgramDispatchError:175 : An error occurred, but the cause is unknown
2014-02-05 14:41:09.634+0000: 3441: error : qemuDomainObjEnterMonitorInternal:997 : operation failed: domain is no longer running
2014-02-05 14:41:09.634+0000: 3441: warning : doPeer2PeerMigrate3:2872 : Guest instance-0315 probably left in 'paused' state on source

/var/log/nova/nova-compute.log on target:

2014-02-05 16:36:38.841 INFO nova.virt.libvirt.driver [req-0f0eaabf-9e29-4d45-88c9-20194be51d49 aaf3e92b69e04958b43348677ab7b38b 1859d80f51ff4180b591f7fe2668fd68] Instance launched has CPU info: {"vendor": "Intel", "model": "SandyBridge", "arch": "x86_64", "features": ["pdpe1gb", "osxsave", "dca", "pcid", "pdcm", "xtpr", "tm2", "est", "smx", "vmx", "ds_cpl", "monitor", "dtes64", "pbe", "tm", "ht", "ss", "acpi", "ds", "vme"], "topology": {"cores": 6, "threads": 2, "sockets": 1}}
2014-02-05 16:36:46.008 28458 INFO nova.compute.manager [-] Lifecycle event 0 on VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:36:46.244 28458 INFO nova.compute.manager [-] [instance:
9b85b983-dced-4574-b14c-c72e4d92982a] During the sync_power process the instance has moved from host ctpcmp003 to host ctpcmp005 2014-02-05 16:41:09.634 28458 INFO nova.compute.manager [-] Lifecycle event 1 on VM 9b85b983-dced-4574-b14c-c72e4d92982a 2014-02-05 16:41:09.899 28458 INFO nova.compute.manager [-] [instance: 9b85b983-dced-4574-b14c-c72e4d92982a] During the sync_power process the instance has moved from host ctpcmp003 to host
[Yahoo-eng-team] [Bug 1269795] [NEW] Port tags not reliably implementing
Public bug reported: Environment:
- Ubuntu 12.04.3 LTS
- Grizzly 2013.1.3-0ubuntu1~cloud0
- Quantum with GRE Tunneling
- OpenVSwitch 1.4.0-1ubuntu1.5

I'm getting inconsistent implementations of port tags for the Router internal interfaces and the DHCP's interface when they're created in OVS. For example, what I should be seeing is something like this:

Port tap7ef1ee95-52
    tag: 30
    Interface tap7ef1ee95-52
        type: internal
Port qr-8bfc6675-3a
    tag: 13
    Interface qr-8bfc6675-3a
        type: internal

However, I end up seeing something like this:

Port tap2b520e87-5e
    Interface tap2b520e87-5e
        type: internal
Port qr-ba0036f3-7e
    Interface qr-ba0036f3-7e
        type: internal

It's not consistently happening - sometimes it actually is done correctly. The workaround to repair this is either to manually tag the interfaces, which can be done if at least one of them was tagged, or to restart 'quantum-plugin-openvswitch-agent', which unfortunately causes a drop in connectivity for those which were correctly tagged. Does anyone know under which conditions this issue may occur and whether there are better workarounds?

** Affects: neutron Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1269795

Title: Port tags not reliably implementing

Status in OpenStack Neutron (virtual network service): New

Bug description: Environment:
- Ubuntu 12.04.3 LTS
- Grizzly 2013.1.3-0ubuntu1~cloud0
- Quantum with GRE Tunneling
- OpenVSwitch 1.4.0-1ubuntu1.5

I'm getting inconsistent implementations of port tags for the Router internal interfaces and the DHCP's interface when they're created in OVS. For example, what I should be seeing is something like this:

Port tap7ef1ee95-52
    tag: 30
    Interface tap7ef1ee95-52
        type: internal
Port qr-8bfc6675-3a
    tag: 13
    Interface qr-8bfc6675-3a
        type: internal

However, I end up seeing something like this:

Port tap2b520e87-5e
    Interface tap2b520e87-5e
        type: internal
Port qr-ba0036f3-7e
    Interface qr-ba0036f3-7e
        type: internal

It's not consistently happening - sometimes it actually is done correctly. The workaround to repair this is either to manually tag the interfaces, which can be done if at least one of them was tagged, or to restart 'quantum-plugin-openvswitch-agent', which unfortunately causes a drop in connectivity for those which were correctly tagged. Does anyone know under which conditions this issue may occur and whether there are better workarounds?

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1269795/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
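Until the root cause is pinned down, spotting affected ports can at least be scripted. The sketch below (illustrative only, not part of the OVS agent) scans `ovs-vsctl show`-style output, like the snippets above, for ports that never received a `tag:` line:

```python
# Sketch: find ports with no "tag:" line in `ovs-vsctl show` style output.
import re

def untagged_ports(ovs_show_output):
    """Names of ports that have no 'tag:' line under them."""
    untagged, current, has_tag = [], None, False
    for line in ovs_show_output.splitlines():
        m = re.match(r'\s*Port "?([\w-]+)"?', line)
        if m:
            if current and not has_tag:
                untagged.append(current)
            current, has_tag = m.group(1), False
        elif re.match(r"\s*tag: \d+", line):
            has_tag = True
    if current and not has_tag:
        untagged.append(current)
    return untagged

# Sample assembled from the listings in this bug report (one tagged, one not).
sample = '''\
    Port "tap2b520e87-5e"
        Interface "tap2b520e87-5e"
            type: internal
    Port "qr-8bfc6675-3a"
        tag: 13
        Interface "qr-8bfc6675-3a"
            type: internal
'''
print(untagged_ports(sample))  # -> ['tap2b520e87-5e']
```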
[Yahoo-eng-team] [Bug 1260281] [NEW] Rendering of dashboard is broken in Internet Explorer
Public bug reported: In Internet Explorer (tested with IE9 and IE10) the rendering of various dashboard components is broken. - Content section is shown below the left hand navigation menu most often, unless you have a super-wide screen - Network Topology network names do not display inside the vertical network bar - The rounded edges do not render - The buttons look funny While I realise that some of these are due to differences in the way that IE renders CSS we do feel that it's important to ensure that using IE for Openstack End-Users and Administrators gives a reasonable experience. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1260281 Title: Rendering of dashboard is broken in Internet Explorer Status in OpenStack Dashboard (Horizon): New Bug description: In Internet Explorer (tested with IE9 and IE10) the rendering of various dashboard components is broken. - Content section is shown below the left hand navigation menu most often, unless you have a super-wide screen - Network Topology network names do not display inside the vertical network bar - The rounded edges do not render - The buttons look funny While I realise that some of these are due to differences in the way that IE renders CSS we do feel that it's important to ensure that using IE for Openstack End-Users and Administrators gives a reasonable experience. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1260281/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1260454] [NEW] Add cinder 'extend' volume functionality
Public bug reported: Cinder now has the ability to 'extend' (ie grow/expand/resize up) a volume. This functionality should be exposed through Horizon. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1260454 Title: Add cinder 'extend' volume functionality Status in OpenStack Dashboard (Horizon): New Bug description: Cinder now has the ability to 'extend' (ie grow/expand/resize up) a volume. This functionality should be exposed through Horizon. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1260454/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
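For reference, python-cinderclient already exposes this operation as `volumes.extend(volume, new_size)`, so a Horizon action would essentially validate the requested size and delegate. A minimal sketch with a stand-in client follows (the `extend_volume` helper and the Fake* classes are illustrative, not Horizon code; Cinder only allows growing a volume, not shrinking it):

```python
# Sketch of the call a Horizon "extend volume" action would make.

def extend_volume(client, volume, new_size_gb):
    """Grow `volume` to `new_size_gb`; extending can only increase the size."""
    if new_size_gb <= volume.size:
        raise ValueError("new size must be larger than current size")
    client.volumes.extend(volume, new_size_gb)

# Stand-ins mimicking the cinderclient surface used above.
class FakeVolume:
    def __init__(self, size):
        self.size = size

class FakeVolumes:
    def extend(self, volume, new_size):
        volume.size = new_size

class FakeClient:
    volumes = FakeVolumes()

vol = FakeVolume(size=10)
extend_volume(FakeClient(), vol, 20)
print(vol.size)  # -> 20
```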