[Yahoo-eng-team] [Bug 1588170] Re: Should update nova api version to 2.1
duplicate bug..:( ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588170 Title: Should update nova api version to 2.1 Status in neutron: Invalid Bug description: The nova API has abandoned v2.0 and suggests using v2.1 when calling nova, so the version should be updated from 2.0 to 2.1. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1588170/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1
** Also affects: python-openstackclient Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588171 Title: Should update nova api version to 2.1 Status in Ceilometer: In Progress Status in Cinder: New Status in heat: In Progress Status in neutron: In Progress Status in octavia: New Status in python-openstackclient: New Status in OpenStack Search (Searchlight): In Progress Bug description: The nova team has decided to remove the nova v2 API code completely, and it will be merged very soon: https://review.openstack.org/#/c/311653/ We should bump to v2.1 ASAP. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions
[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1
** Changed in: cinder Assignee: (unassigned) => zhaobo (zhaobo6) ** Also affects: octavia Importance: Undecided Status: New ** Changed in: octavia Assignee: (unassigned) => zhaobo (zhaobo6) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588171 Title: Should update nova api version to 2.1 Status in Ceilometer: In Progress Status in Cinder: New Status in heat: In Progress Status in neutron: In Progress Status in octavia: New Status in python-openstackclient: New Status in OpenStack Search (Searchlight): In Progress Bug description: The nova team has decided to remove the nova v2 API code completely, and it will be merged very soon: https://review.openstack.org/#/c/311653/ We should bump to v2.1 ASAP. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions
[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1
** Also affects: ceilometer Importance: Undecided Status: New ** Also affects: cinder Importance: Undecided Status: New ** Also affects: searchlight Importance: Undecided Status: New ** Also affects: heat Importance: Undecided Status: New ** Changed in: searchlight Assignee: (unassigned) => Zhenyu Zheng (zhengzhenyu) ** Changed in: ceilometer Assignee: (unassigned) => Zhenyu Zheng (zhengzhenyu) ** Description changed: - As nova api had abandoned 2.0, and suggest to use v2.1 if we call nova. - So should update the version from 2.0 to 2.1 + The nova team has decided to remove the nova v2 API code completely. And it will be merged + very soon: https://review.openstack.org/#/c/311653/ + + We should bump to v2.1 ASAP -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588171 Title: Should update nova api version to 2.1 Status in Ceilometer: New Status in Cinder: New Status in heat: New Status in neutron: New Status in OpenStack Search (Searchlight): In Progress Bug description: The nova team has decided to remove the nova v2 API code completely, and it will be merged very soon: https://review.openstack.org/#/c/311653/ We should bump to v2.1 ASAP. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions
[Yahoo-eng-team] [Bug 1588170] [NEW] Should update nova api version to 2.1
Public bug reported: The nova API has abandoned v2.0 and suggests using v2.1 when calling nova, so the version should be updated from 2.0 to 2.1. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588170 Title: Should update nova api version to 2.1 Status in neutron: New Bug description: The nova API has abandoned v2.0 and suggests using v2.1 when calling nova, so the version should be updated from 2.0 to 2.1. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1588170/+subscriptions
[Yahoo-eng-team] [Bug 1588171] [NEW] Should update nova api version to 2.1
Public bug reported: The nova team has decided to remove the nova v2 API code completely, and it will be merged very soon: https://review.openstack.org/#/c/311653/ We should bump to v2.1 ASAP. ** Affects: ceilometer Importance: Undecided Assignee: Zhenyu Zheng (zhengzhenyu) Status: New ** Affects: cinder Importance: Undecided Status: New ** Affects: heat Importance: Undecided Status: New ** Affects: neutron Importance: Undecided Assignee: zhaobo (zhaobo6) Status: New ** Affects: searchlight Importance: Undecided Assignee: Zhenyu Zheng (zhengzhenyu) Status: In Progress ** Changed in: neutron Assignee: (unassigned) => zhaobo (zhaobo6) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588171 Title: Should update nova api version to 2.1 Status in Ceilometer: New Status in Cinder: New Status in heat: New Status in neutron: New Status in OpenStack Search (Searchlight): In Progress Bug description: The nova team has decided to remove the nova v2 API code completely, and it will be merged very soon: https://review.openstack.org/#/c/311653/ We should bump to v2.1 ASAP. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions
[Yahoo-eng-team] [Bug 1588171] [NEW] Should update nova api version to 2.1
You have been subscribed to a public bug: The nova API has abandoned v2.0 and suggests using v2.1 when calling nova, so the version should be updated from 2.0 to 2.1. ** Affects: neutron Importance: Undecided Assignee: zhaobo (zhaobo6) Status: New -- Should update nova api version to 2.1 https://bugs.launchpad.net/bugs/1588171 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
[Yahoo-eng-team] [Bug 1587944] Re: incorrect title of quota:vif_outbound_peak
Reviewed: https://review.openstack.org/324021 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=85375d46c72e6d580315b0626927e84cde081393 Submitter: Jenkins Branch: master commit 85375d46c72e6d580315b0626927e84cde081393 Author: Niall Bunting Date: Wed Jun 1 16:50:20 2016 + Incorrect title for Outbound Peak Changes the title to the correct one. Change-Id: I6c71cd9a1489e4692cdfce252beda16b6e1c670a Closes-Bug: 1587944 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1587944 Title: incorrect title of quota:vif_outbound_peak Status in Glance: Fix Released Bug description: in metadefs/compute-quota.json:

...
"quota:vif_outbound_burst": {
    "title": "Quota: VIF Outbound Burst",
    "description": "Network Virtual Interface (VIF) outbound burst in total kilobytes. Specifies the amount of bytes that can be burst at peak speed.",
    "type": "integer"
},
"quota:vif_outbound_peak": {
    "title": "Quota: VIF Outbound Burst",
    "description": "Network Virtual Interface (VIF) outbound peak in kilobytes per second. Specifies maximum rate at which an interface can send data.",
    "type": "integer"
}
...

The title of vif_outbound_peak should be "Quota: VIF Outbound Peak":

102c102
< "title": "Quota: VIF Outbound Burst",
---
> "title": "Quota: VIF Outbound Peak",

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1587944/+subscriptions
[Yahoo-eng-team] [Bug 1514622] Re: Resource create failed with erro "urce Create Failed: Stackvalidationfailed: Resources.Vdu1: Property Error: "
Insufficient information to diagnose the problem. ** Changed in: tacker Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1514622 Title: Resource create failed with erro "urce Create Failed: Stackvalidationfailed: Resources.Vdu1: Property Error: " Status in OpenStack Compute (nova): Invalid Status in tacker: Invalid Bug description: I tried to run the tacker functional test for creating a vnf without any yaml file, and it resulted in the error below. Resource creation failed with this error: tacker.vm.drivers.heat.heat_DeviceHeat-dfd83f2a-ee12-4680-b192-3df6135bb5ce Create Failed Resource Create Failed: Stackvalidationfailed: Resources.Vdu1: Property Error: Vdu1.Properties.Flavor: Unexpected Api Error. Please Report This At Http://Bugs.Launchpad.Net/Nova/ And Attach The Nova Api Log If Possible. (Http 500) (Request-Id: Req-6b0021e8-Eda1-40af-B014-3d1a1c310bc8) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1514622/+subscriptions
[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible for python 3
Reviewed: https://review.openstack.org/323280 Committed: https://git.openstack.org/cgit/openstack/tempest/commit/?id=2e2c83a52765aad347176366074ec3c94366ad10 Submitter: Jenkins Branch: master commit 2e2c83a52765aad347176366074ec3c94366ad10 Author: Yatin Kumbhare Date: Mon May 30 22:45:58 2016 +0530 Keep py3.X compatibility for urllib Change-Id: Iba10637688ada66f2e3003cd87bbba7d4db4abc7 Closes-Bug: #1280105 ** Changed in: tempest Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1280105 Title: urllib/urllib2 is incompatible for python 3 Status in Ceilometer: Fix Released Status in Cinder: In Progress Status in Fuel for OpenStack: Fix Committed Status in Glance: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in Magnum: In Progress Status in Manila: Fix Committed Status in neutron: Fix Released Status in python-troveclient: Fix Released Status in refstack: Fix Released Status in Sahara: Fix Released Status in tacker: In Progress Status in tempest: Fix Released Status in OpenStack DBaaS (Trove): In Progress Bug description: urllib/urllib2 is incompatible for python 3 To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions
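The py2/py3-compatible import pattern behind these urllib fixes can be sketched as follows (a minimal standalone example of the idea, not the exact tempest patch):

```python
# Python 2 exposed urlparse as a top-level module; Python 3 moved it to
# urllib.parse. Trying the Python 3 location first keeps both working.
try:
    import urllib.parse as urlparse  # Python 3
except ImportError:
    import urlparse  # Python 2

parts = urlparse.urlparse("https://bugs.launchpad.net/bugs/1280105")
print(parts.netloc)  # bugs.launchpad.net
print(parts.path)    # /bugs/1280105
```

Projects such as tempest and the others listed above instead route this through six.moves.urllib, which performs the same dispatch internally.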
[Yahoo-eng-team] [Bug 1588136] [NEW] The asterisk should be red to be consistent with other error message
Public bug reported: When a required field is left blank, all of the error messages are shown in red, but the asterisk is in blue; all of the error indicators should use a unified style. Please see the screenshot for more detail. ** Affects: horizon Importance: Undecided Assignee: qiaomin032 (chen-qiaomin) Status: In Progress ** Attachment added: "image-required.jpeg" https://bugs.launchpad.net/bugs/1588136/+attachment/4674882/+files/image-required.jpeg -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1588136 Title: The asterisk should be red to be consistent with other error message Status in OpenStack Dashboard (Horizon): In Progress Bug description: When a required field is left blank, all of the error messages are shown in red, but the asterisk is in blue; all of the error indicators should use a unified style. Please see the screenshot for more detail. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1588136/+subscriptions
[Yahoo-eng-team] [Bug 1455460] Re: resize server fails silently
Ok. ** Changed in: nova Status: In Progress => Invalid ** Changed in: nova Importance: Medium => Undecided ** Changed in: nova Assignee: Charlotte Han (hanrong) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1455460 Title: resize server fails silently Status in OpenStack Compute (nova): Invalid Bug description: An attempt to resize an instance from a bigger flavor to a smaller one fails, but no user notification happened in the CLI:

root@node-7:~# nova resize --poll a4 1
Server resizing... 100% complete

while nova-compute.log has the trace:

2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 137, in _dispatch_and_reply
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 180, in _dispatch
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 126, in _do_dispatch
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     payload)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 298, in decorated_function
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     pass
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 284, in decorated_function
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 348, in decorated_function
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 272, in decorated_function
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     migration.instance_uuid, exc_info=True)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 259, in decorated_function
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 326, in decorated_function
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-05-15 09:39:45.968 32005 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-05
[Yahoo-eng-team] [Bug 1588118] [NEW] In lbaasv2, when I create a listener, the function _check_pool_loadbalancer_match is not used at the suitable place
Public bug reported: In lbaasv2, a listener can be created with the parameters loadbalancer and default_pool_id. The code is the following:

    if default_pool_id:
        self._check_pool_exists(context, default_pool_id)
        # Get the loadbalancer from the default_pool_id
        if not lb_id:
            default_pool = self.db.get_pool(context, default_pool_id)
            lb_id = default_pool.loadbalancer.id
            listener['loadbalancer_id'] = lb_id
    elif not lb_id:
        raise sharedpools.ListenerMustHaveLoadbalancer()
    if default_pool_id and lb_id:
        self._check_pool_loadbalancer_match(
            context, default_pool_id, lb_id)

The function _check_pool_loadbalancer_match is used to make sure default_pool has the same lb_id as the one given. But if the listener is created with no lb_id, lb_id is set to default_pool.loadbalancer.id, so in that case the _check_pool_loadbalancer_match check is redundant. The following is better:

    if default_pool_id:
        self._check_pool_exists(context, default_pool_id)
        # Get the loadbalancer from the default_pool_id
        if not lb_id:
            default_pool = self.db.get_pool(context, default_pool_id)
            listener['loadbalancer_id'] = default_pool.loadbalancer.id
    elif not lb_id:
        raise sharedpools.ListenerMustHaveLoadbalancer()
    if default_pool_id and lb_id:
        self._check_pool_loadbalancer_match(
            context, default_pool_id, lb_id)

** Affects: neutron Importance: Undecided Assignee: JingLiu (liu-jing5) Status: New ** Changed in: neutron Assignee: (unassigned) => JingLiu (liu-jing5) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588118 Title: In lbaasv2, when I create a listener, the function _check_pool_loadbalancer_match is not used at the suitable place Status in neutron: New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1588118/+subscriptions
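To illustrate why the check is redundant when no lb_id is supplied, here is a minimal, self-contained sketch. The Pool/LoadBalancer stand-ins and the derive_loadbalancer_id helper are hypothetical, not neutron-lbaas code; the point is that when lb_id is absent it is always derived from the default pool, so comparing the two afterwards can never fail.

```python
# Minimal stand-ins for the pool/loadbalancer relationship (hypothetical,
# for illustration only -- not the actual neutron-lbaas objects).
class LoadBalancer:
    def __init__(self, lb_id):
        self.id = lb_id

class Pool:
    def __init__(self, loadbalancer):
        self.loadbalancer = loadbalancer

def derive_loadbalancer_id(pool, lb_id):
    """Mirror the listener-creation logic: derive lb_id from the
    default pool when the caller did not supply one."""
    if lb_id is None:
        lb_id = pool.loadbalancer.id
    # The match check from the report: only meaningful when the caller
    # supplied lb_id explicitly; otherwise it trivially passes.
    assert pool.loadbalancer.id == lb_id
    return lb_id

pool = Pool(LoadBalancer("lb-1"))
print(derive_loadbalancer_id(pool, None))    # derived from the pool: lb-1
print(derive_loadbalancer_id(pool, "lb-1"))  # explicit id: check matters here
```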
[Yahoo-eng-team] [Bug 1588112] [NEW] instance stuck at error state after trying to migrate it to a misconfigured nova-compute node
Public bug reported: I was trying to add another nova-compute node into the cluster. Afterwards I tried to do a live migration to verify the configuration. The instance got stuck in the error state, showing a task state of migrating. I then fixed the misconfiguration on the newly added compute node, but the instance still did not recover. (The misconfiguration was the network of the server and the nova.conf file.) It seems that I can do nothing with it but delete the instance. Some commands and results below:

(openstack) server list
| ID | Name | Status | Networks |
| 84d31496-5d0a-4dbf-99bc-363d018a30a8 | ttt | ACTIVE | test=192.168.1.3 |
| d754e7e4-6dba-45a7-a303-902f45ff4ca0 | testfromdisk | ERROR | test=192.168.1.2 |

server show d754e7e4-6dba-45a7-a303-902f45ff4ca0
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | nova1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | nova1 |
| OS-EXT-SRV-ATTR:instance_name | instance-0003 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | migrating |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | 2016-05-31T05:15:28.00 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.1.2 |
| config_drive | |
| created | 2016-05-31T05:15:15Z |
| fault | {u'message': u'Compute host nova2 could not be found.', u'code': 404, u'created': u'2016-06-01T07:08:13Z'} |
| flavor | m1.tiny (1) |
| hostId | b1f5ce288eb6e023bb7a7fcd2adbea8f32650690fb6b34d070ab8e22 |
| id | d754e7e4-6dba-45a7-a303-902f45ff4ca0 |
| image | |
| key_name | None |
| name | testfromdisk |
| os-extended-volumes:volumes_attached | [{u'id': u'b74b2eff-0955-4d13-adac-7964e109d4ff'}]
[Yahoo-eng-team] [Bug 1554791] Re: Horizon should use upper-constraints in testing
Reviewed: https://review.openstack.org/290203 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=315958aab8273cd2dbfdd617b173d5c063a88948 Submitter: Jenkins Branch: master commit 315958aab8273cd2dbfdd617b173d5c063a88948 Author: Richard Jones Date: Wed Mar 9 11:30:20 2016 +1100 Use upper-constraints in tox test environments Recently OpenStack introduced a mechanism to specify a constrained "working set" of packages that are "guaranteed" to produce a working OpenStack environment. This pinning of packages limits the more broadly-defined requirements.txt which is managed by global-requirements. This patch modifies our tox test environment to use upper-constraints and explicitly removes those requirements from the "venv" tox environment that is used by some commands in infra. Change-Id: I84582370e139fc5812bc85ae5341f7f9c8b93ff5 Closes-Bug: 1554791 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1554791 Title: Horizon should use upper-constraints in testing Status in OpenStack Dashboard (Horizon): Fix Released Bug description: Recently OpenStack introduced a mechanism to specify a constrained "working set" of packages that are "guaranteed" to produce a working OpenStack environment. This pinning of packages limits the more broadly-defined requirements.txt which is managed by global-requirements. Even though it pins package versions, it is called "upper-constraints". We should include those constraints in our test runs. A mechanism exists to allow constraints to be overridden for specific patches by using Depends-On to a constraints update, allowing testing of new constraints.
Given enough test coverage in the constraints updates (to be addressed in a separate patch to the gate jobs for that project) this would have allowed Horizon to detect that the recent heatclient update broke Horizon, but also would have allowed the Horizon gate to continue unaffected by the new heatclient update (because we would have been pinned to a previous, working version). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1554791/+subscriptions
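The mechanism described above is typically wired into tox.ini roughly like this (a sketch of the common OpenStack pattern; the exact install_command and constraints location in the merged patch may differ):

```ini
# tox.ini (sketch): apply the upper-constraints file when pip installs
# each test environment, pinning transitive dependencies to the tested set.
[testenv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```

The UPPER_CONSTRAINTS_FILE environment variable lets a gate job (or a Depends-On test) substitute a proposed constraints file for the published one.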
[Yahoo-eng-team] [Bug 1583611] Re: Compiled .mo files are not included in keystone builds
Reviewed: https://review.openstack.org/318527 Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=d87a098cc24b3296c49e5fa16f1eda197f1547e6 Submitter: Jenkins Branch: master commit d87a098cc24b3296c49e5fa16f1eda197f1547e6 Author: Alfredo Moralejo Date: Thu May 19 11:12:09 2016 +0200 Add .mo files to MANIFEST.in Translations .mo files created with compile_catalog should be included in the build if they exist. Additionally, .pot files have been removed from source git in https://review.openstack.org/307589 so they can be ignored in MANIFEST.in. Closes-Bug: 1583611 Change-Id: Ia0c87360d34936d432eceb22210f03f8592c6fec ** Changed in: keystone Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1583611 Title: Compiled .mo files are not included in keystone builds Status in OpenStack Identity (keystone): Fix Released Bug description: When I create compiled .mo files using the command: # python setup.py compile_catalog the created .mo files are not included when I build with: # python setup.py build To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1583611/+subscriptions
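The fix amounts to a MANIFEST.in rule along these lines (a sketch of the pattern only; the exact lines in the merged change may differ):

```
# MANIFEST.in (sketch): ship compiled message catalogs when they exist,
# so "python setup.py build" / sdist picks up the output of compile_catalog.
recursive-include keystone/locale *.mo
```

MANIFEST.in rules are harmless when no matching files exist, which is why the .mo files are included "if they exist" rather than being a hard build requirement.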
[Yahoo-eng-team] [Bug 1586267] Re: python 3.4 support for urlparse
Reviewed: https://review.openstack.org/322116 Committed: https://git.openstack.org/cgit/openstack/murano-dashboard/commit/?id=368476d45b91ebaeedb1893b7fe7f375753a667f Submitter: Jenkins Branch: master commit 368476d45b91ebaeedb1893b7fe7f375753a667f Author: Yatin Kumbhare Date: Fri May 27 15:40:36 2016 +0530 python 3.4 support for urlparse Instead of using import urlparse Make use of import six.moves.urllib.parse as urlparse Change-Id: I55066b38d3de2ce58e6c473dd29aff64b2823663 Closes-Bug: #1586267 ** Changed in: murano Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1586267 Title: python 3.4 support for urlparse Status in OpenStack Dashboard (Horizon): In Progress Status in Murano: Fix Released Status in OpenStack Search (Searchlight): Fix Released Bug description: Instead of using import urlparse Make use of import six.moves.urllib.parse as urlparse for python3.4 support into searchlight. Similar work pointer from core project: https://blueprints.launchpad.net/nova/+spec/nova-python3-newton To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1586267/+subscriptions
[Yahoo-eng-team] [Bug 1586066] Re: handle oslo.log verbose deprecation
** Also affects: tacker Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1586066 Title: handle oslo.log verbose deprecation Status in neutron: Fix Released Status in tacker: New Status in OpenStack DBaaS (Trove): In Progress Bug description: In https://review.openstack.org/#/c/314573/ the verbose option was deleted. Time for projects to do the same. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1586066/+subscriptions
[Yahoo-eng-team] [Bug 1588071] [NEW] IPv6 filter action should be hidden if OPENSTACK_NEUTRON_NETWORK.enable_ipv6 is set to False
Public bug reported: In the Admin > Instances panel, one of the filter options is "IPv6 Address =". By overriding the init function, we can hide this option based on the value of OPENSTACK_NEUTRON_NETWORK.enable_ipv6 in local_settings. ** Affects: horizon Importance: Undecided Status: New ** Description changed: - In the Admin > Instances panel, one of the filter options is IPv6. By - overriding the init function, we can hide this option based on the value - of OPENSTACK_NEUTRON_NETWORK.enable_ipv6 in local_settings. + In the Admin > Instances panel, one of the filter options is "IPv6 + Address =". By overriding the init function, we can hide this option + based on the value of OPENSTACK_NEUTRON_NETWORK.enable_ipv6 in + local_settings. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1588071 Title: IPv6 filter action should be hidden if OPENSTACK_NEUTRON_NETWORK.enable_ipv6 is set to False Status in OpenStack Dashboard (Horizon): New Bug description: In the Admin > Instances panel, one of the filter options is "IPv6 Address =". By overriding the init function, we can hide this option based on the value of OPENSTACK_NEUTRON_NETWORK.enable_ipv6 in local_settings. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1588071/+subscriptions
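A minimal, self-contained sketch of the proposed approach, dropping the IPv6 choice in __init__ based on a settings flag. The InstanceFilterAction class, its filter_choices tuples, and the ENABLE_IPV6 flag here are illustrative stand-ins, not actual Horizon code:

```python
# Illustrative stand-in for a Horizon filter action (hypothetical, not
# real Horizon code). The idea from the report: remove the
# "IPv6 Address =" choice at __init__ time when IPv6 is disabled.
ENABLE_IPV6 = False  # stand-in for OPENSTACK_NEUTRON_NETWORK['enable_ipv6']

class InstanceFilterAction:
    filter_choices = [
        ("name", "Instance Name =", True),
        ("ip", "IPv4 Address =", True),
        ("ip6", "IPv6 Address =", True),
    ]

    def __init__(self):
        # Copy so the class attribute stays intact for other instances.
        self.filter_choices = list(self.filter_choices)
        if not ENABLE_IPV6:
            self.filter_choices = [
                c for c in self.filter_choices if c[0] != "ip6"
            ]

action = InstanceFilterAction()
print([c[0] for c in action.filter_choices])  # ['name', 'ip']
```

In Horizon itself the override would live in the panel's FilterAction subclass and read the flag via django.conf.settings rather than a module constant.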
[Yahoo-eng-team] [Bug 1588067] [NEW] Designate DNS driver for neutron fails for SSL based endpoints.
Public bug reported: Summary: I have a Mitaka-based deployment of neutron and designate. While testing the native integration of neutron with designate using this guide http://docs.openstack.org/mitaka/networking-guide/adv-config-dns.html I found that my DNS records are not getting created on port-update or any floating ip operations as expected. This is because the endpoints in the deployment are SSL based (https), and the Mitaka neutron code that gets the keystoneclient session before initiating the designate client has no option to set verify=True/False from neutron.conf or in the code itself https://github.com/openstack/neutron/blob/stable/mitaka/neutron/services/externaldns/drivers/designate/driver.py#L85 This makes it impossible to use the neutron integration with designate over https based endpoints unless the code is changed to: """ _SESSION = session.Session(verify=False) """ Description: Neutron has an option to use an external DNS driver in Mitaka, such as designate. For that, we need to set the designate options in the [designate] section of neutron.conf. For example: """ [designate] url = http://55.114.111.93:9001/v2 admin_auth_url = http://55.114.111.93:35357/v2.0 admin_username = neutron admin_password = x5G90074 admin_tenant_name = service allow_reverse_dns_lookup = True ipv4_ptr_zone_prefix_size = 24 ipv6_ptr_zone_prefix_size = 116 """ The above example works fine when your url and admin_auth_url are http based endpoints. 
The neutron code uses the options of the designate section to get a session from keystone and uses that session to initiate the designate admin client, as seen in the neutron code here https://github.com/openstack/neutron/blob/stable/mitaka/neutron/services/externaldns/drivers/designate/driver.py#L89 In the case where a deployment has https (SSL terminated) endpoints, meaning both url and admin_auth_url use https, the keystone session is created in the neutron code using _SESSION = session.Session() The default behavior of keystoneclient is that if a url uses https, it always sets verify=True and uses the CA file for verification; but neither an option to provide a CA file nor one to set verify=True/False exists in the neutron code for the designate driver, which makes it impossible to use the integration over SSL based endpoints. As an example of running the same Mitaka code from neutron :: """ >>> admin_auth = password.Password(auth_url="https://10.240.128.120:6100/v2.0",username="admin",password="admin",tenant_name="service") >>> _SESSION = session.Session() >>> admin_client = d_client.Client(session=_SESSION, auth=admin_auth) >>> admin_client.zones.list() keystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://10.240.128.120:6100/v2.0/tokens: [Errno 1] _ssl.c:523: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed """ After altering the session initiation to set verify=False: """ _SESSION = session.Session(verify=False) >>> admin_client = d_client.Client(session=_SESSION, auth=admin_auth) >>> admin_client.zones.list() [] """ Proposed fix: add an oslo opt in [designate] to let users specify insecure operation or set a CA file, and use that info from neutron.conf to initiate the keystone session before getting a designateclient. ** Affects: neutron Importance: Undecided Status: New ** Tags: dns mitaka-backport-potential ** Tags added: mitaka-backport-potential -- You received this bug notification because you are a member of 
Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588067 Title: Designate DNS driver for neutron fails for SSL based endpoints. Status in neutron: New Bug description: Summary: I have a Mitaka-based deployment of neutron and designate. While testing the native integration of neutron with designate using this guide http://docs.openstack.org/mitaka/networking-guide/adv-config-dns.html I found that my DNS records are not getting created on port-update or any floating ip operations as expected. This is because the endpoints in the deployment are SSL based (https), and the Mitaka neutron code that gets the keystoneclient session before initiating the designate client has no option to set verify=True/False from neutron.conf or in the code itself https://github.com/openstack/neutron/blob/stable/mitaka/neutron/services/externaldns/drivers/designate/driver.py#L85 This makes it impossible to use the neutron integration with designate over https based endpoints unless the code is changed to: """ _SESSION = session.Session(verify=False) """ Description: Neutron has an option to use an external DNS driver in Mitaka, such as designate. For that, we need to set the designate options in the [designate] section of neutron.conf. For example: """ [designate] url = http://55.114.111.93:9001/v2 admin_auth_url = http://55.114.111.93:35357/v2.0 admin_username = neutron
[Yahoo-eng-team] [Bug 1588003] Re: Skip host to guest CPU compatibility check for emulated (QEMU "TCG" mode) guests during live migration
stable/liberty is in security fix and critical regression fix only mode, which this doesn't qualify for. So we won't fix this upstream for stable/liberty, and will exclude the multinode non-voting job from stable/liberty changes since it won't work after the d-g change lands that depends on the nova change. ** Changed in: nova/liberty Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1588003 Title: Skip host to guest CPU compatibility check for emulated (QEMU "TCG" mode) guests during live migration Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) liberty series: Won't Fix Status in OpenStack Compute (nova) mitaka series: In Progress Bug description: The _compare_cpu() method of Nova's libvirt driver performs guest vCPU model to destination host CPU model comparison (during live migration) even in the case of emulated (QEMU "TCG" mode) guests, where the CPU instructions are emulated completely in software, and no hardware acceleration, such as KVM is involved. From nova/virt/libvirt/driver.py: [...] 5464 def _compare_cpu(self, guest_cpu, host_cpu_str, instance): 5465 """Check the host is compatible with the requested CPU [...][...] 5481 if CONF.libvirt.virt_type not in ['qemu', 'kvm']: 5482 return 5483 Skip the comparison for 'qemu' part above. Fix for master branch is here: https://review.openstack.org/#/c/323467/ -- libvirt: Skip CPU compatibility check for emulated guests This bug is for stable branch backports: Mitaka and Liberty. [Thanks: Daniel P. Berrange for the pointer.] Related context and references -- (a) This upstream discussion thread where using the custom CPU model ("gate64") is causing live migration CI jobs to fail. 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/095811.html -- "[gate] [nova] live migration, libvirt 1.3, and the gate" (b) Gate DevStack change to avoid setting the custom CPU model in nova.conf https://review.openstack.org/#/c/320925/4 -- don't set libvirt cpu_model To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1588003/+subscriptions
[Yahoo-eng-team] [Bug 1588064] [NEW] secret_key.py doesn't warn when reverting to insecure key generation
Public bug reported: secret_key.py is used to generate a 64-bit key used by Django; however when it cannot find the 'SystemRandom' extension to the 'random' package it reverts to a generator that is, by documentation, not secure cryptographically. Witness: https://docs.python.org/2/library/random.html Reverting to the generator without leaving a warning is a hazard from a system security perspective. We should log at WARN that there is a possible security issue in the configuration. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1588064 Title: secret_key.py doesn't warn when reverting to insecure key generation Status in OpenStack Dashboard (Horizon): New Bug description: secret_key.py is used to generate a 64-bit key used by Django; however when it cannot find the 'SystemRandom' extension to the 'random' package it reverts to a generator that is, by documentation, not secure cryptographically. Witness: https://docs.python.org/2/library/random.html Reverting to the generator without leaving a warning is a hazard from a system security perspective. We should log at WARN that there is a possible security issue in the configuration. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1588064/+subscriptions
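The requested warning is straightforward to add. A minimal sketch, using a hypothetical generate_key helper rather than Horizon's actual secret_key.py internals:

```python
import logging
import random
import string

LOG = logging.getLogger(__name__)


def generate_key(length=64):
    """Generate a random key, warning when the CSPRNG is unavailable."""
    if hasattr(random, 'SystemRandom'):
        choice = random.SystemRandom().choice
    else:
        # random's default Mersenne Twister generator is deterministic
        # and therefore not cryptographically secure (per the Python
        # docs), so make the degradation visible to operators.
        LOG.warning('SystemRandom is unavailable; falling back to an '
                    'insecure pseudo-random generator for the secret key.')
        choice = random.choice
    return ''.join(choice(string.ascii_letters + string.digits)
                   for _ in range(length))
```

The key point is that the fallback path now leaves a WARN-level record instead of silently degrading.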
[Yahoo-eng-team] [Bug 1581348] Re: Can't delete a v4 csnat port when there is a v6 router interface attached
Reviewed: https://review.openstack.org/315926 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=084173338e5e21ff37f042f69718237b94843665 Submitter: Jenkins Branch: master commit 084173338e5e21ff37f042f69718237b94843665 Author: Hong Hui Xiao Date: Fri May 13 07:12:57 2016 + DVR: Fix check multiprefix when delete ipv4 router interface Current code prevents deleting the router centralized snat port when there is an ipv6 subnet attached to the DVR. This is correct when deleting a v6 router centralized snat port, because multiple v6 subnets share one router port. But it is not correct when deleting a v4 router centralized snat port, because a v4 subnet doesn't share a router port with a v6 subnet. Deleting a v4 router centralized snat port should be allowed no matter whether there is a v6 subnet attached to the DVR. Change-Id: I2d06c8c79f9ff9a9300a94bcbbae13569e4d963e Closes-bug: #1581348 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1581348 Title: Can't delete a v4 csnat port when there is a v6 router interface attached Status in neutron: Fix Released Bug description: Reproduce: 1) I enable DVR in devstack. After installation, there is a DVR, an ipv4+ipv6 router gateway in DVR, an ipv4 router interface in DVR, and an ipv6 router interface in DVR. 2) I want to delete the v4 subnet, so I delete the ipv4 router interface. [fedora@normal-dvr devstack]$ neutron router-interface-delete router1 private-subnet Removed interface from router router1. 3) I try to delete the v4 subnet, but neutron server tells me that the subnet can't be deleted, because there are still port(s) being used. [fedora@normal-dvr devstack]$ neutron subnet-delete private-subnet Unable to complete operation on subnet d0282930-95ca-4f64-9ae9-8c22be9cb3ab: One or more ports have an IP allocation from this subnet. 
4) Checking the port list, I found the csnat port is still there. [fedora@normal-dvr devstack]$ neutron port-list
+--+--+---+-+
| id | name | mac_address | fixed_ips |
+--+--+---+-+
| bf042acf-40d5-4503-b62e-7389a6fc9bca | | fa:16:3e:47:a5:40 | {"subnet_id": "d0282930-95ca-4f64-9ae9-8c22be9cb3ab", "ip_address": "10.0.0.3"} |
+--+--+---+-+
5) But looking into the snat namespace, there is no such port there. I can't delete the subnet, because the port is there. I can't delete the port, because the port has the device owner network:router_centralized_snat. I can't even attach the subnet back to the DVR; neutron server will tell me: Router already has a port on subnet. This problem is not reproduced if there is no ipv6 subnet attached to the DVR. Expect: ipv4 can be used no matter whether there is an ipv6 subnet attached to the DVR. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1581348/+subscriptions
[Yahoo-eng-team] [Bug 1582739] Re: [DVR][L3 HA] Unable to ping 8.8.8.8 from VM without floating ip
Reviewed: https://review.openstack.org/317541 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=899b6cb0652ee54c08d130ccd7edc3091b7d930b Submitter: Jenkins Branch: master commit 899b6cb0652ee54c08d130ccd7edc3091b7d930b Author: Ann Kamyshnikova Date: Tue May 17 17:25:00 2016 +0300 Pass ha_router_port flag for _snat_router_interfaces ports Currently, the router_centralized_snat port can be bound to a host where the l3-agent is in standby state (L3 HA + DVR case). As a result, a VM without a floating ip is unable to reach the external network. This change passes the ha_router_port flag to _ensure_host_set_on_port when called for _snat_router_interfaces ports. Note: this issue is intermittent; without the changes in l3_rpc.py the unit test does not fail every time. Co-Authored-By: Oleg Bondarev Closes-bug: #1582739 Change-Id: I74bad578361ed7eac8cc6c740b06b66ab1530cd5 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1582739 Title: [DVR][L3 HA] Unable to ping 8.8.8.8 from VM without floating ip Status in neutron: Fix Released Bug description: On an environment with L3 HA and DVR enabled, all pings from a VM without a floating ip to 8.8.8.8 were lost. This happened because the router_centralized_snat port was bound to the wrong host, where the l3 agent was in standby state. 
root@node-4:~# neutron l3-agent-list-hosting-router a1de4263-08af-48cb-a9b5-400ebcd3ac1a
+--+---++---+--+
| id | host | admin_state_up | alive | ha_state |
+--+---++---+--+
| ae474016-48ee-4121-88b7-71b65f2a4244 | node-4.domain.tld | True | :-) | standby |
| b93959c3-51da-45ae-8af7-aa90671953b4 | node-5.domain.tld | True | :-) | active |
| ca082545-5c1a-4eb5-882b-b35bf76f9350 | node-2.domain.tld | True | :-) | standby |
+--+---++---+--+
root@node-4:~# neutron port-show cd1e7af6-0aa7-444b-b0b3-37d04325655f
+---+---+
| Field | Value |
+---+---+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | node-2.domain.tld |
| binding:profile | {} |
| binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} |
| binding:vif_type | ovs |
| binding:vnic_type | normal |
| created_at | 2016-05-17T08:55:58 |
| description | |
| device_id | a1de4263-08af-48cb-a9b5-400ebcd3ac1a |
| device_owner | network:router_centralized_snat |
| dns_name | |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "fe4e6901-f5ce-41b1-a7ce-582d70fe310e", "ip_address": "10.100.0.4"} |
| id | cd1e7af6-0aa7-444b-b0b3-37d04325655f |
| mac_address | fa:16:3e:f4:1d:6a |
| name | |
| network_id | 7e9e27d7-d331-4b09-8b28-04a0d5173af7 |
| port_security_enabled | False |
[Yahoo-eng-team] [Bug 1588003] Re: Skip host to guest CPU compatibility check for emulated (QEMU "TCG" mode) guests during live migration
** Also affects: nova/liberty Importance: Undecided Status: New ** Also affects: nova/mitaka Importance: Undecided Status: New ** Changed in: nova Status: New => In Progress ** Changed in: nova/mitaka Status: New => Confirmed ** Changed in: nova/mitaka Importance: Undecided => High ** Changed in: nova/liberty Status: New => Confirmed ** Changed in: nova/liberty Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1588003 Title: Skip host to guest CPU compatibility check for emulated (QEMU "TCG" mode) guests during live migration Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) liberty series: Confirmed Status in OpenStack Compute (nova) mitaka series: In Progress Bug description: The _compare_cpu() method of Nova's libvirt driver performs guest vCPU model to destination host CPU model comparison (during live migration) even in the case of emulated (QEMU "TCG" mode) guests, where the CPU instructions are emulated completely in software, and no hardware acceleration, such as KVM is involved. From nova/virt/libvirt/driver.py: [...] 5464 def _compare_cpu(self, guest_cpu, host_cpu_str, instance): 5465 """Check the host is compatible with the requested CPU [...][...] 5481 if CONF.libvirt.virt_type not in ['qemu', 'kvm']: 5482 return 5483 Skip the comparison for 'qemu' part above. Fix for master branch is here: https://review.openstack.org/#/c/323467/ -- libvirt: Skip CPU compatibility check for emulated guests This bug is for stable branch backports: Mitaka and Liberty. [Thanks: Daniel P. Berrange for the pointer.] Related context and references -- (a) This upstream discussion thread where using the custom CPU model ("gate64") is causing live migration CI jobs to fail. 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/095811.html -- "[gate] [nova] live migration, libvirt 1.3, and the gate" (b) Gate DevStack change to avoid setting the custom CPU model in nova.conf https://review.openstack.org/#/c/320925/4 -- don't set libvirt cpu_model To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1588003/+subscriptions
[Yahoo-eng-team] [Bug 1588042] [NEW] UnboundLocalError with native openflow agent on switch timeout
Public bug reported: When there is a cached datapath_id and the openflow switch doesn't respond in time (causing a RuntimeError), an UnboundLocalError is raised:
2016-05-25 14:57:28 ERR ryu.lib.hub [req-6efe2697-b494-4c54-97dc-4d8d1f43cab6 - - - - -] hub: uncaught exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ryu/lib/hub.py", line 52, in _launch
    func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py", line 35, in agent_main_wrapper
    ovs_agent.main(bridge_classes)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2118, in main
    agent.daemon_loop()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2041, in daemon_loop
    self.rpc_loop(polling_manager=pm)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1910, in rpc_loop
    ovs_status = self.check_ovs_status()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1718, in check_ovs_status
    status = self.int_br.check_canary_table()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table
    flows = self.dump_flows(constants.CANARY_TABLE)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 125, in dump_flows
    (dp, ofp, ofpp) = self._get_dp()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py", line 61, in _get_dp
    if new_dpid_str != dpid_str:
UnboundLocalError: local variable 'dpid_str' referenced before assignment
** Affects: neutron Importance: Undecided Assignee: Inessa Vasilevskaya (ivasilevskaya) Status: New ** Changed in: neutron Assignee: 
(unassigned) => Inessa Vasilevskaya (ivasilevskaya) ** Summary changed: - UnboundLocalError with native openflow agent when switch timeout + UnboundLocalError with native openflow agent on switch timeout -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1588042 Title: UnboundLocalError with native openflow agent on switch timeout Status in neutron: New Bug description: When there is a cached datapath_id and the openflow switch doesn't respond in time (causing a RuntimeError), an UnboundLocalError is raised:
2016-05-25 14:57:28 ERR ryu.lib.hub [req-6efe2697-b494-4c54-97dc-4d8d1f43cab6 - - - - -] hub: uncaught exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ryu/lib/hub.py", line 52, in _launch
    func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py", line 35, in agent_main_wrapper
    ovs_agent.main(bridge_classes)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2118, in main
    agent.daemon_loop()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2041, in daemon_loop
    self.rpc_loop(polling_manager=pm)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1910, in rpc_loop
    ovs_status = self.check_ovs_status()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1718, in check_ovs_status
    status = self.int_br.check_canary_table()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table
    flows = self.dump_flows(constants.CANARY_TABLE)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 125, in dump_flows
    (dp, ofp, ofpp) = self._get_dp()
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py", line 61, in _get_dp
    if new_dpid_str != dpid_str:
UnboundLocalError: local variable 'dpid_str' referenced before assignment
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1588042/+subscriptions
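The traceback's root cause fits a common Python pattern: a variable assigned only inside a try block is referenced in the except handler. A distilled reproduction and the obvious fix (names follow the traceback; the recovery logic here is illustrative, not the actual _get_dp implementation):

```python
def get_dp_buggy(fetch_dpid):
    # dpid_str is only bound if fetch_dpid() succeeds, so the except
    # path raises UnboundLocalError instead of running its comparison.
    try:
        dpid_str = fetch_dpid()          # raises on switch timeout
        return dpid_str
    except RuntimeError:
        new_dpid_str = 'recovered-dpid'
        if new_dpid_str != dpid_str:     # UnboundLocalError here
            raise


def get_dp_fixed(fetch_dpid):
    dpid_str = None                      # fix: bind before the try block
    try:
        dpid_str = fetch_dpid()
        return dpid_str
    except RuntimeError:
        new_dpid_str = 'recovered-dpid'
        if new_dpid_str != dpid_str:
            return new_dpid_str
        raise
```

Initializing dpid_str before the try block (or restructuring so the except handler never touches it) turns the masking UnboundLocalError back into the intended timeout handling.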
[Yahoo-eng-team] [Bug 1588039] [NEW] Security groups need to be created in non-English characters
Public bug reported: There are many localizations that do not use standard ASCII, and the code currently restricts individuals from creating Security Groups with names containing non-ASCII characters. You can see in https://review.openstack.org/#/c/54007/ and the associated launchpad tickets that the current validation was only loosening overly-restrictive validation; I believe we need to restrict them even less. Unless there are specific reasons that the names need to be ASCII, we should loosen this restriction. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1588039 Title: Security groups need to be created in non-English characters Status in OpenStack Dashboard (Horizon): New Bug description: There are many localizations that do not use standard ASCII, and the code currently restricts individuals from creating Security Groups with names containing non-ASCII characters. You can see in https://review.openstack.org/#/c/54007/ and the associated launchpad tickets that the current validation was only loosening overly-restrictive validation; I believe we need to restrict them even less. Unless there are specific reasons that the names need to be ASCII, we should loosen this restriction. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1588039/+subscriptions
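If the restriction is loosened, the validation could look like the following sketch: a hypothetical pattern that merely excludes control characters rather than enforcing ASCII. Horizon's actual field definition differs; this only illustrates the direction the report argues for.

```python
import re

# Accept any printable characters, including non-ASCII scripts, while
# still rejecting empty names and embedded newlines. This is looser
# than an ASCII-only pattern. (Illustrative; not Horizon's real regex.)
SECGROUP_NAME_RE = re.compile(r'^[^\r\n]+$')


def is_valid_secgroup_name(name):
    """Return True when the name is non-empty and has no line breaks."""
    return bool(SECGROUP_NAME_RE.match(name))
```

On Python 3, `str` patterns match Unicode by default, so names in Japanese, Cyrillic, etc. pass without any extra flags.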
[Yahoo-eng-team] [Bug 1586082] Re: vpnaas: "Failed to enable vpn process on router " due to wrong rundir
Reviewed: https://review.openstack.org/321601 Committed: https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=894af39191143cc1b1795a5e3495480fab3f68ab Submitter: Jenkins Branch: master commit 894af39191143cc1b1795a5e3495480fab3f68ab Author: Thomas Bechtold Date: Thu May 26 16:57:13 2016 +0200 Use strongswan piddir as bind mount dir Instead of hardcoding /var/run as the bind mount dir, use the directory strongswan is using for creating pid files and sockets. The directory can be detected via the "ipsec --piddir" command. Co-Authored-By: Ralf Haferkamp Closes-Bug: #1586082 Change-Id: I1d78f654945329738b06034e81423e8959e39085 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1586082 Title: vpnaas: "Failed to enable vpn process on router " due to wrong rundir Status in neutron: Fix Released Bug description: When using vpnaas with strongswan 5.1, and strongswan uses as its "piddir" (see "ipsec --piddir") something other than "/var/run", the error is: 2016-05-26 15:22:22.541 29695 DEBUG neutron.agent.linux.utils [req-0a2127cd-125e-4d4b-b6db-04085baf5602 74cdd700184948c2b7fad2caa003ec2f a14c2b3f29d444db8a99176bac54b26b - - -] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-2691a9d2-fb5e-4d86-9023-ab3681bda8d3', 'neutron-vpn-netns-wrapper', '--mount_paths=/etc:/var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/etc,/var/run:/var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/var/run', '--cmd=ipsec,up,e4c7ea00-db44-4387-9417-399e15ef410c'] create_process /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:85 2016-05-26 15:22:22.899 29695 ERROR neutron.agent.linux.utils [req-0a2127cd-125e-4d4b-b6db-04085baf5602 74cdd700184948c2b7fad2caa003ec2f a14c2b3f29d444db8a99176bac54b26b - - -] Command: ['sudo', 'neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-2691a9d2-fb5e-4d86-9023-ab3681bda8d3', 'neutron-vpn-netns-wrapper', '--mount_paths=/etc:/var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/etc,/var/run:/var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/var/run', '--cmd=ipsec,up,e4c7ea00-db44-4387-9417-399e15ef410c'] Exit code: 7 Stdin: Stdout: Command: ['mount', '--bind', '/var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/etc', '/etc'] Exit code: 0 Stdout: Stderr: Command: ['mount', '--bind', '/var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/var/run', '/var/run'] Exit code: 0 Stdout: Stderr: Command: ['ipsec', 'up', 'e4c7ea00-db44-4387-9417-399e15ef410c'] Exit code: 7 Stdout: Stderr: Stderr: 2016-05-26 15:22:22.856 31074 INFO neutron.common.config [-] Logging enabled! 2016-05-26 15:22:22.856 31074 INFO neutron.common.config [-] /usr/bin/neutron-vpn-netns-wrapper version 7.0.5.dev91 2016-05-26 15:22:22.863 31074 INFO neutron_vpnaas.services.vpn.common.netns_wrapper [-] /var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/etc has been bind-mounted in /etc 2016-05-26 15:22:22.866 31074 INFO neutron_vpnaas.services.vpn.common.netns_wrapper [-] /var/lib/neutron/ipsec/2691a9d2-fb5e-4d86-9023-ab3681bda8d3/var/run has been bind-mounted in /var/run 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec [req-0a2127cd-125e-4d4b-b6db-04085baf5602 74cdd700184948c2b7fad2caa003ec2f a14c2b3f29d444db8a99176bac54b26b - - -] Failed to enable vpn process on router 2691a9d2-fb5e-4d86-9023-ab3681bda8d3 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec Traceback (most recent call last): 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec File "/usr/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 260, in enable 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec 
self.start() 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec File "/usr/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/strongswan_ipsec.py", line 166, in start 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec self._execute([self.binary, 'up', ipsec_site_conn['id']]) 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec File "/usr/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/strongswan_ipsec.py", line 107, in _execute 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec extra_ok_codes=extra_ok_codes) 2016-05-26 15:22:22.900 29695 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec File "/usr/lib/python2.7/site-packages/neu
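The committed fix replaces the hardcoded /var/run bind mount with the directory strongswan itself reports. A simplified, standalone sketch of that detection (the real change lives in neutron-vpnaas; this helper is illustrative):

```python
import shutil
import subprocess


def strongswan_piddir(default='/var/run'):
    """Ask strongswan where it keeps pid files and sockets.

    Falls back to the traditional /var/run when the ipsec binary is
    missing or produces no output.
    """
    if shutil.which('ipsec') is None:
        return default
    result = subprocess.run(['ipsec', '--piddir'],
                            capture_output=True, text=True)
    piddir = result.stdout.strip()
    return piddir or default


# The agent would then bind-mount this directory into the router
# namespace instead of hardcoding /var/run in --mount_paths.
```

Querying the daemon rather than hardcoding the path keeps the wrapper working across distributions that build strongswan with different run directories.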
[Yahoo-eng-team] [Bug 1588017] [NEW] network_data.json shouldn't include internal device name and type
Public bug reported: Nova currently injects in network_data.json the internal device name in the link id field and the device "type" in the link type field. This is wrong and could mislead users about how they should use and interpret those values. The nova-specs for network_data.json [1] specifies that: * the link id should be a "Generic, generated ID". * the link type should be "vif" if it's a virtual interface and "phy" for a physical interface. The values currently used come from internal implementation details and shouldn't be passed on to the end-user. The implementation should be updated to respect the spirit of the spec. This will reduce confusion for initialization agent developers. [1] http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1588017 Title: network_data.json shouldn't include internal device name and type Status in OpenStack Compute (nova): New Bug description: Nova currently injects in network_data.json the internal device name in the link id field and the device "type" in the link type field. This is wrong and could mislead users about how they should use and interpret those values. The nova-specs for network_data.json [1] specifies that: * the link id should be a "Generic, generated ID". * the link type should be "vif" if it's a virtual interface and "phy" for a physical interface. The values currently used come from internal implementation details and shouldn't be passed on to the end-user. The implementation should be updated to respect the spirit of the spec. This will reduce confusion for initialization agent developers. 
[1] http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1588017/+subscriptions
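To make the gap concrete, here are two hypothetical network_data.json link entries; the values are invented for illustration, not captured from a real instance:

```python
# What the report says nova emits today: internal details leak through.
link_today = {
    "id": "tap3ec3d0e7-2c",   # hypervisor-side device name (internal)
    "type": "ovs",            # backend VIF type (internal)
}

# What the spec intends: stable, generic values.
link_per_spec = {
    "id": "interface0",       # generic, generated ID
    "type": "vif",            # "vif" for virtual, "phy" for physical
}
```

An initialization agent written against the spec would only ever branch on "vif" versus "phy", so backend-specific strings like "ovs" force agents to special-case values that were never meant to be public.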
[Yahoo-eng-team] [Bug 1588014] [NEW] cloud-init doesn't always complete on azure machines
Public bug reported: Sometimes (like 1 in 100 times), my Juju/Azure deployment hangs with a machine stuck in 'pending'. On these machines, cloud-init-output.log says:

...
Setting up qemu-utils (2.0.0+dfsg-2ubuntu1.24) ...
Setting up cloud-image-utils (0.27-0ubuntu9.2) ...
Setting up cloud-utils (0.27-0ubuntu9.2) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...

And cloud-init.log says:

...
Jun 1 16:44:11 machine-3 [CLOUDINIT] cloud-init[DEBUG]: Ran 19 modules with 0 failures
Jun 1 16:44:11 machine-3 [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
Jun 1 16:44:11 machine-3 [CLOUDINIT] util.py[DEBUG]: Read 12 bytes from /proc/uptime
Jun 1 16:44:11 machine-3 [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'modules' took 48.449 seconds (48.45)

This seems to be related to this failure in waagent.log:

root@machine-3:/var/log# cat waagent.log
2016/06/01 16:43:07 Azure Linux Agent Version: WALinuxAgent-2.0.16
2016/06/01 16:43:07 Linux Distribution Detected : Ubuntu
2016/06/01 16:43:07 Module /lib/modules/3.19.0-59-generic/kernel/drivers/ata/ata_piix.ko driver for ATAPI CD-ROM does not exist.
2016/06/01 16:43:07 mount: block device /dev/sr0 is write-protected, mounting read-only
2016/06/01 16:43:07 mount: you didn't specify a filesystem type for /dev/sr0
2016/06/01 16:43:07 I will try type udf
2016/06/01 16:43:07 mount: you didn't specify a filesystem type for /dev/sr0
2016/06/01 16:43:07 I will try type udf
2016/06/01 16:43:07 /dev/sr0 on /mnt/cdrom/secure type udf (ro)
2016/06/01 16:43:07 mount succeeded on attempt #1
2016/06/01 16:43:07 VMM Init script not found. Provisioning for Azure
2016/06/01 16:43:07 IPv4 address: 10.0.0.6
2016/06/01 16:43:07 MAC address: 00:0D:3A:71:79:7D
2016/06/01 16:43:07 Probing for Azure environment.
2016/06/01 16:43:07 DoDhcpWork: Setting socket.timeout=10, entering recv
2016/06/01 16:43:07 Set default gateway: 10.0.0.1
2016/06/01 16:43:07 Discovered Azure endpoint: 168.63.129.16
2016/06/01 16:43:07 Fabric preferred wire protocol version: 2015-04-05
2016/06/01 16:43:07 Negotiated wire protocol version: 2012-11-30
2016/06/01 16:43:07 SetBlockDeviceTimeout: Update the device sda with timeout 300
2016/06/01 16:43:07 SetBlockDeviceTimeout: Update the device sdb with timeout 300
2016/06/01 16:43:08 Retrieved GoalState from Azure Fabric.
2016/06/01 16:43:08 ExpectedState: Started
2016/06/01 16:43:08 ContainerId: adea2037-ec24-4752-b6f0-0b53f7dee188
2016/06/01 16:43:08 RoleInstanceId: 2d0f948c-f310-4290-b3dc-bde08efeb3d9._machine-3
2016/06/01 16:43:08 Public cert with thumbprint: 0C420F72F3884BD2CE799D50B9C1AFDB1B0C1072 was retrieved.
2016/06/01 16:43:18 ERROR:Can't find host key: /etc/ssh/ssh_host_rsa_key.pub
2016/06/01 16:43:22 Finished processing ExtensionsConfig.xml
2016/06/01 16:43:22 Successfully reported handler status
2016/06/01 16:55:24 ERROR:Socket IOError The read operation timed out, args:('The read operation timed out',)
2016/06/01 16:55:24 ERROR:Retry=0
2016/06/01 16:55:24 ERROR:HTTP Req: HEAD https://p437b4r90zrnkr2vcf51fm9q.blob.core.windows.net/osvhds/machine-3.988b8739-235f-4b55-a0ce-9e896a7eb595.status?sv=2014-02-14&sr=b&sig=tEiFGWxfACZ8Y2%2FEF%2Bl6ocG9BdhSwgNRQj%2Btpr3JKHY%3D&se=-01-01T00%3A00%3A00Z&sp=rw
2016/06/01 16:55:24 ERROR:HTTP Req: Data=None
2016/06/01 16:55:24 ERROR:HTTP Req: Header={'x-ms-version': '2014-02-14', 'x-ms-date': '2016-06-01T16:55:14Z'}
2016/06/01 16:55:24 ERROR:HTTP Err: response is empty.

I can wget the file that it's complaining about, though perhaps it's trying to get it earlier when it doesn't exist (or is empty).
Contents of that file look like this:

{"version":"1.0","timestampUTC":"2016-06-01T17:40:34Z","aggregateStatus":{"guestAgentStatus":{"version":"WALinuxAgent-2.0.16","status":"Ready","formattedMessage":{"lang":"en-US","message":"GuestAgent is running and accepting new configurations."}},"handlerAggregateStatus":[]}}

I've let machines sit like this for hours, but they don't seem to unstick themselves. My quick solution is to ssh onto a 'pending' machine and reboot it. The subsequent boot triggers cloud-init and it finishes whatever it's supposed to do. Juju then takes over and all is good. Is a waagent failure (if that is indeed the problem) something that cloud-init can watch for and handle? Logs coming shortly...

** Affects: cloud-init Importance: Undecided Status: New
[Yahoo-eng-team] [Bug 1588007] [NEW] The created_at and updated_at filters don't work with a postgresql db
Public bug reported: When listing images, several optional parameters can be used to filter the list of images retrieved by the API; these parameters can be found in the API documentation (http://developer.openstack.org/api-ref-image-v2.html#listImages-v2). Among those filters there are two that take timestamps in ISO 8601 DateTime notation: the created_at and updated_at parameters.

These created_at and updated_at parameters can be used in the following two ways:

- Without specifying an operator (which defaults to an eq operator), e.g. /v2/images?created_at=2016-04-18T21:38:55Z
  Note: This is currently not working, see bug https://bugs.launchpad.net/glance/+bug/1584415

- With an operator (one of: eq, neq, lt, lte, gt, gte), e.g. /v2/images?created_at=lte:2016-04-18T21:38:55Z

This works as documented when run in an OpenStack environment that has the default MySQL database backend, but if the environment has a PostgreSQL database the created_at and updated_at filters always return empty lists, even when the filter criteria match existing images.

Steps to reproduce:

1) Install a devstack environment with the default DB backend (MySQL).
2) Try listing the images using the created_at or updated_at parameters, using an operator and the DateTime stamp of an existing image, like this: /v2/images?created_at=lte:2016-04-18T21:38:55Z
   Expected results: the Glance API should return a list of 1 or more images that match the filter criteria.
3) Install a devstack environment with the PostgreSQL DB backend (https://github.com/openstack-dev/devstack/blob/master/doc/source/configuration.rst#database-backend).
4) Try listing the images again using the same API call used in step 2 against the new environment.
   Expected results: the Glance API should work the same as with the MySQL DB, returning a list of 1 or more images that match the filter criteria.
Actual results: an empty list is always returned, even when the filter matches existing images.

Extra note: You can also reproduce this bug using the parameters without an operator, but you would need the patch https://review.openstack.org/#/c/319682/ that is currently in review.

** Affects: glance Importance: Undecided Status: New
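For context, the operator-prefixed filter values discussed in this report follow a simple `op:value` convention in the query string. Here is a minimal sketch of parsing such a value (illustrative only, not Glance's actual implementation; the function name and the exact operator set are assumptions):

```python
from datetime import datetime, timezone

# Comparison operators assumed from the report's list.
OPERATORS = {'eq', 'neq', 'lt', 'lte', 'gt', 'gte'}

def parse_datetime_filter(value):
    """Split an optional 'op:' prefix from an ISO 8601 timestamp.

    A value without a recognized prefix defaults to an equality
    comparison, matching the behavior described in the report.
    """
    op, sep, rest = value.partition(':')
    if sep and op in OPERATORS:
        raw = rest
    else:
        op, raw = 'eq', value
    # The report uses the 'YYYY-MM-DDTHH:MM:SSZ' notation.
    ts = datetime.strptime(raw, '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)
    return op, ts

print(parse_datetime_filter('lte:2016-04-18T21:38:55Z'))
```

Note that the timestamp itself contains colons, so only a leading token that is a known operator can be treated as a prefix — which is why the bare form falls through to `eq`.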
[Yahoo-eng-team] [Bug 1588003] [NEW] Skip host to guest CPU compatibility check for emulated (QEMU "TCG" mode) guests during live migration
Public bug reported: The _compare_cpu() method of Nova's libvirt driver performs a guest vCPU model to destination host CPU model comparison (during live migration) even in the case of emulated (QEMU "TCG" mode) guests, where the CPU instructions are emulated completely in software and no hardware acceleration, such as KVM, is involved.

From nova/virt/libvirt/driver.py:

    [...]
    5464     def _compare_cpu(self, guest_cpu, host_cpu_str, instance):
    5465         """Check the host is compatible with the requested CPU
    [...]
    5481         if CONF.libvirt.virt_type not in ['qemu', 'kvm']:
    5482             return
    5483

The fix is to skip the comparison for the 'qemu' case above. Fix for the master branch is here: https://review.openstack.org/#/c/323467/ -- libvirt: Skip CPU compatibility check for emulated guests

This bug is for the stable branch backports: Mitaka and Liberty. [Thanks: Daniel P. Berrange for the pointer.]

Related context and references:

(a) The upstream discussion thread where using the custom CPU model ("gate64") is causing live migration CI jobs to fail:
    http://lists.openstack.org/pipermail/openstack-dev/2016-May/095811.html -- "[gate] [nova] live migration, libvirt 1.3, and the gate"

(b) Gate DevStack change to avoid setting the custom CPU model in nova.conf:
    https://review.openstack.org/#/c/320925/4 -- don't set libvirt cpu_model

** Affects: nova Importance: High Assignee: Kashyap Chamarthy (kashyapc) Status: New
** Changed in: nova Assignee: (unassigned) => Kashyap Chamarthy (kashyapc)
** Changed in: nova Importance: Undecided => High
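The shape of the fix described in this report — perform the CPU comparison only when hardware virtualization (KVM) is in use, and skip it for pure-emulation guests — can be sketched as follows (a simplified illustration of the fixed behavior, not Nova's actual code; all names here are assumptions):

```python
def check_cpu_for_live_migration(virt_type, guest_cpu, host_cpu, compare_fn):
    """Only a KVM guest depends on host CPU features.

    A QEMU 'TCG' guest emulates every instruction in software, so the
    destination host's CPU model cannot invalidate the guest's vCPU.
    compare_fn stands in for libvirt's actual CPU comparison call.
    """
    if virt_type != 'kvm':
        return 'skipped'  # emulated guest: no compatibility constraint
    return compare_fn(guest_cpu, host_cpu)

# TCG guest: the comparison is never invoked.
print(check_cpu_for_live_migration('qemu', 'gate64', 'Haswell',
                                   lambda g, h: 'compatible'))
```

The pre-fix code quoted above returns early only for virt_type values outside ['qemu', 'kvm'], i.e. it still compared CPUs for 'qemu' guests; the sketch shows the corrected condition.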
[Yahoo-eng-team] [Bug 1587992] [NEW] Services panels are not plugable
Public bug reported: Services panels are currently hard-coded to a list of projects. Other projects that have dashboard plugins also have similar service heartbeat data and should be able to show it to admins.

** Affects: horizon Importance: Undecided Status: New
[Yahoo-eng-team] [Bug 1587990] [NEW] Quotas panels are not plugable
Public bug reported: Currently the quotas panels are hard-coded to a set of projects, and dashboard plugins cannot set quotas.

** Affects: horizon Importance: Undecided Status: New
[Yahoo-eng-team] [Bug 1136936] Re: growpart and cloud-utils should support growing mounted filesystem
** Changed in: cloud-utils Status: Fix Committed => Fix Released

https://bugs.launchpad.net/bugs/1136936

Title: growpart and cloud-utils should support growing mounted filesystem

Status in cloud-init: Fix Released
Status in cloud-utils: Fix Released
Status in cloud-init package in Ubuntu: Fix Released
Status in cloud-utils package in Ubuntu: Fix Released

Bug description: Under bug 1096999, we added support to util-linux 'partx' to update the partition table information of a disk with a mounted partition. This takes advantage of a kernel feature in 3.8.0. This actually removes the necessity of cloud-initramfs-growpart: we can now put that function into growpart and cloud-init instead. By doing so, we can make it possible to disable the resize from user-data, and not require ramdisk code.

Links:
* util-linux upstream: http://git.kernel.org/?p=utils/util-linux/util-linux.git;a=commitdiff;h=3b905b794e93609af7e42459d32b27e7c18ce02e
* kernel upstream: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=c83f6bf98dc1f1a194118b3830706cebbebda8c4

ProblemType: Bug
DistroRelease: Ubuntu 13.04
Package: cloud-utils 0.26-0ubuntu2
ProcVersionSignature: Ubuntu 3.5.0-21.32-generic 3.5.7.1
Uname: Linux 3.5.0-21-generic x86_64
ApportVersion: 2.8-0ubuntu4
Architecture: amd64
Date: Thu Feb 28 22:27:54 2013
EcryptfsInUse: Yes
InstallationDate: Installed on 2011-10-19 (498 days ago)
InstallationMedia: Ubuntu 11.10 "Oneiric Ocelot" - Release amd64 (20111012)
MarkForUpload: True
PackageArchitecture: all
ProcEnviron: TERM=xterm PATH=(custom, no user) XDG_RUNTIME_DIR= LANG=en_US.UTF-8 SHELL=/bin/bash
SourcePackage: cloud-utils
UpgradeStatus: Upgraded to raring on 2013-01-07 (52 days ago)
[Yahoo-eng-team] [Bug 1587985] [NEW] Glance v2 allows to set locations if image has saving status
Public bug reported: Currently, if 'show_multiple_locations' is activated, a user can set a custom location on an image even if it has 'saving' or 'deactivated' status. Example: http://paste.openstack.org/show/506998/ In v1 this request returns 400, but IMHO 409 is the more appropriate response code.

** Affects: glance Importance: Undecided Status: New
[Yahoo-eng-team] [Bug 1567549] Re: SR-IOV VF passthrough does not properly update status of parent PF upon freeing VF
Reviewed: https://review.openstack.org/303012
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b9858b2981fbdc52c0721d29cc15abce93371544
Submitter: Jenkins
Branch: master

commit b9858b2981fbdc52c0721d29cc15abce93371544
Author: Nikola Dipanov
Date: Thu Apr 7 18:53:52 2016 +0100

    pci: Make sure PF is 'available' when last VF is freed

    We were adding the PF back to the pools but not setting the status
    properly. The reason this was not caught by tests is that the tests
    were broken as well (we were using assertTrue instead of assertEqual,
    which always passes).

    Change-Id: I62d4a810d8d7c4453865db0290029c269225c139
    Closes-bug: #1567549

** Changed in: nova Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1567549

Title: SR-IOV VF passthrough does not properly update status of parent PF upon freeing VF

Status in OpenStack Compute (nova): Fix Released

Bug description: Assigning an SR-IOV VF device to an instance when PFs are whitelisted too correctly marks the PF as unavailable when one of its VFs gets assigned. However, when we delete the instance, the PF is not marked as available.
Steps to reproduce:

1) Whitelist PFs and VFs in nova.conf (as explained in the docs), for example:

   pci_passthrough_whitelist = [{"product_id":"1520", "vendor_id":"8086", "physical_network":"phynet"}, {"product_id":"1521", "vendor_id":"8086", "physical_network":"phynet"}]  # Both PFs and VFs are whitelisted

2) Add an alias to assign a VF:

   pci_alias = {"name": "vf", "device_type": "type-VF"}

3) Set up a flavor with an alias extra_spec:

   $ nova flavor-key 2 set "pci_passthrough:alias"="vf:1"

4) Boot an instance with the said flavor and observe a VF being set to 'allocated' and a PF being set to 'unavailable':

   select * from pci_devices where deleted=0;

5) Delete the instance from step 4 and observe that the VF has been made available but the PF is still 'unavailable'. Both should be back to 'available' if this was the only VF used.
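The eventual fix amounts to resetting the parent PF's status once its last allocated VF is freed. A simplified sketch of that bookkeeping (illustrative only; the class and field names here are assumptions, not Nova's actual PciDevice objects):

```python
class PciDevice:
    def __init__(self, address, dev_type, parent=None):
        self.address = address
        self.dev_type = dev_type   # 'type-PF' or 'type-VF'
        self.parent = parent       # parent PF for a VF
        self.status = 'available'

def free_vf(vf, all_devices):
    """Free a VF; if it was the last allocated sibling, make the
    parent PF schedulable again (the step the bug was missing)."""
    vf.status = 'available'
    pf = vf.parent
    siblings = [d for d in all_devices
                if d.dev_type == 'type-VF' and d.parent is pf]
    if all(d.status == 'available' for d in siblings):
        pf.status = 'available'
```

With one PF and two allocated VFs, freeing the first VF leaves the PF 'unavailable'; freeing the second flips the PF back to 'available', matching the behavior the commit message describes.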
[Yahoo-eng-team] [Bug 1587973] [NEW] DhcpAgentSchedulerDbMixin.get_dhcp_agents_hosting_networks fails for many networks
Public bug reported: DhcpAgentSchedulerDbMixin.get_dhcp_agents_hosting_networks takes network_ids as a parameter but fails when more than one network_id is passed in, because of incorrect usage of the SQLAlchemy IN clause.

** Affects: neutron Importance: Undecided Assignee: Brandon Logan (brandon-logan) Status: New
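The general shape of the problem: an IN clause needs one placeholder per id (or, with SQLAlchemy, `column.in_(network_ids)`), not the whole list bound as a single value. A standalone sketch using stdlib sqlite3 for illustration (table and column names are invented, not Neutron's schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE bindings (network_id TEXT, agent TEXT)')
conn.executemany('INSERT INTO bindings VALUES (?, ?)',
                 [('net-1', 'agent-a'), ('net-2', 'agent-b'), ('net-3', 'agent-c')])

def agents_hosting_networks(conn, network_ids):
    # Expand one placeholder per id; the reported bug amounts to
    # mis-building this clause so multi-id queries fail. With an ORM
    # such as SQLAlchemy the equivalent is column.in_(network_ids).
    placeholders = ', '.join('?' for _ in network_ids)
    sql = ('SELECT agent FROM bindings WHERE network_id IN (%s) '
           'ORDER BY agent' % placeholders)
    return [row[0] for row in conn.execute(sql, network_ids)]

print(agents_hosting_networks(conn, ['net-1', 'net-3']))
```

Passing a single-element list and a multi-element list should both work; the bug report says only the single-network case did.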
[Yahoo-eng-team] [Bug 1587821] Re: Absolute formatting used in help texts
Nova seems to have the same issue (a bit less visible): http://docs.openstack.org/developer/nova/sample_config.html (e.g. config_drive_skip_versions)

** Also affects: nova Importance: Undecided Status: New

https://bugs.launchpad.net/bugs/1587821

Title: Absolute formatting used in help texts

Status in Glance: In Progress
Status in OpenStack Compute (nova): New
Status in osprofiler: New

Bug description: Absolute formatting breaks config-generator-generated example configs. A good example of what happens is here: https://review.openstack.org/#/c/323661/1/etc/glance-api.conf The cache-related options and the [profiler] section help texts get broken due to line lengths.
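To illustrate the failure mode generically (this is not the actual oslo config generator code, just a sketch of the mechanism): when a help string relies on absolute, hand-made line breaks and bullet markers, a tool that re-wraps text to its own fixed width collapses that layout.

```python
import textwrap

# Help text with deliberate, absolute formatting (a bullet list).
help_text = (
    "Possible values:\n"
    "  * none  - no special handling\n"
    "  * all   - skip every version\n"
)

# A naive generator that normalizes whitespace and re-wraps to its own
# width, destroying the author's line breaks and indentation.
rewrapped = textwrap.fill(' '.join(help_text.split()), width=40)
print(rewrapped)
```

The bullets end up run together mid-line, which is exactly the kind of breakage visible in the glance-api.conf review linked above.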
[Yahoo-eng-team] [Bug 1565785] Fix merged to nova (master)
Reviewed: https://review.openstack.org/301859
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c469b8466fc5ff5514957a0fbd17d141761774c8
Submitter: Jenkins
Branch: master

commit c469b8466fc5ff5514957a0fbd17d141761774c8
Author: Nikola Dipanov
Date: Tue Apr 5 18:09:53 2016 +0100

    pci: make sure device relationships are kept in memory

    The `pci_devs` attribute of the PciDevTracker class is the in-memory
    "master copy" of all devices on each compute host, and all data
    changes that happen when claiming/allocating/freeing devices HAVE TO
    be made against instances contained in the `pci_devs` list, because
    they are periodically flushed to the DB when the save() method is
    called.

    Due to this we need to make sure all the relationships are available
    to the code using them (claiming/allocation/freeing methods). We do
    this by simply keeping a tree structure, referencing parent/children
    from the objects themselves. This is done on every update of the
    state of PCI devices (on compute service start-up, and on every
    resource tracker pass), so that this information is always as up to
    date as the in-memory view of devices.

    This change adds the code to build up the tree, and subsequent
    changes will make sure the newly added relationships are used when
    needed. We also add 2 non-versioned fields to the PciDevice object
    to hold the references.

    Co-Authored-By: Sahid Ferdjaoui
    Change-Id: Id6868b7839efb2cd53f5f7aaac2c55d169356ce4
    Partial-bug: #1565785

** Changed in: nova Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1565785

Title: SR-IOV PF passthrough device claiming/allocation does not work for physical function devices

Status in OpenStack Compute (nova): Fix Released

Bug description: Enable PCI passthrough on a compute host (whitelisting devices is explained in more detail in the docs), and create a network, subnet and a port that represents an SR-IOV physical function passthrough:

$ neutron net-create --provider:physical_network=phynet --provider:network_type=flat sriov-net
$ neutron subnet-create sriov-net 192.168.2.0/24 --name sriov-subne
$ neutron port-create sriov-net --binding:vnic_type=direct-physical --name pf

After that, try to boot an instance using the created port (provided pci_passthrough_whitelist was set up correctly); this should work:

$ nova boot --image xxx --flavor 1 --nic port-id=$PORT_ABOVE testvm

My test env has 2 PFs with 7 VFs each. After spawning an instance, the PF gets marked as allocated, but none of the VFs do, even though they are removed from the host (note that device_pools are correctly updated). So after the instance was successfully booted we get:

MariaDB [nova]> select count(*) from pci_devices where status="available" and deleted=0;
+----------+
| count(*) |
+----------+
|       15 |
+----------+

# This should be 8 - we are leaking 7 VFs belonging to the attached PF that never get updated.
MariaDB [nova]> select pci_stats from compute_nodes;

| pci_stats |
{"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": [{"nova_object.version": "1.1", "nova_object.changes": ["count", "numa_node", "vendor_id", "product_id", "tags"], "nova_object.name": "PciDevicePool", "nova_object.data": {"count": 1, "numa_node": 0, "vendor_id": "8086", "product_id": "1521", "tags": {"dev_type": "type-PF", "physical_network": "phynet"}}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["count", "numa_node", "vendor_id", "product_id", "tags"], "nova_object.name": "PciDevicePool", "nova_object.data": {"count": 7, "numa_node": 0, "vendor_id": "8086", "product_id": "1520", "tags": {"dev_type": "type-VF", "physical_network": "phynet"}}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"} |

This is correct - it shows 8 available devices. Once a new resource tracker run happens we hit https://bugs.launchpad.net/nova/+bug/1565721, so we stop updating based on what is found on the host. The root cause of this is (I believe) that we update PCI objects in the local scope, but never call save() on those particular instances. So we grab and update the status here: https://github.com/opensta
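The in-memory parent/child tree that the fix above introduces can be sketched roughly like this (a simplified illustration under assumed names, not Nova's actual PciDevice object):

```python
class Device:
    def __init__(self, address, dev_type, parent_addr=None):
        self.address = address
        self.dev_type = dev_type        # 'type-PF' or 'type-VF'
        self.parent_addr = parent_addr  # VFs record their PF's address
        self.parent_device = None       # filled in by link_device_tree
        self.child_devices = []

def link_device_tree(devices):
    """Rebuild parent/child references so that status changes made
    through a VF can also reach its PF (and vice versa) on the same
    in-memory objects that later get flushed to the DB."""
    by_addr = {d.address: d for d in devices}
    for d in devices:
        d.child_devices = []
    for d in devices:
        if d.dev_type == 'type-VF' and d.parent_addr in by_addr:
            d.parent_device = by_addr[d.parent_addr]
            d.parent_device.child_devices.append(d)
    return devices
```

Rebuilding the links on every refresh (rather than once) mirrors the commit's point that the tree must stay as current as the in-memory device list itself.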
[Yahoo-eng-team] [Bug 1568400] Re: Pecan does not route to QoS extension
Yeah, it is fixed by that, although the sub-resources for QoS (like bandwidthlimitrules) are still not working; that's a separate issue.

** Changed in: neutron Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1568400

Title: Pecan does not route to QoS extension

Status in neutron: Fix Released

Bug description: When using the Pecan WSGI framework and the QoS extension and service plugin, any requests to /v2.0/qos/policies.json fail with a 400.
[Yahoo-eng-team] [Bug 1587951] [NEW] pure project admin can't view projects
Public bug reported: With Domains enabled, if you create a Project with a user who is an admin on that project (but not a domain admin), the Identity > Projects panel will return a 500 error: a pure project admin doesn't have a domain token.

Internal Server Error: /identity/
Traceback (most recent call last):
  File "/Users/rpeters/openstack/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/Users/rpeters/openstack/horizon/horizon/decorators.py", line 36, in dec
    return view_func(request, *args, **kwargs)
  File "/Users/rpeters/openstack/horizon/horizon/decorators.py", line 52, in dec
    return view_func(request, *args, **kwargs)
  File "/Users/rpeters/openstack/horizon/horizon/decorators.py", line 36, in dec
    return view_func(request, *args, **kwargs)
  File "/Users/rpeters/openstack/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/Users/rpeters/openstack/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 89, in dispatch
    return handler(request, *args, **kwargs)
  File "/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 215, in get
    handled = self.construct_tables()
  File "/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 206, in construct_tables
    handled = self.handle_table(table)
  File "/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 121, in handle_table
    data = self._get_data_dict()
  File "/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 243, in _get_data_dict
    self._data = {self.table_class._meta.name: self.get_data()}
  File "/Users/rpeters/openstack/horizon/openstack_dashboard/dashboards/identity/projects/views.py", line 115, in get_data
    t.domain_name = domain_lookup.get(t.domain_id)
AttributeError: 'NoneType' object has no attribute 'get'
[01/Jun/2016 15:10:37] "GET /identity/
HTTP/1.1" 500 324035 This is due to this section of code in openstack_dashboard/dashboards/identity/projects/views.py returning None for domain_lookup, making the .get() grumpy. if api.keystone.VERSIONS.active >= 3: domain_lookup = api.keystone.domain_lookup(self.request) for t in tenants: t.domain_name = domain_lookup.get(t.domain_id) return tenants ** Affects: horizon Importance: Undecided Assignee: Ryan Peters (rjpeter2) Status: New ** Changed in: horizon Assignee: (unassigned) => Ryan Peters (rjpeter2) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1587951 Title: pure project admin can't view projects Status in OpenStack Dashboard (Horizon): New Bug description: With Domains enabled, if you create a Project, with a user who is an admin on that project (but not a domain admin), the Identity > Projects panel will return a 500 error: Pure project admin doesn't have a domain token Internal Server Error: /identity/ Traceback (most recent call last): File "/Users/rpeters/openstack/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/rpeters/openstack/horizon/horizon/decorators.py", line 36, in dec return view_func(request, *args, **kwargs) File "/Users/rpeters/openstack/horizon/horizon/decorators.py", line 52, in dec return view_func(request, *args, **kwargs) File "/Users/rpeters/openstack/horizon/horizon/decorators.py", line 36, in dec return view_func(request, *args, **kwargs) File "/Users/rpeters/openstack/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 71, in view return self.dispatch(request, *args, **kwargs) File "/Users/rpeters/openstack/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 89, in dispatch return handler(request, *args, **kwargs) File 
"/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 215, in get handled = self.construct_tables() File "/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 206, in construct_tables handled = self.handle_table(table) File "/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 121, in handle_table data = self._get_data_dict() File "/Users/rpeters/openstack/horizon/horizon/tables/views.py", line 243, in _get_data_dict self._data = {self.table_class._meta.name: self.get_data()} File "/Users/rpeters/openstack/horizon/openstack_dashboard/dashboards/identity/projects/views.py", line 115, in get_data t.domain_name = domain_lookup.get(t.domain_id) AttributeError: 'NoneType' object has no a
[Yahoo-eng-team] [Bug 1587944] [NEW] incorrect title of quota:vif_outbound_peak
Public bug reported: in metadefs/compute-quota.json:

    ...
    "quota:vif_outbound_burst": {
        "title": "Quota: VIF Outbound Burst",
        "description": "Network Virtual Interface (VIF) outbound burst in total kilobytes. Specifies the amount of bytes that can be burst at peak speed.",
        "type": "integer"
    },
    "quota:vif_outbound_peak": {
        "title": "Quota: VIF Outbound Burst",
        "description": "Network Virtual Interface (VIF) outbound peak in kilobytes per second. Specifies maximum rate at which an interface can send data.",
        "type": "integer"
    }
    ...

the title of quota:vif_outbound_peak should be "Quota: VIF Outbound Peak":

    102c102
    <     "title": "Quota: VIF Outbound Burst",
    ---
    >     "title": "Quota: VIF Outbound Peak",

** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1587944 Title: incorrect title of quota:vif_outbound_peak Status in Glance: New Bug description: in metadefs/compute-quota.json: ... "quota:vif_outbound_burst": { "title": "Quota: VIF Outbound Burst", "description": "Network Virtual Interface (VIF) outbound burst in total kilobytes. Specifies the amount of bytes that can be burst at peak speed.", "type": "integer" }, "quota:vif_outbound_peak": { "title": "Quota: VIF Outbound Burst", "description": "Network Virtual Interface (VIF) outbound peak in kilobytes per second. Specifies maximum rate at which an interface can send data.", "type": "integer" } ... the title of quota:vif_outbound_peak should be "Quota: VIF Outbound Peak": 102c102 < "title": "Quota: VIF Outbound Burst", --- > "title": "Quota: VIF Outbound Peak", To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1587944/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561188] Re: DB api: remove deprecated methods
Reviewed: https://review.openstack.org/295706 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=15d1612a993f80b9b6be1a0ac6738eecc6bb8c52 Submitter: Jenkins Branch:master commit 15d1612a993f80b9b6be1a0ac6738eecc6bb8c52 Author: Gary Kotton Date: Tue Mar 22 03:16:38 2016 -0700 DB: remove deprecated warnings Remove object API's that are marked for deprecation in N. These were marked as deprecated in commit 4b227c3771eba1cbaa27c6c33829108981cd9b69 Closes-bug: #1561188 Change-Id: I720d6952bb3297099c165f2ff95362b428deaea2 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1561188 Title: DB api: remove deprecated methods Status in neutron: Fix Released Bug description: Tracker for removing the deprecated methods in neutron/db/api.py. The commit where these were added is 4b227c3771eba1cbaa27c6c33829108981cd9b69 : * get_object * get_objects * create_object * _safe_get_object * update_object * delete_object To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1561188/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1585373] Re: qos-policy update without specify --shared causing it change to default False
** Project changed: networking-qos => neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1585373 Title: qos-policy update without specify --shared causing it change to default False Status in neutron: Confirmed Bug description: update policy 3k-bm-limiter as shared policy. update policy 3k-bm-limiter with only name field, causing default field shared=False being used. Here is the console log:

nicira@newton-devstack:~$ neutron qos-policy-show 3k-bm-limiter
+-------------+--------------------------------------------------------------+
| Field       | Value                                                        |
+-------------+--------------------------------------------------------------+
| description | bw-limit 3k                                                  |
| id          | 163c5fc1-7bf2-455b-a92c-4118fc612822                         |
| name        | 3k-bm-limiter                                                |
| rules       | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
| shared      | False                                                        |
| tenant_id   | 1cf34eba3d3240a68966ef61567c5650                             |
+-------------+--------------------------------------------------------------+
nicira@newton-devstack:~$ neutron qos-policy-update --shared 3k-bm-limiter
Updated policy: 3k-bm-limiter
nicira@newton-devstack:~$ neutron qos-policy-show 3k-bm-limiter
+-------------+--------------------------------------------------------------+
| Field       | Value                                                        |
+-------------+--------------------------------------------------------------+
| description | bw-limit 3k                                                  |
| id          | 163c5fc1-7bf2-455b-a92c-4118fc612822                         |
| name        | 3k-bm-limiter                                                |
| rules       | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
| shared      | True                                                         |
| tenant_id   | 1cf34eba3d3240a68966ef61567c5650                             |
+-------------+--------------------------------------------------------------+
nicira@newton-devstack:~$ neutron qos-policy-update --name=bw-limiter 3k-bm-limiter
Updated policy: 3k-bm-limiter
nicira@newton-devstack:~$ neutron qos-policy-show bw-limiter
+-------------+--------------------------------------------------------------+
| Field       | Value                                                        |
+-------------+--------------------------------------------------------------+
| description | bw-limit 3k                                                  |
| id          | 163c5fc1-7bf2-455b-a92c-4118fc612822                         |
| name        | bw-limiter                                                   |
| rules       | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
| shared      | False                                                        |
| tenant_id   | 1cf34eba3d3240a68966ef61567c5650                             |
+-------------+--------------------------------------------------------------+
nicira@newton-devstack:~$

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1585373/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1585373] [NEW] qos-policy update without specify --shared causing it change to default False
You have been subscribed to a public bug: update policy 3k-bm-limiter as shared policy. update policy 3k-bm-limiter with only name field, causing default field shared=False being used. Here is the console log:

nicira@newton-devstack:~$ neutron qos-policy-show 3k-bm-limiter
+-------------+--------------------------------------------------------------+
| Field       | Value                                                        |
+-------------+--------------------------------------------------------------+
| description | bw-limit 3k                                                  |
| id          | 163c5fc1-7bf2-455b-a92c-4118fc612822                         |
| name        | 3k-bm-limiter                                                |
| rules       | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
| shared      | False                                                        |
| tenant_id   | 1cf34eba3d3240a68966ef61567c5650                             |
+-------------+--------------------------------------------------------------+
nicira@newton-devstack:~$ neutron qos-policy-update --shared 3k-bm-limiter
Updated policy: 3k-bm-limiter
nicira@newton-devstack:~$ neutron qos-policy-show 3k-bm-limiter
+-------------+--------------------------------------------------------------+
| Field       | Value                                                        |
+-------------+--------------------------------------------------------------+
| description | bw-limit 3k                                                  |
| id          | 163c5fc1-7bf2-455b-a92c-4118fc612822                         |
| name        | 3k-bm-limiter                                                |
| rules       | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
| shared      | True                                                         |
| tenant_id   | 1cf34eba3d3240a68966ef61567c5650                             |
+-------------+--------------------------------------------------------------+
nicira@newton-devstack:~$ neutron qos-policy-update --name=bw-limiter 3k-bm-limiter
Updated policy: 3k-bm-limiter
nicira@newton-devstack:~$ neutron qos-policy-show bw-limiter
+-------------+--------------------------------------------------------------+
| Field       | Value                                                        |
+-------------+--------------------------------------------------------------+
| description | bw-limit 3k                                                  |
| id          | 163c5fc1-7bf2-455b-a92c-4118fc612822                         |
| name        | bw-limiter                                                   |
| rules       | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
| shared      | False                                                        |
| tenant_id   | 1cf34eba3d3240a68966ef61567c5650                             |
+-------------+--------------------------------------------------------------+
nicira@newton-devstack:~$

** Affects: neutron Importance: Undecided Assignee: ugvddm (271025598-9) Status: Confirmed ** Tags: qos -- qos-policy update without specify --shared causing it change to default False https://bugs.launchpad.net/bugs/1585373 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
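The failure mode in this report boils down to partial-update (PUT) semantics: only attributes present in the request body should change, so an update naming only "name" must not silently reset "shared" to False. A sketch of that rule with a hypothetical helper, not neutron's actual server code:

```python
# Hedged sketch: merge only the fields the request actually provides,
# leaving unspecified attributes (like "shared") at their stored values.
def update_policy(stored, request_body):
    updated = dict(stored)
    for field, value in request_body.items():
        if field in stored:  # ignore unknown keys for brevity
            updated[field] = value
    return updated
```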
[Yahoo-eng-team] [Bug 1587922] [NEW] miss exception handlers if integration test is skipped
Public bug reported: Right now exception handlers catch skipped tests too, because a skip is an instance of an exception. So we need to take this into account when aggregating results. ** Affects: horizon Importance: Undecided Assignee: Sergei Chipiga (schipiga) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1587922 Title: miss exception handlers if integration test is skipped Status in OpenStack Dashboard (Horizon): In Progress Bug description: Right now exception handlers catch skipped tests too, because a skip is an instance of an exception. So we need to take this into account when aggregating results. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1587922/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
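A minimal illustration of the problem described here: unittest.SkipTest subclasses Exception, so a broad handler swallows skips unless it re-raises them first. This is a sketch, not Horizon's integration-test harness:

```python
import unittest

# Re-raise SkipTest before generic error handling so the runner records a
# skip rather than a failure. (Illustrative only.)
def run_step(step):
    try:
        step()
    except unittest.SkipTest:
        raise  # propagate the skip to the test runner
    except Exception as exc:
        return "failed: %s" % exc
    return "passed"
```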
[Yahoo-eng-team] [Bug 1544295] Re: Unable to place static in panel directory
Reviewed: https://review.openstack.org/246683 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=b2d7322e56197e7d8b324e1335bbe544fd2ee5d3 Submitter: Jenkins Branch:master commit b2d7322e56197e7d8b324e1335bbe544fd2ee5d3 Author: Thai Tran Date: Wed Feb 10 13:48:31 2016 -0800 Panel static finder We are only able to store static content at the dashboard (app) level. This forces us to store all of our statics under the dashboard's static folder. This patch introduces a panel static finder. Django's static finder will now find files inside the panel's static folder as well. This is more inline with what we have today for python plugins. Change-Id: I2bb3f78abf8854dbad8f1697d942f94d36014d41 Closes-Bug: #1544295 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1544295 Title: Unable to place static in panel directory Status in OpenStack Dashboard (Horizon): Fix Released Bug description: We are able to currently have a static folder in dashboard but not a panel (which is a subdirectory of dashboard). It would be much nicer to be able to place it under each panel instead. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1544295/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587901] [NEW] Cant get listener connected with l7policy through neutron api
Public bug reported: I'm in the process of adding l7policy and l7rule resource support to Heat. In the corresponding plugins I sometimes need to check the loadbalancer status. When I try to do such a check from an l7rule, I have only the id of the l7policy, and I have no way to get the loadbalancer id through the API: the Neutron API returns neither the listener from show_lbaas_l7policy nor l7policies from list_listeners/show_listener. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1587901 Title: Cant get listener connected with l7policy through neutron api Status in neutron: New Bug description: I'm in the process of adding l7policy and l7rule resource support to Heat. In the corresponding plugins I sometimes need to check the loadbalancer status. When I try to do such a check from an l7rule, I have only the id of the l7policy, and I have no way to get the loadbalancer id through the API: the Neutron API returns neither the listener from show_lbaas_l7policy nor l7policies from list_listeners/show_listener. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1587901/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587893] [NEW] Infinity quota values are not parsed by JSON.parse for getAbsoluteLimits
Public bug reported: When some of the values for Cinder quota are set to -1 (which means "infinity" quota), the REST API call for getAbsoluteLimits [1] crashes. That JavaScript function makes a request to http://localhost/api/cinder/tenantabsolutelimits/ That request generates a response like:

{"totalSnapshotsUsed": 0, "total_volumes_standard": 10, "maxTotalBackups": 10, "totalBackupsUsed": 0, "maxTotalBackupGigabytes": 1000, "maxTotalVolumes": 10, "used_volumes_standard": 1, "maxTotalVolumeGigabytes": 1000, "totalVolumesUsed": 1, "total_gigabytes_standard": 1000, "totalBackupGigabytesUsed": 0, "maxTotalSnapshots": 0, "used_gigabytes_standard": 1, "totalGigabytesUsed": 1}

So far, so good. Now, update the cinder quota for your project, for example, the snapshots quota, to -1. The response changes to:

{"totalSnapshotsUsed": 0, "total_volumes_standard": 10, "maxTotalBackups": 10, "totalBackupsUsed": 0, "maxTotalBackupGigabytes": 1000, "maxTotalVolumes": 10, "used_volumes_standard": 1, "maxTotalVolumeGigabytes": 1000, "totalVolumesUsed": 1, "total_gigabytes_standard": 1000, "totalBackupGigabytesUsed": 0, "maxTotalSnapshots": Infinity, "used_gigabytes_standard": 1, "totalGigabytesUsed": 1}

Please note that "maxTotalSnapshots": Infinity. That dictionary, for python, is not a problem. But for the JavaScript parser, it is a big issue.

Now, please open your browser, for example, Firefox, open the Console (from Developer tools), and execute this in the console:

>> JSON.parse('{"totalSnapshotsUsed": 0, "total_volumes_standard": 10, "maxTotalBackups": 10, "totalBackupsUsed": 0, "maxTotalBackupGigabytes": 1000, "maxTotalVolumes": 10, "used_volumes_standard": 1, "maxTotalVolumeGigabytes": 1000, "totalVolumesUsed": 1, "total_gigabytes_standard": 1000, "totalBackupGigabytesUsed": 0, "maxTotalSnapshots": 0, "used_gigabytes_standard": 1, "totalGigabytesUsed": 1}')

That worked fine, no issues, but now try this (with infinity quota for snapshots):

>> JSON.parse('{"totalSnapshotsUsed": 0, "total_volumes_standard": 10, "maxTotalBackups": 10, "totalBackupsUsed": 0, "maxTotalBackupGigabytes": 1000, "maxTotalVolumes": 10, "used_volumes_standard": 1, "maxTotalVolumeGigabytes": 1000, "totalVolumesUsed": 1, "total_gigabytes_standard": 1000, "totalBackupGigabytesUsed": 0, "maxTotalSnapshots": Infinity, "used_gigabytes_standard": 1, "totalGigabytesUsed": 1}')

It fails, raising this message: SyntaxError: JSON.parse: unexpected character at line 1 column 329 of the JSON data.

Summarizing, the issue in Horizon is due to -1 values in the cinder quota, which are translated in python to infinity values (see here [2]), and "Infinity" is not a proper value for the JSON.parse function. [1] https://github.com/openstack/horizon/blob/e1f07e27944b505dec57dda20d3c3b13eb3bb4d7/openstack_dashboard/static/app/core/openstack-service-api/cinder.service.js#L284 [2] https://github.com/openstack/horizon/blob/e1f07e27944b505dec57dda20d3c3b13eb3bb4d7/openstack_dashboard/api/cinder.py#L779 ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). 
https://bugs.launchpad.net/bugs/1587893 Title: Infinity quota values are not parsed by JSON.parse for getAbsoluteLimits Status in OpenStack Dashboard (Horizon): New Bug description: When some of the values for Cinder quota are set to -1 (which means "infinity" quota), the REST API call for getAbsoluteLimits [1] crashes. That JavaScript function makes a request to http://localhost/api/cinder/tenantabsolutelimits/ That request generates a response like: {"totalSnapshotsUsed": 0, "total_volumes_standard": 10, "maxTotalBackups": 10, "totalBackupsUsed": 0, "maxTotalBackupGigabytes": 1000, "maxTotalVolumes": 10, "used_volumes_standard": 1, "maxTotalVolumeGigabytes": 1000, "totalVolumesUsed": 1, "total_gigabytes_standard": 1000, "totalBackupGigabytesUsed": 0, "maxTotalSnapshots": 0, "used_gigabytes_standard": 1, "totalGigabytesUsed": 1} So far, so good. Now, update the cinder quota for your project, for example, the snapshots quota, to -1. The response changes to: {"totalSnapshotsUsed": 0, "total_volumes_standard": 10, "maxTotalBackups": 10, "totalBackupsUsed": 0, "maxTotalBackupGigabytes": 1000, "maxTotalVolumes": 10, "used_volumes_standard": 1, "maxTotalVolumeGigabytes": 1000, "totalVolumesUsed": 1, "total_gigabytes_standard": 1000, "totalBackupGigabytesUsed": 0, "maxTotalSnapshots": Infinity, "used_gigabytes_standard": 1, "totalGigabytesUsed": 1} Please note that "maxTotalSnapshots": Infinity. That dictionary, for python, is not a problem. But for the JavaScript parser, it is a big issue. Now, please open your browser, for example, Firefox, open the Console (from Developer tools), and execute this in the console: >> JSON.parse('{"totalSnapshotsUsed": 0, "total_volumes_standa
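The Infinity behaviour is easy to reproduce from Python's json module, whose dumps() emits the non-standard token Infinity for float('inf'). The helper below is a hypothetical sketch of the fix idea (map infinite limits to a sentinel), not Horizon's actual code:

```python
import json
import math

# Hedged sketch: translate infinite quota values into a JSON-safe sentinel
# before serializing, since strict parsers like JSON.parse reject "Infinity".
def jsonsafe_limits(limits, sentinel=-1):
    return {k: (sentinel if isinstance(v, float) and math.isinf(v) else v)
            for k, v in limits.items()}

# json.dumps happily emits the bare word Infinity, which JSON.parse rejects:
raw = json.dumps({"maxTotalSnapshots": float("inf")})
safe = json.dumps(jsonsafe_limits({"maxTotalSnapshots": float("inf")}))
```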
[Yahoo-eng-team] [Bug 1578233] Re: functional tests fail with native ovsdb_interface enabled
Reviewed: https://review.openstack.org/312697 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=bffc5f062c15757b58a597e0f1e380aa4d9166c6 Submitter: Jenkins Branch:master commit bffc5f062c15757b58a597e0f1e380aa4d9166c6 Author: Inessa Vasilevskaya Date: Wed May 4 17:36:47 2016 +0300 functional: fix OVSFW failure with native OVSDB api A bunch of functional tests fail because of non implemented x != [] operation in idlutils.condition_match() and wrong condition passed to db_find() in OVSFW test. This patch addresses the issue by implementing lists comparison in native.idlutils and fixing the call to db_find() in OVSFW test. A functional test for OVSDB API's db_find() has been added to ensure that querying a list column gives the same result both with vsctl and native ovsdb_interface; unit test for idlutils.condition_match() with corner cases has been added as well. Change-Id: Ia93fb925b8814210975904a453249f15f3646855 Closes-bug: #1578233 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1578233 Title: functional tests fail with native ovsdb_interface enabled Status in neutron: Fix Released Bug description: When ovsdb_interface in config is set to native, a bunch of functional tests fail because of non-implemented comparison-to-list in search query (https://github.com/openstack/neutron/blob/master/neutron/agent/ovsdb/native/idlutils.py#L169) Typical output for neutron.tests.functional.agent.test_firewall tests:

ft29.1: neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_allowed_address_pairs(OVS Firewall Driver)_StringException: Empty attachments: pythonlogging:'' stderr stdout

Traceback (most recent call last):
  File "neutron/tests/functional/agent/test_firewall.py", line 98, in setUp
    self.assign_vlan_to_peers()
  File "neutron/tests/functional/agent/test_firewall.py", line 132, in assign_vlan_to_peers
    vlan = self.get_not_used_vlan()
  File "neutron/tests/functional/agent/test_firewall.py", line 140, in get_not_used_vlan
    used_vlan_tags = {val['tag'] for val in port_vlans}
  File "neutron/tests/functional/agent/test_firewall.py", line 140, in <setcomp>
    used_vlan_tags = {val['tag'] for val in port_vlans}
TypeError: unhashable type: 'list'

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1578233/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
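The missing list-comparison case (the "x != []" condition the fix implements) can be sketched as below. This is a simplified illustration assuming equality/inequality only; the real idlutils.condition_match handles more operators and OVSDB set semantics:

```python
# Hedged sketch of matching a column value against a list operand.
# Not neutron's actual implementation.
def condition_match(value, op, operand):
    if isinstance(operand, list) or isinstance(value, list):
        # Normalize both sides to lists so scalar-vs-list queries work.
        left = value if isinstance(value, list) else [value]
        right = operand if isinstance(operand, list) else [operand]
        if op == "=":
            return left == right
        if op == "!=":
            return left != right
        raise NotImplementedError(op)
    if op == "=":
        return value == operand
    if op == "!=":
        return value != operand
    raise NotImplementedError(op)
```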
[Yahoo-eng-team] [Bug 1587847] Re: flavorRef is missing from the parameter list of Resize server
This is an api-ref issue for nova. ** Project changed: openstack-api-site => nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1587847 Title: flavorRef is missing from the parameter list of Resize server Status in OpenStack Compute (nova): New Bug description: According to the example, Change administrative password has a parameter called "flavorRef". This "flavorRef" parameter is not listed in the Parameter list of Change administrative password in the web API reference [1] and not mentioned in Chapter 4.4.20 of the pdf API reference [2]. [1]: http://developer.openstack.org/api-ref-compute-v2.1.html#changePassword [2]: http://api.openstack.org/api-ref-guides/bk-api-ref.pdf To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1587847/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587847] [NEW] flavorRef is missing from the parameter list of Resize server
You have been subscribed to a public bug: According to the example, Change administrative password has a parameter called "flavorRef". This "flavorRef" parameter is not listed in the Parameter list of Change administrative password in the web API reference [1] and not mentioned in Chapter 4.4.20 of the pdf API reference [2]. [1]: http://developer.openstack.org/api-ref-compute-v2.1.html#changePassword [2]: http://api.openstack.org/api-ref-guides/bk-api-ref.pdf ** Affects: nova Importance: Undecided Status: New -- flavorRef is missing from the parameter list of Resize server https://bugs.launchpad.net/bugs/1587847 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587806] Re: XSS in kibana elasticsearch proxy
** Project changed: horizon => monasca ** Changed in: monasca Assignee: (unassigned) => Dobroslaw Zybort (dobroslaw-zybort) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1587806 Title: XSS in kibana elasticsearch proxy Status in Monasca: New Bug description: Detailed bug description: There is XSS in the kibana elasticsearch proxy. The problem does not exist on Chrome/Chromium (50.0.2661.102, Ubuntu 16.04 (64-bit)) but is observable on Firefox (46.0.1).

Steps to reproduce:
1. Log in to the OpenStack dashboard.
2. Rewrite the URL string of the browser's address bar like below: new URL: /dashboard/monitoring/logs_proxy/elasticsearch/*/_field_stats?level=alert(1155)
3. Press the enter key.

Expected results: HTML control characters, JavaScript and so on are properly escaped or rejected. Actual result: JavaScript is executed on the error page and a message box is shown. Reproducibility: 100%

[Variations] The following parameters for 'level' may cause similar issues. AppScan detected these issues.
- level=indicesalert(10081)
- level=indices%27%22%2F%3E%3Cscript%3Ealert%2810083%29%3C%2Fscript%3E
- level=indices%27%22%2F%3E%3Ciframe+src%3Djavascript%3Aalert%2810088%29+
- level=indices%27%22%2F%3E%3Ciframe+src%3Djavascript%3Aalert%2810089%29%3E
- level=indices%27%22%2F%3E%3Cimg+src%3Djavascript%3Aalert%2810093%29+
- level=indices%27%22%2F%3E%3Cimg+src%3Djavascript%3Aalert%2810094%29%3E
- level=indicesalert(10081)
- level=indicesalert(10083)
- level=indices
- level=indices

To manage notifications about this bug go to: https://bugs.launchpad.net/monasca/+bug/1587806/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
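The general fix class for this kind of reflected XSS is to escape (or reject against a whitelist) any request-supplied value before echoing it into an error page. A generic Python sketch, not monasca/kibana code:

```python
import html

# Hedged sketch: HTML-escape the user-controlled 'level' parameter before
# reflecting it in an error response, neutralizing injected markup.
def render_error(level):
    return "<p>Unknown level: %s</p>" % html.escape(level, quote=True)
```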
[Yahoo-eng-team] [Bug 1451991] Re: Quota related error should return 403 error
Reviewed: https://review.openstack.org/319000 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d3c9778b6ad80208daf2923ca137778a109c34bc Submitter: Jenkins Branch:master commit d3c9778b6ad80208daf2923ca137778a109c34bc Author: kevin shen <372712...@qq.com> Date: Fri May 20 09:00:10 2016 +0800 fix Quota related error return incorrect problem in Nova API layer, 400 is returned (HTTPBadRequest) if Quota is exceeded, this is not correct we should return 403 (HTTPForbidden) instead Change-Id: Ifd9d3c6db5b3ec56c744255bf26575f748e13ff3 Closes-Bug: #1451991 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1451991 Title: Quota related error should return 403 error Status in OpenStack Compute (nova): Fix Released Bug description: in Nova API layer, 400 is returned (HTTPBadRequest) if Quota is exceeded, this is not correct we should return 403 (HTTPForbidden) instead To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1451991/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
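The fix pattern above can be sketched as a status-code mapping (names hypothetical, not nova's actual exception handling): a quota-exceeded request is well-formed, so it maps to 403 Forbidden rather than 400 Bad Request.

```python
# Hedged sketch of mapping quota errors to the right HTTP status.
class OverQuota(Exception):
    pass

def status_for(exc):
    if isinstance(exc, OverQuota):
        return 403  # HTTPForbidden: valid request, caller not allowed more
    return 400      # HTTPBadRequest: genuinely malformed input
```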
[Yahoo-eng-team] [Bug 1587800] Re: There is a judgment repeat in neutronclient and lbaas
This is not a bug as requests can be sent via API as well as CLI, so validation on the server side is also required. ** Tags added: lbaas ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1587800 Title: There is a judgment repeat in neutronclient and lbaas Status in neutron: Invalid Bug description: In lbaasv2, when I create a listener with no loadbalancer and no default_pool_id, there will be an error. But this judgment is repeated in neutronclient and the plugin.

Code in neutronclient:

    if not parsed_args.loadbalancer and not parsed_args.default_pool:
        message = _('Either --default-pool or --loadbalancer must be '
                    'specified.')
        raise exceptions.CommandError(message)

Code in the plugin:

    elif not lb_id:
        raise sharedpools.ListenerMustHaveLoadbalancer()

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1587800/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587858] [NEW] objects: added 'os_secure_boot' property to ImageMetaProps object
Public bug reported: https://review.openstack.org/237593 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/nova" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit 3b0c9a3f14e11737fff47146b512d93116654234 Author: Simona Iuliana Toader Date: Tue Oct 20 16:52:34 2015 +0300 objects: added 'os_secure_boot' property to ImageMetaProps object Secure Boot feature will be enabled by setting the "os_secure_boot" image property to "required". Other options can be: "disabled" or "optional". DocImpact Partially Implements: blueprint hyper-v-uefi-secureboot Change-Id: Id53f934fccb020dcc6bae9e13f53cbb3df3dcd92 ** Affects: nova Importance: Undecided Status: New ** Tags: doc nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1587858 Title: objects: added 'os_secure_boot' property to ImageMetaProps object Status in OpenStack Compute (nova): New Bug description: https://review.openstack.org/237593 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/nova" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit 3b0c9a3f14e11737fff47146b512d93116654234 Author: Simona Iuliana Toader Date: Tue Oct 20 16:52:34 2015 +0300 objects: added 'os_secure_boot' property to ImageMetaProps object Secure Boot feature will be enabled by setting the "os_secure_boot" image property to "required". Other options can be: "disabled" or "optional". 
DocImpact Partially Implements: blueprint hyper-v-uefi-secureboot Change-Id: Id53f934fccb020dcc6bae9e13f53cbb3df3dcd92 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1587858/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587719] Re: no option to view the interfaces attached to router from CLI
neutron router-port-list ** Changed in: neutron Status: New => Invalid ** Changed in: neutron Assignee: Sharat Sharma (sharat-sharma) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1587719 Title: no option to view the interfaces attached to router from CLI Status in neutron: Invalid Bug description: There is no way to view the interfaces attached to a router from the CLI. To know the interfaces attached to the router, we have to rely on dashboard. So an extra field has to be added to the router-show table to display the attached interfaces. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1587719/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1584350] Re: etc/glance-registry.conf sample file has redundant store section
Reviewed: https://review.openstack.org/319564 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=a0bddc9a709e264f44b7e07270a91352ffaaa8ac Submitter: Jenkins Branch:master commit a0bddc9a709e264f44b7e07270a91352ffaaa8ac Author: Nikhil Komawar Date: Sat May 21 11:00:37 2016 -0400 Remove redundant store config from registry sample Currently, the oslo config generator takes glance_store configs in consideration while generating sample configs for the registry. Registry doesn't really need these configs. This patch removes the store config namespace from the oslo config generator's setup to avoid regeneration of store section in registry sample. Sample configs have been regenerated using `tox -e genconfig` command to make sure they reflect the change proposed. Only the glance-registry.conf file has been refreshed as a part of this commit. Closes-Bug: 1584350 Change-Id: I27c53d281dcd97a30c22a27c4833b24e1ca84f83 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1584350 Title: etc/glance-registry.conf sample file has redundant store section Status in Glance: Fix Released Bug description: Currently, the oslo config generator takes glance_store configs in consideration while generating sample configs for the registry. Registry doesn't really need these configs. It would be better to get them removed. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1584350/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587498] Re: macvtap agent terminates when NoopFWDriver is specified via "noop" alias
Reviewed: https://review.openstack.org/323414 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=96d1d914ca0976fb6e8c7962d4d97f8553c4847e Submitter: Jenkins Branch:master commit 96d1d914ca0976fb6e8c7962d4d97f8553c4847e Author: Andreas Scheuring Date: Tue May 31 16:31:33 2016 +0200 Macvtap: Allow noop alias as FW driver The macvtap agent only works with the NoopFWDriver. If another driver is configured it terminates. Today only the explicit configuration "neutron.agent.firewall.NoopFirewallDriver" is accepted. This patch enables the macvtap agent to also accept the alias "noop" Change-Id: I0d6f0b780a3881419243f12487e8b3d10e709f6c Closes-Bug: #1587498 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1587498 Title: macvtap agent terminates when NoopFWDriver is specified via "noop" alias Status in neutron: Fix Released Bug description: The macvtap agent only works with the NoopFWDriver. If another driver is configured it terminates. Now there are two ways of configuring the NoopFWDriver 1) directly: neutron.agent.firewall.NoopFirewallDriver 2) via the alias: noop Today only the direct way is supported, although the alias way is also valid. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1587498/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
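The fix described in the message above amounts to treating the short alias and the fully qualified class path as equivalent when validating the agent's firewall driver. A minimal sketch of that check (the constant and function names here are illustrative, not neutron's actual internals; only the full class path string comes from the report):

```python
# Hypothetical sketch of the alias-aware driver check described in the fix.
NOOP_FIREWALL_DRIVER = 'neutron.agent.firewall.NoopFirewallDriver'
NOOP_ALIASES = ('noop', NOOP_FIREWALL_DRIVER)

def is_noop_firewall_driver(configured):
    """True if the configured firewall driver resolves to the noop driver."""
    return configured in NOOP_ALIASES

print(is_noop_firewall_driver('noop'))  # True -- the alias form now passes too
```

With this shape, the agent accepts either spelling instead of terminating on the alias form.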
[Yahoo-eng-team] [Bug 1346857] Re: HyperV driver does not implement Image cache ageing
Reviewed: https://review.openstack.org/192618 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=df499b55522ced1efd3006a394e80a7bf794c4e4 Submitter: Jenkins Branch:master commit df499b55522ced1efd3006a394e80a7bf794c4e4 Author: Adelina Tuvenie Date: Wed Jun 17 03:19:10 2015 -0700 Adds Hyper-V imagecache cleanup At the moment the Hyper-V driver's imagecache only deals with caching the images in _base. This blueprint will deal with implementing imagecache management that will allow the aging and deletion of cached images that are no longer in use. Co-Authored-By: Claudiu Belu Implements blueprint: hyper-v-imagecache-cleanup Closes-Bug: #1346857 Change-Id: I94ee60ed301f18298ae14239190218b9ef54e575 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1346857 Title: HyperV driver does not implement Image cache ageing Status in OpenStack Compute (nova): Fix Released Bug description: Nova.conf has the following options for image cache ageing: remove_unused_base_images and remove_unused_original_minimum_age_seconds. If these conf values are enabled, older unused images cached on the hypervisor hosts will be deleted by nova-compute. The driver should implement the method manage_image_cache(), which will be called by compute.manager._run_image_cache_manager_pass(). The HyperV driver does not implement manage_image_cache(), which means older unused images (VHD/VHDx) will never get deleted and will occupy storage on the HyperV host, which otherwise could be used to spawn instances. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1346857/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587834] [NEW] use_neutron_default_nets: StrOpt ->BoolOpt
Public bug reported: https://review.openstack.org/243061 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/nova" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit d8474e044e820f29f70641b8fd5fd590750441a3 Author: ChangBo Guo(gcb) Date: Mon Nov 9 19:56:48 2015 +0800 use_neutron_default_nets: StrOpt ->BoolOpt Config option use_neutron_default_nets is a StrOpt with value 'True' or 'False'. But the current method _test_network_index uses it as a boolean type; this leads to a comparison against the string 'True' that can never pass, so it doesn't test properly. This commit makes use_neutron_default_nets a BoolOpt. DocImpact: This option is now a BoolOpt and documentation should be updated accordingly. Change-Id: I19a57db073359a9e58a16cd0de39d39aa95d2aa5 ** Affects: nova Importance: Undecided Status: New ** Tags: doc nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1587834 Title: use_neutron_default_nets: StrOpt ->BoolOpt Status in OpenStack Compute (nova): New Bug description: https://review.openstack.org/243061 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/nova" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit d8474e044e820f29f70641b8fd5fd590750441a3 Author: ChangBo Guo(gcb) Date: Mon Nov 9 19:56:48 2015 +0800 use_neutron_default_nets: StrOpt ->BoolOpt Config option use_neutron_default_nets is a StrOpt with value 'True' or 'False'. 
But the current method _test_network_index uses it as a boolean type; this leads to a comparison against the string 'True' that can never pass, so it doesn't test properly. This commit makes use_neutron_default_nets a BoolOpt. DocImpact: This option is now a BoolOpt and documentation should be updated accordingly. Change-Id: I19a57db073359a9e58a16cd0de39d39aa95d2aa5 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1587834/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
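The root cause of the bug above is easy to reproduce in plain Python: a StrOpt hands the code the literal string 'True' or 'False', and any non-empty string is truthy, so a boolean check never sees the intended False. This standalone sketch (no oslo.config dependency; function names are illustrative) shows the difference a BoolOpt makes:

```python
def check_as_stropt(value):
    # Buggy pattern: value is the *string* 'True' or 'False' from a StrOpt.
    return bool(value)

def check_as_boolopt(value):
    # With a BoolOpt, oslo.config has already parsed the value into a bool.
    return value

print(check_as_stropt('False'))   # True -- the bug: 'False' is a truthy string
print(check_as_boolopt(False))    # False -- the intended behaviour
```

Switching the option type to BoolOpt moves the string-to-bool parsing into oslo.config, so consuming code can compare booleans directly.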
[Yahoo-eng-team] [Bug 1587359] Re: oslo_config.cfg.NoSuchOptError: no such option in group DEFAULT: config_dirs
Reviewed: https://review.openstack.org/323428 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a8da782051fc085fcb417c6d277a5e27586e2380 Submitter: Jenkins Branch:master commit a8da782051fc085fcb417c6d277a5e27586e2380 Author: Ihar Hrachyshka Date: Tue May 31 16:51:30 2016 +0200 Guard against config_dirs not defined on ConfigOpts It turned out that if the code extracts the config_dirs value from a ConfigOpts object before config files are parsed, oslo.config will raise a NoSuchOptError exception. This is not a usual mode of operation for the code, since the main() function of the process using it is expected to parse CLI and config files first, but it may nevertheless happen in some test code. This patch guards against those exceptions, falling back to /etc/neutron, as we already do when --config-dir is not specified. Change-Id: I00cf824baa8580b7aa7ec4518a4741e49c998364 Closes-Bug: #1587359 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1587359 Title: oslo_config.cfg.NoSuchOptError: no such option in group DEFAULT: config_dirs Status in bgpvpn: Confirmed Status in neutron: Fix Released Bug description: We have the following traceback in a networking-bgpvpn test [1]. 
2016-05-31 09:04:33.341 | FAIL: networking_bgpvpn.tests.unit.db.test_db.BgpvpnDBTestCase.test_db_associate_twice
2016-05-31 09:04:33.341 | tags: worker-0
2016-05-31 09:04:33.342 | --
2016-05-31 09:04:33.342 | Traceback (most recent call last):
2016-05-31 09:04:33.342 | File "networking_bgpvpn/tests/unit/db/test_db.py", line 29, in setUp
2016-05-31 09:04:33.342 | super(BgpvpnDBTestCase, self).setUp()
2016-05-31 09:04:33.342 | File "networking_bgpvpn/tests/unit/services/test_plugin.py", line 81, in setUp
2016-05-31 09:04:33.342 | {constants.BGPVPN: plugin.BGPVPNPlugin(),
2016-05-31 09:04:33.343 | File "networking_bgpvpn/neutron/services/plugin.py", line 44, in __init__
2016-05-31 09:04:33.343 | pconf.ProviderConfiguration('networking_bgpvpn'))
2016-05-31 09:04:33.343 | File "/tmp/openstack/neutron/neutron/services/provider_configuration.py", line 209, in __init__
2016-05-31 09:04:33.343 | for prov in parse_service_provider_opt(svc_module):
2016-05-31 09:04:33.343 | File "/tmp/openstack/neutron/neutron/services/provider_configuration.py", line 158, in parse_service_provider_opt
2016-05-31 09:04:33.343 | svc_providers_opt = neutron_mod.service_providers()
2016-05-31 09:04:33.343 | File "/tmp/openstack/neutron/neutron/services/provider_configuration.py", line 114, in service_providers
2016-05-31 09:04:33.344 | providers = self.ini().service_providers.service_provider
2016-05-31 09:04:33.344 | File "/tmp/openstack/neutron/neutron/services/provider_configuration.py", line 74, in ini
2016-05-31 09:04:33.344 | neutron_dirs = cfg.CONF.config_dirs or ['/etc/neutron']
2016-05-31 09:04:33.344 | File "/home/jenkins/workspace/gate-networking-bgpvpn-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 2185, in __getattr__
2016-05-31 09:04:33.344 | raise NoSuchOptError(name)
2016-05-31 09:04:33.344 | oslo_config.cfg.NoSuchOptError: no such option in group DEFAULT: config_dirs

This is related to neutron commit 7f31ccb7bbe0f78a34d704c59d0562ea10029893 [2]. 
[1] http://logs.openstack.org/51/232451/18/check/gate-networking-bgpvpn-python27/489d2a5/console.html#_2016-05-31_09_04_33_342 [2] https://github.com/openstack/neutron/commit/7f31ccb7bbe0f78a34d704c59d0562ea10029893 To manage notifications about this bug go to: https://bugs.launchpad.net/bgpvpn/+bug/1587359/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
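The guard the patch above describes can be sketched as follows; FakeConf and the local NoSuchOptError are stand-ins for oslo.config's ConfigOpts and exception so the snippet is self-contained, not neutron's actual code:

```python
class NoSuchOptError(Exception):
    """Stand-in for oslo_config.cfg.NoSuchOptError."""

class FakeConf:
    """Stand-in for a ConfigOpts object before CLI/config files are parsed."""
    @property
    def config_dirs(self):
        raise NoSuchOptError('config_dirs')

def get_neutron_config_dirs(conf):
    # Fall back to /etc/neutron, as is already done when --config-dir is
    # not specified, instead of letting the exception escape.
    try:
        return conf.config_dirs or ['/etc/neutron']
    except NoSuchOptError:
        return ['/etc/neutron']

print(get_neutron_config_dirs(FakeConf()))  # ['/etc/neutron']
```

Test code that builds the plugin before option parsing then gets the default directory instead of the NoSuchOptError traceback shown above.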
[Yahoo-eng-team] [Bug 1587831] [NEW] [SNAT][HA] SNAT traffic broken after restarting network nodes
Public bug reported: After restarting both network nodes (l3 agent_mode=dvr_snat) at the same time, the snat namespaces on the two nodes can't talk to each other, and each promotes itself as the active one. In this case, there are 2 active snat namespaces. Then, once the one that actually takes the SNAT traffic goes down, the other one won't take over the responsibility.

[root@zk22-01 ~]# neutron router-list
+--------------------------------------+------+------------------------------------------------------------+-------------+------+
| id                                   | name | external_gateway_info                                      | distributed | ha   |
+--------------------------------------+------+------------------------------------------------------------+-------------+------+
| c497892b-8ff4-441d-9f4e-43fd30401930 | rt   | {"network_id": "c892d21d-fea9-4d4b-b5f6-276345c7901f",     | True        | True |
|                                      |      | "enable_snat": true, "external_fixed_ips": [{"subnet_id":  |             |      |
|                                      |      | "129df259-0104-400e-8c76-a4d9250eb9c9", "ip_address":      |             |      |
|                                      |      | "192.168.122.4"}]}                                         |             |      |
+--------------------------------------+------+------------------------------------------------------------+-------------+------+

[root@zk22-01 ~]# neutron l3-agent-list-hosting-router c497892b-8ff4-441d-9f4e-43fd30401930
+--------------------------------------+---------+----------------+-------+----------+
| id                                   | host    | admin_state_up | alive | ha_state |
+--------------------------------------+---------+----------------+-------+----------+
| be5526ce-ad40-46af-9dc8-898cf08ebe9b | zk22-01 | True           | :-)   | active   |
| dcdfc230-c5d1-4dd3-b541-a6abac6531ba | zk22-02 | True           | :-)   | active   |
+--------------------------------------+---------+----------------+-------+----------+

[root@zk22-01 ~]# ip netns exec snat-c497892b-8ff4-441d-9f4e-43fd30401930 tcpdump -nn -i ha-004331fc-9f
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ha-004331fc-9f, link-type EN10MB (Ethernet), capture size 65535 bytes
18:59:03.574554 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:05.575500 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:07.576432 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:09.577361 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:11.578293 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:13.579243 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20

[root@zk22-02 ~]# ip netns exec snat-c497892b-8ff4-441d-9f4e-43fd30401930 tcpdump -nn -i ha-dda33de1-3e
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ha-dda33de1-3e, link-type EN10MB (Ethernet), capture size 65535 bytes
18:59:15.918725 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:17.919038 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:19.920036 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:21.921004 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:23.922007 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
18:59:25.923017 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20

** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1587831 Title: [SNAT][HA] SNAT traffic broken after restarting network nodes Status in neutron: New Bug description: After restarting both network nodes (l3 agent_mode=dvr_snat) at the same time, the snat namespaces on the two nodes can't talk to each other, and each promotes itself as the active one. In this case, there are 2 active snat namespaces. Then, once the one that actually takes the SNAT traffic goes down, the other one won't take over the responsibility. [root@zk22-01 ~]# neutron router-list +--
[Yahoo-eng-team] [Bug 1587821] [NEW] Absolute formatting used in help texts
Public bug reported: Absolute formatting breaks the config-generator-generated example configs. A good example of what happens is here: https://review.openstack.org/#/c/323661/1/etc/glance-api.conf The cache-related options and the [profiler] section help texts get broken due to line lengths. ** Affects: glance Importance: Undecided Assignee: Erno Kuvaja (jokke) Status: In Progress ** Affects: osprofiler Importance: Undecided Status: New ** Changed in: glance Assignee: (unassigned) => Erno Kuvaja (jokke) ** Also affects: osprofiler Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1587821 Title: Absolute formatting used in help texts Status in Glance: In Progress Status in osprofiler: New Bug description: Absolute formatting breaks the config-generator-generated example configs. A good example of what happens is here: https://review.openstack.org/#/c/323661/1/etc/glance-api.conf The cache-related options and the [profiler] section help texts get broken due to line lengths. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1587821/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587800] [NEW] There is a duplicated check in neutronclient and lbaas
Public bug reported: In lbaasv2, when I create a listener with no loadbalancer and no default_pool_id, there will be an error. But this check is duplicated in neutronclient and the plugin. Code in neutronclient: if not parsed_args.loadbalancer and not parsed_args.default_pool: message = _('Either --default-pool or --loadbalancer must be ' 'specified.') raise exceptions.CommandError(message) Code in the plugin: elif not lb_id: raise sharedpools.ListenerMustHaveLoadbalancer() ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1587800 Title: There is a duplicated check in neutronclient and lbaas Status in neutron: New Bug description: In lbaasv2, when I create a listener with no loadbalancer and no default_pool_id, there will be an error. But this check is duplicated in neutronclient and the plugin. Code in neutronclient: if not parsed_args.loadbalancer and not parsed_args.default_pool: message = _('Either --default-pool or --loadbalancer must be ' 'specified.') raise exceptions.CommandError(message) Code in the plugin: elif not lb_id: raise sharedpools.ListenerMustHaveLoadbalancer() To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1587800/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
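Both layers in the report above enforce the same constraint. A single-function sketch of the shared validation (illustrative names, not the actual neutronclient or plugin code):

```python
def validate_listener_args(loadbalancer_id, default_pool_id):
    # The constraint enforced by both the client and the plugin today:
    # a listener needs at least one of the two.
    if not loadbalancer_id and not default_pool_id:
        raise ValueError(
            'Either --default-pool or --loadbalancer must be specified.')
```

The client-side copy gives fast feedback before any API call, while the server-side copy stays authoritative for callers that bypass the CLI; the usual way to resolve such duplication is to keep the server-side check and treat the client check as optional UX, rather than having two sources of truth for the error message.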
[Yahoo-eng-team] [Bug 1456250] Re: [UI] launch job configuration preset names are incorrect
** Changed in: sahara Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1456250 Title: [UI] launch job configuration preset names are incorrect Status in OpenStack Dashboard (Horizon): Invalid Status in Sahara: Invalid Bug description: When we launch a job by a job_template, e.g., a hive job_template, go to "Configure" tab, and click "Add" under Configuration, then click the "Name" input box, you are supposed to get a default key list. However, you get a default value list. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1456250/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1587802] [NEW] libvirt resize down prevention is invalid when using rbd as backend
Public bug reported: When using ceph as the backend, an instance can be resized to a smaller flavor, and vm_state becomes ERROR in the end. nova/virt/libvirt/driver.py: @staticmethod def _is_booted_from_volume(instance, disk_mapping): """Determines whether the VM is booting from volume Determines whether the disk mapping indicates that the VM is booting from a volume. """ return ((not bool(instance.get('image_ref'))) or 'disk' not in disk_mapping) When using rbd as the backend, the function cannot find 'disk' in disk_mapping and treats the instance as booted_from_volume. ** Affects: nova Importance: Undecided Assignee: Yang Shengming (yang-shengming) Status: New ** Changed in: nova Assignee: (unassigned) => Yang Shengming (yang-shengming) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1587802 Title: libvirt resize down prevention is invalid when using rbd as backend Status in OpenStack Compute (nova): New Bug description: When using ceph as the backend, an instance can be resized to a smaller flavor, and vm_state becomes ERROR in the end. nova/virt/libvirt/driver.py: @staticmethod def _is_booted_from_volume(instance, disk_mapping): """Determines whether the VM is booting from volume Determines whether the disk mapping indicates that the VM is booting from a volume. """ return ((not bool(instance.get('image_ref'))) or 'disk' not in disk_mapping) When using rbd as the backend, the function cannot find 'disk' in disk_mapping and treats the instance as booted_from_volume. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1587802/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
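The predicate quoted in the report above can be reproduced standalone to show the misclassification: with an rbd backend the mapping has no plain 'disk' entry, so an image-booted instance passes the boot-from-volume test and the resize-down guard is skipped. The disk-mapping keys used here are illustrative, not nova's exact mapping:

```python
def is_booted_from_volume(image_ref, disk_mapping):
    # Same logic as the driver predicate quoted in the bug report.
    return (not bool(image_ref)) or 'disk' not in disk_mapping

# Local-file backend: a plain 'disk' entry exists, so the check behaves.
print(is_booted_from_volume('image-uuid', {'disk': {}, 'disk.config': {}}))  # False

# rbd-style backend (illustrative key): no plain 'disk' entry, wrongly True.
print(is_booted_from_volume('image-uuid', {'disk.rbd': {}}))                 # True
```

A fix would need to distinguish "no local disk because the root is a volume" from "no local disk because the image backend is remote", e.g. by consulting the block device mapping rather than key presence alone.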
[Yahoo-eng-team] [Bug 1587777] [NEW] Mitaka: dashboard performance
Public bug reported: Environment: Openstack Mitaka on top of Leap 42.1, 1 control node, 2 compute nodes, 3-node-Ceph-cluster. Issue: Since switching to Mitaka, we're experiencing severe delays when accessing the dashboard - i.e. switching between "Compute - Overview" and "Compute - Instances" takes 15+ seconds, even after multiple invocations. Steps to reproduce: 1. Install Openstack Mitaka, incl. dashboard & navigate through the dashboard. Expected result: Browsing through the dashboard with reasonable waiting times. Actual result: Refreshing the dashboard can take up to 30 secs, switching between views (e.g. volumes to instances) takes about 15 secs in average. Additional information: I've had a look at the requests, the Apache logs and our control node's stats and noticed that it's a single call that's taking all the time... I see no indications of any error, it seems that once WSGI is invoked, that call simply takes its time. Intermediate curl requests are logged, so I see it's doing its work. Looking at "vmstat" I can see that it's user space taking all the load (Apache / mod_wsgi drives its CPU to 100%, while other CPUs are idle - and no i/o wait, no system space etc.). 
---cut here---
control1:/var/log # top
top - 10:51:35 up 8 days, 18:16, 2 users, load average: 2,17, 1,65, 1,48
Tasks: 383 total, 2 running, 381 sleeping, 0 stopped, 0 zombie
%Cpu0 : 31,7 us, 2,9 sy, 0,0 ni, 65,0 id, 0,3 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu1 : 13,1 us, 0,7 sy, 0,0 ni, 86,2 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu2 : 17,2 us, 0,7 sy, 0,0 ni, 81,2 id, 1,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu3 : 69,4 us, 12,6 sy, 0,0 ni, 17,9 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu4 : 14,6 us, 1,0 sy, 0,0 ni, 84,4 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu5 : 16,9 us, 0,7 sy, 0,0 ni, 81,7 id, 0,7 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu6 : 17,3 us, 1,3 sy, 0,0 ni, 81,0 id, 0,3 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu7 : 21,2 us, 1,3 sy, 0,0 ni, 77,5 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
KiB Mem: 65943260 total, 62907676 used, 3035584 free, 1708 buffers
KiB Swap: 2103292 total, 0 used, 2103292 free. 53438560 cached Mem
  PID USER    PR NI VIRT    RES    SHR   S %CPU  %MEM  TIME+     COMMAND
 6776 wwwrun  20 0 565212  184504 13352 S 100,3 0,280 0:07.83   httpd-prefork
 1130 root    20 0 399456  35760  22508 S 5,980 0,054 818:13.17 X
 1558 sddm    20 0 922744  130440 72148 S 5,316 0,198 966:03.82 sddm-greeter
20999 nova    20 0 285888  116292 5696  S 2,658 0,176 164:27.08 nova-conductor
21030 nova    20 0 758752  182644 16512 S 2,658 0,277 58:20.40  nova-api
18757 heat    20 0 273912  73740  4612  S 2,326 0,112 50:48.72  heat-engine
18759 heat    20 0 273912  73688  4612  S 2,326 0,112 4:27.54   heat-engine
20995 nova    20 0 286236  116644 5696  S 2,326 0,177 164:38.89 nova-conductor
21027 nova    20 0 756204  180752 16980 S 2,326 0,274 58:20.09  nova-api
21029 nova    20 0 756536  180644 16496 S 2,326 0,274 139:46.29 nova-api
21031 nova    20 0 756888  180920 16512 S 2,326 0,274 58:36.37  nova-api
24771 glance  20 0 2312152 139000 17360 S 2,326 0,211 24:47.83  glance-api
24772 glance  20 0 631672  111248 4848  S 2,326 0,169 22:59.77  glance-api
28424 cinder  20 0 720972  108536 4968  S 2,326 0,165 28:31.42  cinder-api
28758 neutron 20 0 317708  101812 4472  S 2,326 0,154 153:45.55 neutron-server
# 
control1:/var/log # vmstat 1
procs ---memory-- ---swap-- -io -system-- --cpu-
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 1 0 0 2253144 1708 5344047200 46044 11 1 88 0 0
 0 0 0 2255588 1708 5344047600 0 568 3063 7627 15 1 83 0 0
 1 0 0 2247596 1708 5344047600 0 144 3066 6803 14 2 83 0 0
 1 0 0 2156008 1708 5344047600 072 3474 7193 25 3 72 0 0
 2 0 0 2131968 1708 5344048400 0 652 3497 8565 28 2 70 0 0
 3 1 0 2134000 1708 5344051200 0 14340 3629 10644 25 2 71 2 0
 2 0 0 2136956 1708 5344058000 012 3483 10620 25 2 70 3 0
 9 1 0 2138164 1708 5344059600 0 248 3442 9980 27 1 72 0 0
 4 0 0 2105160 1708 5344062800 0 428 3617 22791 27 2 70 1 0
 3 0 0 2093416 1708 5344064400 0 1216 3502 7917 25 2 72 1 0
 3 0 0 2096344 1708 5344063600 072 3555 9216 25 2 73 0 0
 6 0 0 2073564 1708 5344063600 072 3587 11160 28 2 70 0 0
 2 0 0 2070236 1708 5344063600 0 432 3854 8160 26 4 70 0 0
 1 0 0 2103628 1708 5344064000 076 3407 7492 25 3 73 0 0
 3 0 0 2100320 1708 5344063600 0 1384 3383 7955 24 2 73 1 0
 3 0 0 2100648
[Yahoo-eng-team] [Bug 1587780] [NEW] power_state of nova diagnostics is number instead of string
Public bug reported: taget@taget-ThinkStation-P300:~/devstack$ nova diagnostics 87c91515-acc7-4953-b0a3-f942484e986e ERROR (Conflict): Cannot 'get_diagnostics' instance 87c91515-acc7-4953-b0a3-f942484e986e while it is in power_state 4 (HTTP 409) (Request-ID: req-caaf21fc-fa11-4382-9f87-c23928b46eb1) We need to map instance.power_state to string by STATE_MAP = { NOSTATE: 'pending', RUNNING: 'running', PAUSED: 'paused', SHUTDOWN: 'shutdown', CRASHED: 'crashed', SUSPENDED: 'suspended', } ** Affects: nova Importance: Undecided Assignee: Eli Qiao (taget-9) Status: New ** Tags: api ** Changed in: nova Assignee: (unassigned) => Eli Qiao (taget-9) ** Tags added: api -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1587780 Title: power_state of nova diagnostics is number instead of string Status in OpenStack Compute (nova): New Bug description: taget@taget-ThinkStation-P300:~/devstack$ nova diagnostics 87c91515-acc7-4953-b0a3-f942484e986e ERROR (Conflict): Cannot 'get_diagnostics' instance 87c91515-acc7-4953-b0a3-f942484e986e while it is in power_state 4 (HTTP 409) (Request-ID: req-caaf21fc-fa11-4382-9f87-c23928b46eb1) We need to map instance.power_state to string by STATE_MAP = { NOSTATE: 'pending', RUNNING: 'running', PAUSED: 'paused', SHUTDOWN: 'shutdown', CRASHED: 'crashed', SUSPENDED: 'suspended', } To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1587780/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
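The mapping suggested in the report above is straightforward to apply before formatting the error message. The numeric codes below follow nova's conventional power-state values but are assumptions here, not taken from the report:

```python
# Assumed numeric codes (nova.compute.power_state conventionally uses these).
NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

STATE_MAP = {
    NOSTATE: 'pending',
    RUNNING: 'running',
    PAUSED: 'paused',
    SHUTDOWN: 'shutdown',
    CRASHED: 'crashed',
    SUSPENDED: 'suspended',
}

def power_state_name(state):
    """Translate a numeric power state into a readable name for messages."""
    return STATE_MAP.get(state, 'unknown')

# The conflict message would then read "... while it is in power_state
# shutdown" instead of "... while it is in power_state 4".
print(power_state_name(SHUTDOWN))
```

An unknown code falls back to 'unknown' rather than raising, since the message is purely informational.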
[Yahoo-eng-team] [Bug 1585537] Re: Healthmonitor is not deleted from DB when its pool is deleted
Changed to Invalid due to duplication of https://bugs.launchpad.net/neutron/+bug/1571097 ** Changed in: neutron Assignee: Evgeny Fedoruk (evgenyf) => (unassigned) ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1585537 Title: Healthmonitor is not deleted from DB when its pool is deleted Status in neutron: Invalid Bug description: When LBaaS pool having a health monitor is deleted, its health monitor remains in DB. Recreate: 1. Create LB, Listener, Pool with HM. 2. Delete Pool 3. See HM is still in lbaas_healthmonitors table in DB To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1585537/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1573682] Re: [Pluggable IPAM] On request retry 'external_gateway_info' field got missed for router update case
*** This bug is a duplicate of bug 1584920 *** https://bugs.launchpad.net/bugs/1584920 ** This bug has been marked a duplicate of bug 1584920 ExternalGatewayForFloatingIPNotFound exception raised in gate-tempest-dsvm-neutron-full -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1573682 Title: [Pluggable IPAM] On request retry 'external_gateway_info' field got missed for router update case Status in neutron: In Progress Bug description: Issue observed on current master (should be reproducible on mitaka too). Steps to reproduce: 1) enable pluggable ipam by setting the default value for 'ipam_driver' to 'internal' in a patch set, upload into gerrit 2) validate test results for gate-tempest-dsvm-neutron-linuxbridge 3) observe that some router update related tests will fail 4) analyse screen-q-svc.txt.gz request processing to validate that after raising RetryRequest the 'external_gateway_info' field is not present. During the retried update router request the 'external_gateway_info' field gets lost, so the update does not work correctly on retry in this case. Before retry: {u'router': {u'external_gateway_info': {u'network_id': u'e0e7e739-d64f-4eb8-b4ec-bb3ef9623ab9'}, u'name': u'tempest-router--1366126092', u'admin_state_up': False}} After retry: {u'router': {u'name': u'tempest-router--1366126092', u'admin_state_up': False}} This issue is observed only in scale testing where concurrent IP allocation happens for the same subnet. Due to concurrent data modifications one of the transactions is restarted by a RetryRequest exception (compare-and-swap synchronization). And for the update router case restarting the transaction leads to the 'external_gateway_info' field missing from the original input. 
Related log output: 2016-04-22 12:26:35.545 18234 DEBUG neutron.api.v2.base [req-efd63a02-c382-4268-8dc9-a7708a0fc205 tempest-RoutersIpV6Test-2037938894 -] Request body: {u'router': {u'external_gateway_info': {u'network_id': u'e0e7e739-d64f-4eb8-b4ec-bb3ef9623ab9'}, u'name': u'tempest-router--1366126092', u'admin_state_up': False}} prepare_request_body /opt/stack/new/neutron/neutron/api/v2/base.py:656 ... 2016-04-22 12:26:36.873 18234 DEBUG oslo_db.api [req-efd63a02-c382-4268-8dc9-a7708a0fc205 tempest-RoutersIpV6Test-2037938894 -] Performing DB retry for function neutron.api.v2.base._update wrapper /usr/local/lib/python2.7/dist-packages/oslo_db/api.py:150 2016-04-22 12:26:36.874 18234 DEBUG neutron.api.v2.base [req-efd63a02-c382-4268-8dc9-a7708a0fc205 tempest-RoutersIpV6Test-2037938894 -] Request body: {u'router': {u'name': u'tempest-router--1366126092', u'admin_state_up': False}} prepare_request_body /opt/stack/new/neutron/neutron/api/v2/base.py:656 Full log available at [1]. Trace req-efd63a02-c382-4268-8dc9-a7708a0fc205 request processing. And [2] is failed test related to that issue. [1] http://logs.openstack.org/23/181023/71/check/gate-tempest-dsvm-neutron-linuxbridge/a475a39/logs/screen-q-svc.txt.gz#_2016-04-22_12_26_35_545 [2] http://logs.openstack.org/23/181023/71/check/gate-tempest-dsvm-neutron-linuxbridge/a475a39/console.html#_2016-04-22_12_57_48_092 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1573682/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
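The failure mode shown in the log above, a retried handler seeing a body the first attempt already consumed, can be sketched standalone: if the handler pops 'external_gateway_info' out of the caller's dict, the retry sees the reduced body, while deep-copying per attempt preserves it. All names here are illustrative, not neutron's actual retry machinery:

```python
import copy

class RetryRequest(Exception):
    """Stand-in for the DB-retry exception mentioned in the log."""

def make_update_router():
    attempts = {'count': 0}
    def update_router(body):
        # Work on a copy so a retried call still sees the caller's full body.
        body = copy.deepcopy(body)
        gw = body['router'].pop('external_gateway_info', None)
        attempts['count'] += 1
        if attempts['count'] == 1:
            raise RetryRequest()  # simulate the compare-and-swap restart
        return gw
    return update_router

request = {'router': {'external_gateway_info': {'network_id': 'net-1'},
                      'name': 'r1', 'admin_state_up': False}}
update = make_update_router()
try:
    update(request)
except RetryRequest:
    pass
print(update(request))  # {'network_id': 'net-1'} -- survives the retry
```

Without the deepcopy, the first attempt's pop would mutate the shared request dict and the retry would see exactly the stripped body shown in the "After retry" log line.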
[Yahoo-eng-team] [Bug 1573683] Re: [Pluggable IPAM] On request retry 'external_gateway_info' field got missed for router update case
*** This bug is a duplicate of bug 1584920 ***
    https://bugs.launchpad.net/bugs/1584920

** This bug is no longer a duplicate of bug 1573682
   [Pluggable IPAM] On request retry 'external_gateway_info' field got missed for router update case
** This bug has been marked a duplicate of bug 1584920
   ExternalGatewayForFloatingIPNotFound exception raised in gate-tempest-dsvm-neutron-full

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573683

Title:
  [Pluggable IPAM] On request retry 'external_gateway_info' field got missed for router update case

Status in neutron: New

Bug description:

When a router update request is retried, the 'external_gateway_info' field is dropped from the request body, so the update does not work correctly on retry.

Before retry:
{u'router': {u'external_gateway_info': {u'network_id': u'e0e7e739-d64f-4eb8-b4ec-bb3ef9623ab9'}, u'name': u'tempest-router--1366126092', u'admin_state_up': False}}

After retry:
{u'router': {u'name': u'tempest-router--1366126092', u'admin_state_up': False}}

Related log output:

2016-04-22 12:26:35.545 18234 DEBUG neutron.api.v2.base [req-efd63a02-c382-4268-8dc9-a7708a0fc205 tempest-RoutersIpV6Test-2037938894 -] Request body: {u'router': {u'external_gateway_info': {u'network_id': u'e0e7e739-d64f-4eb8-b4ec-bb3ef9623ab9'}, u'name': u'tempest-router--1366126092', u'admin_state_up': False}} prepare_request_body /opt/stack/new/neutron/neutron/api/v2/base.py:656
...
2016-04-22 12:26:36.873 18234 DEBUG oslo_db.api [req-efd63a02-c382-4268-8dc9-a7708a0fc205 tempest-RoutersIpV6Test-2037938894 -] Performing DB retry for function neutron.api.v2.base._update wrapper /usr/local/lib/python2.7/dist-packages/oslo_db/api.py:150
2016-04-22 12:26:36.874 18234 DEBUG neutron.api.v2.base [req-efd63a02-c382-4268-8dc9-a7708a0fc205 tempest-RoutersIpV6Test-2037938894 -] Request body: {u'router': {u'name': u'tempest-router--1366126092', u'admin_state_up': False}} prepare_request_body /opt/stack/new/neutron/neutron/api/v2/base.py:656

The full log is available at [1]; trace the processing of request req-efd63a02-c382-4268-8dc9-a7708a0fc205 there. [2] is the failed test related to this issue.

[1] http://logs.openstack.org/23/181023/71/check/gate-tempest-dsvm-neutron-linuxbridge/a475a39/logs/screen-q-svc.txt.gz#_2016-04-22_12_26_35_545
[2] http://logs.openstack.org/23/181023/71/check/gate-tempest-dsvm-neutron-linuxbridge/a475a39/console.html#_2016-04-22_12_57_48_092

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573683/+subscriptions
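The log above shows the request body shrinking between the first attempt and the DB retry. One plausible mechanism for this class of bug (a minimal sketch, not Neutron's actual code: the decorator, handler, and names below are hypothetical) is a handler that mutates the request dict in place, so the retry wrapper re-invokes it with an already-stripped body:

```python
# Sketch of how a DB-retry wrapper can "lose" request fields: the handler
# pops 'external_gateway_info' out of the shared dict, so when the retry
# decorator calls it again with the SAME object, the field is already gone.
import copy

def retry_on_db_error(func):
    """Naive stand-in for a DB-retry decorator (hypothetical)."""
    def wrapper(body):
        try:
            return func(body)
        except RuntimeError:       # stand-in for a transient DB error
            return func(body)      # retried with the already-mutated body
    return wrapper

attempts = []  # snapshot of the router body seen by each attempt

@retry_on_db_error
def update_router(body):
    router = body['router']
    attempts.append(copy.deepcopy(router))
    # In-place mutation: the gateway info is removed to be handled separately.
    gw_info = router.pop('external_gateway_info', None)
    if len(attempts) == 1:
        raise RuntimeError('simulated DB retry')
    return router, gw_info

update_router({'router': {'external_gateway_info': {'network_id': 'e0e7e739'},
                          'name': 'tempest-router',
                          'admin_state_up': False}})

print(attempts[0])  # first attempt still carries external_gateway_info
print(attempts[1])  # the retry sees the reduced body, matching the bug's log
```

Passing `copy.deepcopy(body)` into the wrapped function on each attempt would avoid the loss, at the cost of copying the request per retry.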
[Yahoo-eng-team] [Bug 1575134] Re: Same physnet has the same mac address
** Changed in: neutron
   Status: Incomplete => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1575134

Title:
  Same physnet has the same mac address

Status in neutron: Invalid

Bug description:

neutron net-show d973799a-c900-47ea-a369-a5610b43370c
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| bandwidth                 | 0                                    |
| id                        | d973799a-c900-47ea-a369-a5610b43370c |
| mtu                       | 1500                                 |
| name                      | test1                                |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 83                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 9c08bbe9af9c4a1ca3bbfb7f660b5909     |
| vlan_transparent          |                                      |
+---------------------------+--------------------------------------+
[root@tfg162 ~(keystone_admin)]# killall screen; killall python^C
[root@tfg162 ~(keystone_admin)]# neutron port-create test1 --name bandwidth --binding:vnic-type direct --mac-address 00:01:02:03:04:05
Created a new port:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| bandwidth           | 0                                    |
| binding:host_id     |                                      |
| binding:profile     | {}                                   |
| binding:vif_details | {}                                   |
| binding:vif_type    | unbound                              |
| binding:vnic_type   | direct                               |
| device_id           |                                      |
| device_owner        |                                      |
| fixed_ips           |                                      |
| id                  | 393fcfde-21db-44b7-967c-6d741432d4ab |
| mac_address         | 00:01:02:03:04:05                    |
| name                | bandwidth                            |
| network_id          | d973799a-c900-47ea-a369-a5610b43370c |
| status              | DOWN                                 |
| tenant_id           | 9c08bbe9af9c4a1ca3bbfb7f660b5909     |
+---------------------+--------------------------------------+
[root@tfg162 ~(keystone_admin)]# neutron port-create test2 --name bandwidth --binding:vnic-type direct --mac-address 00:01:02:03:04:05
Created a new port:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| bandwidth           | 0                                    |
| binding:host_id     |                                      |
| binding:profile     | {}                                   |
| binding:vif_details | {}                                   |
| binding:vif_type    | unbound                              |
| binding:vnic_type   | direct                               |
| device_id           |                                      |
| device_owner        |                                      |
| fixed_ips           |                                      |
| id                  | dfb28b9f-c713-4b95-b942-97c1a7ea8b7a |
| mac_address         | 00:01:02:03:04:05                    |
| name                | bandwidth                            |
| network_id          | 9d3f8b14-69d1-46b0-8636-a78bd912283e |
| status              | DOWN                                 |
| tenant_id           | 9c08bbe9af9c4a1ca3bbfb7f660b5909     |
+---------------------+--------------------------------------+

But SR-IOV NICs (such as the Intel 82599) do not support two VFs with the same MAC address, which leaves a VF unable to receive packets even though the two ports are on different networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1575134/+subscriptions
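The constraint the reporter describes (two direct/SR-IOV ports sharing one MAC on the same physical network, even across different Neutron networks) could in principle be guarded against at port-create time. The sketch below is purely hypothetical; it is not Neutron's actual API or behavior (the bug was closed as Invalid), and `create_port` and its in-memory `ports` list are illustrative stand-ins:

```python
# Hypothetical guard: reject a direct (SR-IOV) port whose MAC is already in
# use on the same physical network, since NICs such as the Intel 82599
# cannot program two VFs with the same MAC address.
ports = []  # in-memory stand-in for the port table

def create_port(mac, physnet, vnic_type='direct'):
    """Create a port, rejecting duplicate MACs per physnet for SR-IOV."""
    if vnic_type == 'direct':
        for existing in ports:
            if (existing['physnet'] == physnet
                    and existing['mac'].lower() == mac.lower()):
                raise ValueError(
                    'MAC %s already in use on physnet %s' % (mac, physnet))
    port = {'mac': mac, 'physnet': physnet, 'vnic_type': vnic_type}
    ports.append(port)
    return port

create_port('00:01:02:03:04:05', 'physnet1')      # first port: accepted
try:
    create_port('00:01:02:03:04:05', 'physnet1')  # same physnet: rejected
except ValueError as exc:
    print(exc)
create_port('00:01:02:03:04:05', 'physnet2')      # other physnet: accepted
```

Keying the check on the physical network rather than the Neutron network is the point: the two ports in the report belong to different networks, yet their VFs would land on the same physical NIC.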