[Yahoo-eng-team] [Bug 1473909] [NEW] Error message during nova delete (ESXi-based devstack setup using the OVSvApp solution)

2015-07-13 Thread Mh Raies
Public bug reported:

I am trying
https://github.com/openstack/networking-vsphere/tree/master/devstack for
the OVSvApp solution, which consists of 3 DVSes:
1. Trunk DVS
2. Management DVS
3. Uplink DVS

I am using an ESXi-based devstack setup with vCenter, and I am working
with stable/kilo.

I can successfully boot an instance with nova boot.

When I delete the same instance with nova delete, the API request
succeeds and the VM is eventually deleted, but after a long delay. In
the meantime the following error occurs:



2015-07-13 21:53:44.193 ERROR nova.network.base_api [req-760e73b5-9815-441d-931e-c0a57f8d32f3 None None] [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95] Failed storing info cache
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95] Traceback (most recent call last):
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/opt/stack/nova/nova/network/base_api.py", line 49, in update_instance_cache_with_nw_info
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     ic.save(update_cells=update_cells)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/opt/stack/nova/nova/objects/base.py", line 192, in wrapper
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     self._context, self, fn.__name__, args, kwargs)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/opt/stack/nova/nova/conductor/rpcapi.py", line 340, in object_action
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     objmethod=objmethod, args=args, kwargs=kwargs)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 156, in call
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     retry=self.retry)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     timeout=timeout, retry=retry)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 350, in send
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     retry=retry)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 341, in _send
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     raise result
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95] InstanceInfoCacheNotFound_Remote: Info cache for instance 06a3de55-285d-4d0d-953e-7f99aed28e95 could not be found.
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95] Traceback (most recent call last):
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/opt/stack/nova/nova/conductor/manager.py", line 422, in _object_dispatch
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     return getattr(target, method)(*args, **kwargs)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/opt/stack/nova/nova/objects/base.py", line 208, in wrapper
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     return fn(self, *args, **kwargs)
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/opt/stack/nova/nova/objects/instance_info_cache.py", line 95, in save
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]     {'network_info': nw_info_json})
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]
2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 06a3de55-285d-4d0d-953e-7f99aed28e95]   File "/opt/stack/nova/nova/db/api.py", line 888, in 

[Yahoo-eng-team] [Bug 1007038] Re: Nova is issuing unnecessary ROLLBACK statements to MySQL

2015-07-13 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1007038

Title:
  Nova is issuing unnecessary ROLLBACK statements to MySQL

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.db:
  Triaged

Bug description:
  I'm not sure exactly where this is coming from yet, but Nova is
  issuing a ROLLBACK to MySQL after nearly every SELECT statement, even
  though I think the connection should be in autocommit mode. This is
  unnecessary and wastes time (a network round trip) and resources
  (database CPU cycles).

  I suspect this is generated by SQLAlchemy whenever a connection is
  handed back to the pool, since the number of rollbacks roughly
  coincides with the number of "SELECT 1" statements that I see in the
  logs. Those are issued by the MySQLPingListener when a connection is
  taken out of the pool.

  I already opened a bug for the unnecessary "SELECT 1" statements, but
  I'm opening this as a separate bug. If someone finds a way to fix both
  at once, that'd be great.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1007038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473978] [NEW] static url path does not work consistently

2015-07-13 Thread Eric Peterson
Public bug reported:

When the static URL path is something like
/static
and the dashboard path is something like
/horizon
or
/dashboard

then the old launch instance screen works fine and the other JS
libraries are also fine.

When I configure horizon to use the new launch instance screen (and
disable the old one), several popup menus - such as the user menu and
the general navigation on the left - are broken. Horizon is pretty much
100% useless at this point.

Does the code now expect the static path to be within the dashboard
path? Where is this documented / described?
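For reference, the configuration combination being described can be
sketched as follows. This is a hypothetical local_settings.py fragment,
not a recommended layout, and the comment about asset resolution is one
reading of the symptom, not confirmed behavior.

```python
# Hypothetical Horizon local_settings.py fragment matching the report.
WEBROOT = '/dashboard/'   # the dashboard path ("/horizon" in the other example)
STATIC_URL = '/static/'   # static files served OUTSIDE the webroot

# Old launch-instance workflow: works with this combination.
# New (Angular) launch-instance workflow enabled: menus and navigation
# break, as if some assets were resolved relative to WEBROOT instead of
# STATIC_URL.
```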

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon-core ops

** Tags added: horizon-core

** Tags added: ops

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473978

Title:
  static url path does not work consistently

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the static URL path is something like
  /static
  and the dashboard path is something like
  /horizon
  or
  /dashboard

  then the old launch instance screen works fine and the other JS
  libraries are also fine.

  When I configure horizon to use the new launch instance screen (and
  disable the old one), several popup menus - such as the user menu and
  the general navigation on the left - are broken. Horizon is pretty
  much 100% useless at this point.

  Does the code now expect the static path to be within the dashboard
  path? Where is this documented / described?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1473978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472900] Re: instance boot from image (creates a new volume) deploy failed when volume is rescheduled to other backends

2015-07-13 Thread Bin Zhou @ZTE
Dear jichenjc:
   I've done it, thanks.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472900

Title:
  instance boot from image (creates a new volume) deploy failed when
  volume is rescheduled to other backends

Status in Cinder:
  New

Bug description:
  This bug happens in the Icehouse and Kilo versions of OpenStack.
  I launched an instance on the web UI via "boot from image (creates a
  new volume)"; it failed and raised "Invalid volume", yet when I
  checked cinder list I found the volume had been rescheduled and
  created successfully.
  Reviewing the cinder volume code, I found that when volume creation
  fails on one backend, the volume-create workflow reverts: it sets the
  volume status to "creating" and reschedules, then sets the status to
  "error"; the volume is rescheduled to another backend and the status
  then moves through "downloading" to "available".
  While launching an instance, nova waits on the volume status in
  _await_block_device_map_created, which returns when the status is
  "available" or "error". When a reschedule happens, it returns with the
  volume in the "error" state, and check_attach then raises "Invalid
  volume" when the volume is attached.
  I suggest that while a volume is being rescheduled its status be set
  to "rescheduling" instead of "error" in the workflow revert, which
  would make the volume status precise for other components.
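  The suggestion above can be sketched with a minimal wait loop. The
  function name, the "rescheduling" status, and the polling shape are
  all illustrative stand-ins, not nova's real
  _await_block_device_map_created.

```python
# A minimal sketch of the proposal: if the volume reported a distinct
# 'rescheduling' status instead of bouncing through 'error', a
# nova-side wait loop could keep polling through the reschedule.
# All names here are illustrative, not nova's or cinder's real API.
def await_volume(poll, max_polls=10):
    """poll() returns the current volume status string."""
    in_progress = {'creating', 'downloading', 'rescheduling'}
    for _ in range(max_polls):
        status = poll()
        if status == 'available':
            return status
        if status == 'error':
            raise RuntimeError('volume entered error state')
        assert status in in_progress, 'unexpected status: %s' % status
    raise RuntimeError('timed out waiting for volume')

# Simulated reschedule: the first backend fails, the volume is
# rescheduled, and the second backend succeeds.
statuses = iter(['creating', 'rescheduling', 'creating',
                 'downloading', 'available'])
result = await_volume(lambda: next(statuses))
assert result == 'available'
```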

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1472900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473965] [NEW] the port of a security group rule for TCP or UDP should not be 0

2015-07-13 Thread shihanzhang
Public bug reported:

For TCP and UDP, port 0 is reserved. But for a neutron security group
rule with protocol TCP, if port-range-min is 0 the port-range-max
becomes meaningless, because a port-range-min of 0 means all packets
are allowed to pass. So I think creating a rule with port-range-min 0
should be rejected; if a user wants to allow all TCP/UDP packets, they
can create a security group rule with port-range-min and port-range-max
both set to None.
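The proposed rule can be sketched as a small validation function; the
function name and exact semantics are illustrative, not neutron's
actual API.

```python
# Hypothetical validation sketch of the rule proposed above: reject 0
# as a TCP/UDP port boundary, and treat (None, None) as "all ports".
def validate_port_range(protocol, port_min, port_max):
    if protocol not in ('tcp', 'udp'):
        return True              # port ranges only apply to TCP/UDP here
    if port_min is None and port_max is None:
        return True              # no range given: all ports allowed
    if port_min is None or port_max is None:
        return False             # a half-open range is ambiguous
    if port_min == 0 or port_max == 0:
        return False             # 0 is reserved; use None/None for "all"
    return 1 <= port_min <= port_max <= 65535

assert validate_port_range('tcp', None, None)      # all TCP traffic
assert validate_port_range('tcp', 80, 80)
assert not validate_port_range('tcp', 0, 65535)    # rejected: port 0
assert not validate_port_range('udp', 0, 0)
```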

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473965

Title:
  the port of a security group rule for TCP or UDP should not be 0

Status in neutron:
  New

Bug description:
  For TCP and UDP, port 0 is reserved. But for a neutron security group
  rule with protocol TCP, if port-range-min is 0 the port-range-max
  becomes meaningless, because a port-range-min of 0 means all packets
  are allowed to pass. So I think creating a rule with port-range-min 0
  should be rejected; if a user wants to allow all TCP/UDP packets,
  they can create a security group rule with port-range-min and
  port-range-max both set to None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473949] [NEW] gate-nova-python34 sometimes fails on test_save_updates_numa_topology

2015-07-13 Thread lyanchih
Public bug reported:

After I submitted a review, gate-nova-python34 FAILED; see the following log file:
http://logs.openstack.org/19/201019/1/check/gate-nova-python34/1e74b65/console.html

The assertion messages are:
AssertionError: Expected call: instance_extra_update_by_uuid(<nova.context.RequestContext object at 0x7fb95f499dd8>, 'fake-uuid', {'numa_topology': '{"nova_object.version": "1.1", "nova_object.name": "InstanceNUMATopology", "nova_object.changes": ["cells", "instance_uuid"], "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "id", "cpuset"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 0, "cpuset": [0], "memory": 128}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "id", "cpuset"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 1, "cpuset": [1], "memory": 128}, "nova_object.namespace": "nova"}], "instance_uuid": "fake-uuid"}, "nova_object.namespace": "nova"}'})
2015-07-13 07:28:22.759 | Actual call: instance_extra_update_by_uuid(<nova.context.RequestContext object at 0x7fb95f499dd8>, 'fake-uuid', {'numa_topology': '{"nova_object.version": "1.1", "nova_object.name": "InstanceNUMATopology", "nova_object.changes": ["cells", "instance_uuid"], "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "cpuset", "id"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 0, "cpuset": [0], "memory": 128}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "cpuset", "id"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 1, "cpuset": [1], "memory": 128}, "nova_object.namespace": "nova"}], "instance_uuid": "fake-uuid"}, "nova_object.namespace": "nova"}'})

You can notice that the difference between these two values is the
nova_object.changes list in the cell objects: they contain the same
elements in a different order.

This is because the order of _changed_fields is not always the same, so
the two serialized values differ in order. The changed fields are
collected in a set before being returned; in CPython 2.7 the iteration
order of that set happened to be deterministic, but sets are unordered
by contract, and under Python 3.4 (with hash randomization) the order
varies between runs.
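The failure mode is easy to demonstrate: the iteration order of a
Python set is not guaranteed, so any code that serializes a set to a
list gets an order the language does not promise. Sorting before
serialization (or comparing order-insensitively in the test) fixes it:

```python
# Demonstrating the failure mode described above: converting a set to
# a list yields an arbitrary order, which makes serialized output (and
# mock call-argument comparisons) nondeterministic. Sorting first
# makes the result deterministic.
changed_fields = {'memory', 'id', 'cpuset'}

unstable = list(changed_fields)   # order may differ across runs/versions
stable = sorted(changed_fields)   # deterministic

assert set(unstable) == changed_fields   # same elements either way
assert stable == ['cpuset', 'id', 'memory']
```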

** Affects: nova
 Importance: Undecided
 Assignee: lyanchih (lyanchih)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => lyanchih (lyanchih)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473949

Title:
  gate-nova-python34 sometimes fails on
  test_save_updates_numa_topology

Status in OpenStack Compute (nova):
  New

Bug description:
  After I commit review, I got gate-nova-python34 FAILURE in following log file
  
http://logs.openstack.org/19/201019/1/check/gate-nova-python34/1e74b65/console.html

  The assertion messages are:
  AssertionError: Expected call: instance_extra_update_by_uuid(<nova.context.RequestContext object at 0x7fb95f499dd8>, 'fake-uuid', {'numa_topology': '{"nova_object.version": "1.1", "nova_object.name": "InstanceNUMATopology", "nova_object.changes": ["cells", "instance_uuid"], "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "id", "cpuset"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 0, "cpuset": [0], "memory": 128}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "id", "cpuset"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 1, "cpuset": [1], "memory": 128}, "nova_object.namespace": "nova"}], "instance_uuid": "fake-uuid"}, "nova_object.namespace": "nova"}'})
  2015-07-13 07:28:22.759 | Actual call: instance_extra_update_by_uuid(<nova.context.RequestContext object at 0x7fb95f499dd8>, 'fake-uuid', {'numa_topology': '{"nova_object.version": "1.1", "nova_object.name": "InstanceNUMATopology", "nova_object.changes": ["cells", "instance_uuid"], "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "cpuset", "id"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 0, "cpuset": [0], "memory": 128}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.2", "nova_object.name": "InstanceNUMACell", "nova_object.changes": ["memory", "cpuset", "id"], "nova_object.data": {"pagesize": null, "cpu_pinning_raw": null, "cpu_topology": null, "id": 1, "cpuset": [1], "memory": 128}, "nova_object.namespace": "nova"}], "instance_uuid": "fake-uuid"}, "nova_object.namespace": "nova"}'})

  You can notice that the difference between these two values is the
  nova_object.changes list in the cell objects: they contain the same
  elements in a different order.

  This is because of the order of 

[Yahoo-eng-team] [Bug 1473900] [NEW] Radware LBaaS driver UnitTests failures after mock update

2015-07-13 Thread Evgeny Fedoruk
Public bug reported:

Radware LBaaS driver unit tests are failing after the mock module
update. Fix the assert_called_once() calls.

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: In Progress


** Tags: lbaas radware unittest

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473900

Title:
  Radware LBaaS driver UnitTests failures after mock update

Status in neutron:
  In Progress

Bug description:
  Radware LBaaS driver unit tests are failing after the mock module
  update. Fix the assert_called_once() calls.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474198] [NEW] task_state not NONE after instance boot failed

2015-07-13 Thread Eric Xie
Public bug reported:

1. Exact version of Nova:
python-novaclient-2.23.0
openstack-nova-common-2015.1.0
python-nova-2015.1.0
openstack-nova-api-2015.1.0
openstack-nova-scheduler-2015.1.0
openstack-nova-conductor-2015.1.0
openstack-nova-compute-2015.1.0
openstack-nova-2015.1.0

2. Relevant log files:
2015-07-14 11:15:07.559 19984 ERROR nova.compute.manager [req-8b567c49-850a-4f00-a73b-c2879528ef39 - - - - -] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Instance failed to spawn
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Traceback (most recent call last):
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2565, in _build_resources
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     yield resources
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2437, in _build_and_run_instance
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     block_device_info=block_device_info)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2385, in spawn
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     write_to_disk=True)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4232, in _get_guest_xml
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     context)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4103, in _get_guest_config
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     flavor, virt_type)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 374, in get_config
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     _("Unexpected vif_type=%s") % vif_type)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] NovaException: Unexpected vif_type=binding_failed
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] 
2015-07-14 11:15:07.565 19984 INFO nova.compute.manager [req-a32fae7b-2a26-4d44-ab89-e16db804a9f0 58e88aff70dd4959ba5293dab8f6ceac c45dae15962c4797b70f6c278a232f3c - - -] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Terminating instance
2015-07-14 11:15:07.572 19984 INFO nova.virt.libvirt.driver [-] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] During wait destroy, instance disappeared.

3. Reproduce steps:
* Stop neutron-openvswitch-agent on the compute node;
* Boot one instance

Expected result:
Task state of the instance should be None

Actual result:
Task state of the instance stays "spawning" forever
# nova list
+--------------------------------------+--------------------------------+--------+------------+-------------+----------+
| ID                                   | Name                           | Status | Task State | Power State | Networks |
+--------------------------------------+--------------------------------+--------+------------+-------------+----------+
| f0a16736-078a-4476-a56a-abee46fdc5f5 | instance_test_vif_binding_fail | ERROR  | spawning   | NOSTATE     |          |
+--------------------------------------+--------------------------------+--------+------------+-------------+----------+
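The fix the report implies can be sketched as a cleanup pattern: clear
task_state in a finally block so a failed spawn never leaves the
instance stuck in "spawning". All names here are illustrative
stand-ins, not nova's real code.

```python
# Sketch of the cleanup pattern this report implies is missing: when
# spawn fails, the error path should still clear task_state so the
# instance does not stay stuck in 'spawning'. Names are illustrative.
class FakeInstance(object):
    def __init__(self):
        self.vm_state = 'building'
        self.task_state = 'spawning'

def build(instance, spawn):
    try:
        spawn()
        instance.vm_state = 'active'
    except Exception:
        instance.vm_state = 'error'
    finally:
        instance.task_state = None   # always reset, success or failure

def failing_spawn():
    raise RuntimeError('Unexpected vif_type=binding_failed')

inst = FakeInstance()
build(inst, failing_spawn)
assert inst.vm_state == 'error'
assert inst.task_state is None   # not stuck at 'spawning'
```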

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: in-stable-kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474198

Title:
  task_state not NONE after instance boot failed

Status in OpenStack Compute (nova):
  New

Bug description:
  1. Exact version of Nova:
  python-novaclient-2.23.0
  openstack-nova-common-2015.1.0
  python-nova-2015.1.0
  openstack-nova-api-2015.1.0
  openstack-nova-scheduler-2015.1.0
  openstack-nova-conductor-2015.1.0
  openstack-nova-compute-2015.1.0
  openstack-nova-2015.1.0

  2. Relevant log files:
  2015-07-14 11:15:07.559 19984 ERROR nova.compute.manager 

[Yahoo-eng-team] [Bug 1293540] Re: nova should make sure the bridge exists before resuming a VM after an offline snapshot

2015-07-13 Thread Bjoern Teipel
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293540

Title:
  nova should make sure the bridge exists before resuming a VM after an
  offline snapshot

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress
Status in openstack-ansible:
  New

Bug description:
  My setup is based on icehouse-2, KVM, Neutron with ML2 and the linux
  bridge agent, CentOS 6.5, and LVM as the ephemeral backend.
  The OS should not matter here, and neither should LVM; just make sure
  the snapshot takes the VM offline.

  How to reproduce:
  1. create one VM on a compute node (make sure only one VM is present).
  2. snapshot the VM (offline).
  3. linux bridge removes the tap interface from the bridge and decides to 
remove the bridge also since there are no other interfaces present.
  4. nova tries to resume the VM and fails since no bridge is present (libvirt 
error, can't get the bridge MTU).

  Side question:
  Why do both neutron and nova deal with the bridge ?
  I can understand the need to remove empty bridges but I believe nova should 
be the one to do it if nova is dealing mainly with the bridge itself.

  More information:

  During the snapshot Neutron (linux bridge) is called:
  (neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent)
  treat_devices_removed is called and removes the tap interface and calls 
self.br_mgr.remove_empty_bridges

  On resume:
  nova/virt/libvirt/driver.py in the snapshot method fails at:
      if CONF.libvirt.virt_type != 'lxc' and not live_snapshot:
          if state == power_state.RUNNING:
              new_dom = self._create_domain(domain=virt_dom)

  Having more than one VM on the same bridge works fine since neutron
  (the linux bridge agent) only removes an empty bridge.
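A minimal sketch of the pre-resume check being requested, using sysfs
to detect the bridge. The helper names are hypothetical; real code
would shell out to brctl/ip (or call neutron) to recreate the bridge.

```python
# Hypothetical pre-resume check along the lines this report suggests:
# verify the Linux bridge still exists before asking libvirt to
# recreate the domain, and recreate it if neutron removed it.
import os

def bridge_exists(name):
    # Any network device, bridges included, appears under /sys/class/net.
    return os.path.isdir('/sys/class/net/%s' % name)

def ensure_bridge(name, create):
    """create() would run `brctl addbr`/`ip link` in real code."""
    if not bridge_exists(name):
        create()

created = []
ensure_bridge('brq-nonexistent-example', lambda: created.append(True))
assert created == [True]   # bridge was missing, so create() ran
```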

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445637] Re: Instance resource quota not observed for non-ephemeral storage

2015-07-13 Thread lyanchih
The cinder client now offers qos commands. Instance quota settings for
non-ephemeral disks should be set via the cinder CLI instead of being
inherited from the instance's flavor.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445637

Title:
  Instance resource quota not observed for non-ephemeral storage

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I'm using a nova built from stable/kilo and trying to implement
  instance IO resource quotas for disk as per
  https://wiki.openstack.org/wiki/InstanceResourceQuota#IO.

  While this works when building an instance from ephemeral storage, it
  does not when booting from a bootable cinder volume. I realize I can
  implement this using cinder quota but I want to apply the same
  settings in nova regardless of the underlying disk.

  Steps to produce:

  nova flavor-create iolimited 1 8192 64 4
  nova flavor-key 1 set quota:disk_read_iops_sec=1
  Boot an instance using the above flavor
  Guest XML is missing iotune entries

  Expected result:
  <snip>
    <target dev='vda' bus='virtio'/>
    <iotune>
      <read_iops_sec>1</read_iops_sec>
    </iotune>
  </snip>

  This relates somewhat to https://bugs.launchpad.net/nova/+bug/1405367
  but that case is purely hit when booting from RBD-backed ephemeral
  storage.

  Essentially, for non-ephemeral disks, a call is made to
  _get_volume_config() which creates a generic LibvirtConfigGuestDisk
  object but no further processing is done to add extra-specs (if any).

  I've essentially copied the disk_qos() method from the associated code
  review (https://review.openstack.org/#/c/143939/) to implement my own
  patch (attached).
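  The attached patch's idea can be sketched generically: scan the
  flavor's extra specs for quota:disk_* keys and apply them to the disk
  config regardless of the backing store. The plain dict below stands in
  for LibvirtConfigGuestDisk, and the key format follows the
  InstanceResourceQuota wiki page; this is a sketch, not the patch
  itself.

```python
# Sketch of the disk_qos() idea: pull quota:disk_* extra specs off the
# flavor and attach them as iotune settings, whether the disk is
# ephemeral or a cinder volume. The dict is a stand-in for nova's
# LibvirtConfigGuestDisk object.
def apply_disk_qos(extra_specs, disk_conf):
    for key, value in extra_specs.items():
        scope = key.split(':')
        if (len(scope) == 2 and scope[0] == 'quota'
                and scope[1].startswith('disk_')):
            # e.g. quota:disk_read_iops_sec -> disk_read_iops_sec
            disk_conf[scope[1]] = int(value)
    return disk_conf

# Matches the reproduction steps: quota:disk_read_iops_sec=1
conf = apply_disk_qos({'quota:disk_read_iops_sec': '1'}, {})
assert conf == {'disk_read_iops_sec': 1}
```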

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474019] [NEW] test for subscriptions - ignore

2015-07-13 Thread Salvatore Orlando
Public bug reported:

you know what? meh.

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474019

Title:
  test for subscriptions - ignore

Status in neutron:
  Invalid

Bug description:
  you know what? meh.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474177] [NEW] Wrong to use "./run_test.sh --karma" for the JavaScript code style check in the doc

2015-07-13 Thread Jason Pan
Public bug reported:

The chapter "The run_test.sh Script" in the Horizon doc is wrong to use
"./run_test.sh --karma" to test code style.
It should be "./run_test.sh --eslint".

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot from 2015-07-14 09:56:54.png"
   https://bugs.launchpad.net/bugs/1474177/+attachment/4428479/+files/Screenshot%20from%202015-07-14%2009%3A56%3A54.png

** Summary changed:

- Not use ./run_test.sh --karma to test javascript code sytle check in doc
+ Wrong to use ./run_test.sh --karma to test javascript code sytle check in doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474177

Title:
  Wrong to use "./run_test.sh --karma" for the JavaScript code style
  check in the doc

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The chapter "The run_test.sh Script" in the Horizon doc is wrong to
  use "./run_test.sh --karma" to test code style.
  It should be "./run_test.sh --eslint".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474074] [NEW] PciDeviceList is not versioned properly in liberty and kilo

2015-07-13 Thread Nikola Đipanov
Public bug reported:

The following commit:

https://review.openstack.org/#/c/140289/4/nova/objects/pci_device.py

failed to bump the PciDeviceList version.

We should do it now (master @ 4bfb094) and backport it to stable Kilo
as well.

** Affects: nova
 Importance: High
 Status: Confirmed

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474074

Title:
  PciDeviceList is not versioned properly in liberty and kilo

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The following commit:

  https://review.openstack.org/#/c/140289/4/nova/objects/pci_device.py

  failed to bump the PciDeviceList version.

  We should do it now (master @ 4bfb094) and backport it to stable
  Kilo as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460652] Re: nova-conductor infinitely reconnects to rabbit

2015-07-13 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.messaging
   Status: Fix Committed => Fix Released

** Changed in: oslo.messaging
Milestone: None => 1.17.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460652

Title:
  nova-conductor  infinitely reconnects to rabbit

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Fix Released

Bug description:
  1. Exact version of Nova 
  ii  nova-api
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - API frontend
  ii  nova-cert   
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - certificate management
  ii  nova-common 
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - common files
  ii  nova-conductor  
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - conductor service
  ii  nova-console
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - Console
  ii  nova-consoleauth
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - Console Authenticator
  ii  nova-novncproxy 
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - NoVNC proxy
  ii  nova-scheduler  
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - virtual machine scheduler
  ii  python-nova 
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute Python libraries
  ii  python-novaclient   
1:2.17.0.74.g2598714+git201404220131~trusty-0ubuntu1 all  client 
library for OpenStack Compute API

  rabbit configuration in nova.conf:

rabbit_hosts = m610-2:5672, m610-1:5672
rabbit_ha_queues =  true

  
  2. Relevant log files:
  /var/log/nova/nova-conductor.log

   exchange 'reply_bea18a6133c548f099b85b168fddf83c' in vhost '/'
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
Traceback (most recent call last):
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 624, in ensure
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
return method(*args, **kwargs)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 729, in _publish
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
publisher = cls(self.conf, self.channel, topic, **kwargs)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 361, in __init__
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
type='direct', **options)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 326, in __init__
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.reconnect(channel)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 334, in reconnect
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
routing_key=self.routing_key)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 82, in __init__
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.revive(self._channel)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 216, in revive
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.declare()
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 102, in declare
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.exchange.declare()
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/entity.py, line 166, in declare
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
nowait=nowait, passive=passive,
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/amqp/channel.py, line 612, in 
exchange_declare
  2015-06-01 

[Yahoo-eng-team] [Bug 1474079] [NEW] Cross-site web socket connections fail on Origin and Host header mismatch

2015-07-13 Thread Mike Dorman
Public bug reported:

The Kilo web socket proxy implementation for Nova consoles added an
Origin header validation to ensure the Origin hostname matches the
hostname from the Host header.  This was a result of the following XSS
security bug:  https://bugs.launchpad.net/nova/+bug/1409142
(CVE-2015-0259)

In other words, this requires that the web UI being used (Horizon, or
whatever) have a URL hostname which is the same as the hostname by
which the console proxy is accessed.  This is a safe assumption for
Horizon.  However, we have a use case where our (custom) UI runs at a
different URL than does the console proxies, and thus we need to allow
cross-site web socket connections.  The patch for 1409142
(https://github.secureserver.net/cloudplatform/els-
nova/commit/fdb73a2d445971c6158a80692c6f74094fd4193a) breaks this
functionality for us.

Would like to have some way to enable controlled XSS web socket
connections to the console proxy services, maybe via a nova config
parameter providing a list of allowed origin hosts?
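
A hedged sketch of the requested behaviour; `allowed_origins` is a
hypothetical config option, not an existing nova parameter. The
same-origin branch is the current Kilo behaviour; the allowlist branch is
the proposed controlled cross-site extension.

```python
# Sketch only: validate the Origin header against the Host header first
# (existing behaviour), then fall back to a configurable allowlist.
from urllib.parse import urlparse


def origin_allowed(origin_header, host_header, allowed_origins=()):
    origin_host = urlparse(origin_header).hostname
    host = host_header.split(":")[0]
    if origin_host == host:
        return True                        # same-origin: current check
    return origin_host in allowed_origins  # proposed cross-site allowlist
```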

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474079

Title:
  Cross-site web socket connections fail on Origin and Host header
  mismatch

Status in OpenStack Compute (nova):
  New

Bug description:
  The Kilo web socket proxy implementation for Nova consoles added an
  Origin header validation to ensure the Origin hostname matches the
  hostname from the Host header.  This was a result of the following XSS
  security bug:  https://bugs.launchpad.net/nova/+bug/1409142
  (CVE-2015-0259)

  In other words, this requires that the web UI being used (Horizon, or
  whatever) have a URL hostname which is the same as the hostname by
  which the console proxy is accessed.  This is a safe assumption for
  Horizon.  However, we have a use case where our (custom) UI runs at a
  different URL than does the console proxies, and thus we need to allow
  cross-site web socket connections.  The patch for 1409142
  (https://github.secureserver.net/cloudplatform/els-
  nova/commit/fdb73a2d445971c6158a80692c6f74094fd4193a) breaks this
  functionality for us.

  Would like to have some way to enable controlled XSS web socket
  connections to the console proxy services, maybe via a nova config
  parameter providing a list of allowed origin hosts?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395122] Re: ML2 Cisco Nexus MD: Fail Cfg VLAN when none exists

2015-07-13 Thread Carol Bouchard
** Project changed: neutron => networking-cisco

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1395122

Title:
  ML2 Cisco Nexus MD: Fail Cfg VLAN when none exists

Status in networking-cisco:
  Fix Released

Bug description:
  This fixes a regression introduced by the commit for bug #1330597.
  Bug #1330597 expected an error to be returned when the CLI command
  'switchport trunk allowed vlan add' is applied.  It seems, though, that
  not all Nexus switches will return an error.  The fix is to perform a
  'get interface' to determine if 'switchport trunk allowed vlan'
  already exists.  If it does, then use the 'add' keyword to 'switchport
  trunk allowed vlan'; otherwise leave it out.
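
An illustrative sketch of the fix described above (function and argument
names are assumptions, not the actual networking-cisco code): consult the
running interface config and append the 'add' keyword only when an
allowed-vlan list is already present.

```python
# Sketch: choose between 'switchport trunk allowed vlan <id>' and
# '... vlan add <id>' based on whether a list already exists.

def trunk_allowed_vlan_cmd(interface_config, vlan_id):
    base = "switchport trunk allowed vlan"
    if base in interface_config:
        # a list exists: 'add' extends it instead of replacing it
        return "%s add %d" % (base, vlan_id)
    # no list yet: omit 'add', since not all switches reject a bad 'add'
    return "%s %d" % (base, vlan_id)
```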

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1395122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472458] Re: Arista ML2 VLAN driver should ignore non-VLAN network types

2015-07-13 Thread Kyle Mestery
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472458

Title:
  Arista ML2 VLAN driver should ignore non-VLAN network types

Status in neutron:
  Fix Committed
Status in neutron kilo series:
  In Progress

Bug description:
  Arista ML2 VLAN driver should process only VLAN based networks. Any
  other network type (e.g. vxlan) should be ignored.
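
A minimal sketch of the guard described (illustrative, not the actual
Arista driver code): a VLAN-only mechanism driver should return early for
any other segment type instead of attempting to provision it.

```python
# Sketch: ignore any segment whose network_type is not 'vlan'.

def provision_segment(segment):
    if segment.get("network_type") != "vlan":
        return None  # ignore e.g. vxlan, gre, flat segments
    return "provision vlan %s" % segment["segmentation_id"]
```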

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474069] [NEW] DeprecatedDecorators test does not setup fixtures correctly

2015-07-13 Thread Mike Bayer
Public bug reported:

this test appears to rely upon test suite setup in a different test,
outside of the test_backend_sql.py suite entirely.  Below is a run of
this specific test, but you get the same error if you run all of
test_backend_sql at once as well.

[mbayer@thinkpad keystone]$ tox   -v  -e py27 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
using tox.ini: /home/mbayer/dev/jenkins_scripts/tmp/keystone/tox.ini
using tox-1.8.1 from /usr/lib/python2.7/site-packages/tox/__init__.pyc
py27 create: /home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27
  /home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox$ /usr/bin/python 
-mvirtualenv --setuptools --python /usr/bin/python2.7 py27 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-0.log
py27 installdeps: 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/requirements.txt, 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/test-requirements.txt
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/bin/pip install -U 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/requirements.txt 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/test-requirements.txt 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-1.log
py27 develop-inst: /home/mbayer/dev/jenkins_scripts/tmp/keystone
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/bin/pip install -U -e 
/home/mbayer/dev/jenkins_scripts/tmp/keystone 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-2.log
py27 runtests: PYTHONHASHSEED='3819984772'
py27 runtests: commands[0] | bash tools/pretty_tox.sh 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ /usr/bin/bash 
tools/pretty_tox.sh 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
 
running testr
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit}  --load-list /tmp/tmpclgNWA
{0} 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
 [0.245028s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File keystone/tests/unit/test_backend_sql.py, line 995, in 
test_assignment_to_resource_api
self.config_fixture.config(fatal_deprecations=True)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/fixture.py,
 line 65, in config
self.conf.set_override(k, v, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1823, in __inner
result = f(self, *args, **kwargs)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2100, in set_override
opt_info = self._get_opt_info(name, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2418, in _get_opt_info
raise NoSuchOptError(opt_name, group)
oslo_config.cfg.NoSuchOptError: no such option: fatal_deprecations


Captured pythonlogging:
~~~
Adding cache-proxy 'keystone.tests.unit.test_cache.CacheIsolatingProxy' to 
backend.
registered 'sha512_crypt' handler: class 
'passlib.handlers.sha2_crypt.sha512_crypt'


==
Failed 1 tests - output below:
==

keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
-

Captured traceback:
~~~
Traceback (most recent call last):
  File keystone/tests/unit/test_backend_sql.py, line 995, in 
test_assignment_to_resource_api
self.config_fixture.config(fatal_deprecations=True)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/fixture.py,
 line 65, in config
self.conf.set_override(k, v, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1823, in __inner
result = f(self, *args, **kwargs)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2100, in set_override
opt_info = self._get_opt_info(name, group)
  File 
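
The NoSuchOptError points at a fixture-ordering problem: oslo.config's
Config fixture can only override options that have already been
registered, so a test that relies on another test having registered the
option passes in a full run but fails in isolation. A toy stdlib model of
that behaviour (class and method names are illustrative, not oslo.config's
API):

```python
# Toy model, not oslo.config itself: set_override() on an unregistered
# option raises, mirroring NoSuchOptError.

class MiniConf:
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        self._opts.setdefault(name, default)

    def set_override(self, name, value):
        if name not in self._opts:
            raise KeyError("no such option: %s" % name)
        self._opts[name] = value


conf = MiniConf()
try:
    conf.set_override("fatal_deprecations", True)  # never registered
    failed_alone = False
except KeyError:
    failed_alone = True                            # fails when run alone

# What a self-contained test setup would do before overriding:
conf.register_opt("fatal_deprecations", default=False)
conf.set_override("fatal_deprecations", True)
```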

[Yahoo-eng-team] [Bug 1445637] Re: Instance resource quota not observed for non-ephemeral storage

2015-07-13 Thread lyanchih
I'm sorry, I was too hasty in changing this to Invalid.
Originally I thought that since non-ephemeral disks are managed by Cinder, 
those settings should depend on it. Even if you assign a higher value, the 
rate is still limited by Cinder, so you cannot observe the real rate.
But a flavor is a hardware template, so its settings should also apply.
Maybe we could select the minimum quota value between the Cinder and flavor 
settings.
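
The suggestion above can be sketched as follows (illustrative only, not
merged code): when both the flavor extra-spec and the Cinder volume QoS
define an IO limit, apply the stricter of the two.

```python
# Sketch: take the minimum of whichever limits are actually set.

def effective_read_iops(flavor_limit=None, cinder_limit=None):
    limits = [l for l in (flavor_limit, cinder_limit) if l is not None]
    return min(limits) if limits else None
```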

** Changed in: nova
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445637

Title:
  Instance resource quota not observed for non-ephemeral storage

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  I'm using a nova built from stable/kilo and trying to implement
  instance IO resource quotas for disk as per
  https://wiki.openstack.org/wiki/InstanceResourceQuota#IO.

  While this works when building an instance from ephemeral storage, it
  does not when booting from a bootable cinder volume. I realize I can
  implement this using cinder quota but I want to apply the same
  settings in nova regardless of the underlying disk.

  Steps to produce:

  nova flavor-create iolimited 1 8192 64 4
  nova flavor-key 1 set quota:disk_read_iops_sec=1
  Boot an instance using the above flavor
  Guest XML is missing iotune entries

  Expected result:
  <snip>
    <target dev='vda' bus='virtio'/>
    <iotune>
      <read_iops_sec>1</read_iops_sec>
    </iotune>
  </snip>

  This relates somewhat to https://bugs.launchpad.net/nova/+bug/1405367
  but that case is purely hit when booting from RBD-backed ephemeral
  storage.

  Essentially, for non-ephemeral disks, a call is made to
  _get_volume_config() which creates a generic LibvirtConfigGuestDisk
  object but no further processing is done to add extra-specs (if any).

  I've essentially copied the disk_qos() method from the associated code
  review (https://review.openstack.org/#/c/143939/) to implement my own
  patch (attached).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464461] Re: delete action always cause error ( in kilo)

2015-07-13 Thread Grant Murphy
** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464461

Title:
  delete action always cause error ( in kilo)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  When I perform any delete action (delete router, delete network, etc.)
  in a Japanese environment, I always get an error page.

  horizon error logs:
  -
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 52, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
89, in dispatch
  return handler(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 223, 
in post
  return self.get(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 159, 
in get
  handled = self.construct_tables()
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 150, 
in construct_tables
  handled = self.handle_table(table)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 125, 
in handle_table
  handled = self._tables[name].maybe_handle()
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1640, 
in maybe_handle
  return self.take_action(action_name, obj_id)
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1482, 
in take_action
  response = action.multiple(self, self.request, obj_ids)
File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 
302, in multiple
  return self.handle(data_table, request, object_ids)
File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 
828, in handle
  exceptions.handle(request, ignore=ignore)
File /usr/lib/python2.7/site-packages/horizon/exceptions.py, line 364, in 
handle
  six.reraise(exc_type, exc_value, exc_traceback)
File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 
817, in handle
  (self._get_action_name(past=True), datum_display))
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0: 
ordinal not in range(128)
  -

  It occurs in Japanese, Korean, Chinese, French and German, but does not
  occur in English and Spanish.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470950] Re: Version of ironicclient in vivid/UCA not sufficient to interact properly with Nova ironic virt driver

2015-07-13 Thread Dmitry Tantsur
I don't think it actually affects upstream Nova.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470950

Title:
  Version of ironicclient in vivid/UCA not sufficient to interact
  properly with Nova ironic virt driver

Status in OpenStack Compute (nova):
  Invalid
Status in python-ironicclient package in Ubuntu:
  Fix Released
Status in python-ironicclient source package in Vivid:
  Fix Committed

Bug description:
  [Impact]
  Users of OpenStack Ironic are not able to boot physical machines using the 
Nova/Ironic integration; this is due to a misconfigured minimum version 
requirement upstream for OpenStack Kilo.

  [Test Case]
  See original bug report

  [Regression Potential]
  Minimal; recommendation from upstream developers to move to 0.5.1 for Kilo 
compatibility.

  [Original Bug Report]
  The nova ironic driver in Kilo was updated to use a new parameter in 
ironicclient, configdrive, which was added in v0.4.0 of python-ironicclient

  Test case

  #. Install kilo nova and kilo ironic
  #. Configure nova to use ironic virt
  #. nova boot instance
  #. Observe failure in nova-compute log showing incorrect configdrive 
parameter being used.
  #. update ironicclient to >=0.4.1
  #. restart nova-compute
  #. nova boot again
  #. instance should boot

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473965] Re: the port of security group rule for TCP or UDP should not be 0

2015-07-13 Thread Kevin Benton
I don't think we need to make an API change for this. Unless I'm
misunderstanding, it's still doing what it's supposed to when port min
is 0, right?

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473965

Title:
  the port of security group rule for TCP or UDP should not be 0

Status in neutron:
  Opinion

Bug description:
  For TCP or UDP, 0 is a reserved port. For a Neutron security group
  rule using the TCP protocol, if port-range-min is 0 then the
  port-range-max value is meaningless, because a port-range-min of 0
  means the rule allows all packets to pass. So it should not be
  possible to create a rule with port-range-min set to 0; if a user
  wants to allow all TCP/UDP packets to pass, they can create a security
  group rule with port-range-min and port-range-max set to None.
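
The validation the reporter proposes could look like this (a sketch only;
the bug ended as Opinion, so this is not actual Neutron behaviour, and the
function name is an assumption):

```python
# Sketch: reject port_range_min == 0 for TCP/UDP rules and direct users
# to leave the range unset to match all ports.

def validate_port_range(protocol, port_min, port_max):
    if protocol in ("tcp", "udp") and port_min == 0:
        raise ValueError("port 0 is reserved; leave the range unset (None) "
                         "to match all %s ports" % protocol)
    return port_min, port_max


try:
    validate_port_range("udp", 0, 65535)
    rejected = False
except ValueError:
    rejected = True
```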

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366189] Re: mask_password doesn't handle non-ASCII characters

2015-07-13 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.concurrency
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366189

Title:
  mask_password doesn't handle non-ASCII characters

Status in Ceilometer:
  Invalid
Status in Ceilometer juno series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.concurrency:
  Invalid
Status in oslo.utils:
  Fix Released
Status in Trove:
  Fix Released

Bug description:
  When the message passed to mask_password() contains non-ASCII
  characters the line:

message = six.text_type(message)

  fails with:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in
  position 128: ordinal not in range(128)
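
A sketch of the failure and a safe coercion (shown with the Python 3
bytes/str split; the helper name is an assumption): on Python 2,
`six.text_type` applied to a byte string implicitly decodes with the
ascii codec, which raises for non-ASCII bytes, whereas decoding
explicitly avoids the error.

```python
# Sketch: decode byte strings explicitly instead of relying on the
# implicit ascii decode.

def to_text(message):
    if isinstance(message, bytes):
        return message.decode("utf-8", errors="replace")
    return str(message)
```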

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1366189/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474129] [NEW] Move out Cisco N1Kv extensions to networking-cisco

2015-07-13 Thread Saksham Varma
Public bug reported:

Cisco N1kv extensions code needs to be moved out of the core tree into
the vendor code repo -- openstack/networking-cisco. This needs to be
done as part of the second phase of vendor-core code decomposition.

** Affects: networking-cisco
 Importance: Undecided
 Assignee: Saksham Varma (sakvarma)
 Status: New

** Affects: neutron
 Importance: Undecided
 Assignee: Saksham Varma (sakvarma)
 Status: In Progress


** Tags: n1kv

** Changed in: neutron
 Assignee: (unassigned) => Saksham Varma (sakvarma)

** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** Changed in: networking-cisco
 Assignee: (unassigned) = Saksham Varma (sakvarma)

** Summary changed:

- Move out Cisco N1Kv extensions
+ Move out Cisco N1Kv extensions to networking-cisco

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474129

Title:
  Move out Cisco N1Kv extensions to networking-cisco

Status in networking-cisco:
  New
Status in neutron:
  In Progress

Bug description:
  Cisco N1kv extensions code needs to be moved out of the core tree into
  the vendor code repo -- openstack/networking-cisco. This needs to be
  done as part of the second phase of vendor-core code decomposition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1474129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266513] Re: Some Python requirements are not hosted on PyPI

2015-07-13 Thread John Dickinson
** Changed in: swift
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1266513

Title:
  Some Python requirements are not hosted on PyPI

Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone havana series:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  Pip 1.5 (released January 2nd, 2014) will by default refuse to
  download packages which are linked from PyPI but not hosted on
  pypi.python.org. The workaround is to whitelist these package names
  individually with both the --allow-external and --allow-insecure
  options.
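
  In practice the whitelist is encoded wherever pip is invoked; a
  hypothetical tox.ini fragment (the package name is illustrative, not one
  of the actual affected requirements, and these flags exist only in the
  pip 1.4/1.5 era discussed here):

```ini
# tox.ini fragment (illustrative): whitelist an externally-hosted package,
# using the older --allow-insecure spelling for pip 1.4 compatibility
[testenv]
install_command = pip install --allow-external some-external-pkg
                              --allow-insecure some-external-pkg
                              {opts} {packages}
```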

  These options are new in pip 1.4, so encoding them will break for
people trying to use pip 1.3.x or earlier. Those earlier versions of
pip are not secure anyway, since they don't connect via HTTPS with host
certificate validation, so we should be encouraging people to move to
1.4 and later.

  The --allow-insecure option is transitioning to a clearer --allow-
  unverified option name starting with 1.5, but the new form does not
  work with pip before 1.5 so we should use the old version for now to
  allow people to transition gracefully. The --allow-insecure form won't
  be removed until at least pip 1.7 according to comments in the source
  code.

  Virtualenv 1.11 (released the same day) bundles pip 1.5 by default,
  and so requires these workarounds when using requirements external to
  PyPI. Be aware that 1.11 is broken for projects using
  sitepackages=True in their tox.ini. The fix is
  https://github.com/pypa/virtualenv/commit/a6ca6f4 which is slated to
  appear in 1.11.1 (no ETA available). We've worked around it on our
  test infrastructure with https://git.openstack.org/cgit/openstack-
  infra/config/commit/?id=20cd18a for now, but that is hiding the
  external-packages issue since we're currently running all tests with
  pip 1.4.1 as a result.

  This bug will also be invisible in our test infrastructure for
  projects listed as having the PyPI mirror enforced in
  openstack/requirements (except for jobs which bypass the mirror, such
  as those for requirements changes), since our update jobs will pull in
  and mirror external packages and pip sees the mirror as being PyPI
  itself in that situation.

  We'll use this bug to track necessary whitelist updates to tox.ini and
  test scripts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1266513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473588] Re: Provide an option to disable auto-hashing of keystone token

2015-07-13 Thread Lin Hua Cheng
** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473588

Title:
  Provide an option to disable auto-hashing of keystone token

Status in django-openstack-auth:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  Token hashing is performed in order to support sessions with the cookie
  backend. However, the hashed token doesn't always work.

  We should provide an option for the user to turn off token hashing.
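
The proposed toggle could be sketched as follows; the `hash_enabled` flag
is a hypothetical setting, not an existing option. Hashing keeps large PKI
tokens small enough for cookie-backed sessions, but some deployments need
the raw token, hence the opt-out.

```python
# Sketch: hash the token only when hashing is enabled.
import hashlib


def maybe_hash_token(token, hash_enabled=True):
    if not hash_enabled:
        return token  # raw token for backends that need it
    return hashlib.md5(token.encode("utf-8")).hexdigest()
```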

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1473588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474162] [NEW] ldap unicode issue when doing a show user

2015-07-13 Thread Sam Leong
Public bug reported:

In the stable/kilo release, when the username contains non-ASCII characters, 
showing the user from LDAP with the following command:
openstack user show --domain=ad "Test Accent Communiquè"
will throw an exception. This has been addressed in the master branch, so 
what needs to be done is just to backport the changes to stable/kilo.

I tested the changes in the Master branch and works fine.

This is similar to https://bugs.launchpad.net/keystone/+bug/1419187



(keystone.common.wsgi): 2015-07-10 21:25:26,351 INFO wsgi __call__ GET /domains?name=ad
(keystone.common.wsgi): 2015-07-10 21:25:26,385 ERROR wsgi __call__ 'ascii' codec can't encode character u'\xe8' in position 21: ordinal not in range(128)
Traceback (most recent call last):
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/wsgi.py", line 452, in __call__
    response = request.get_response(self.application)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/request.py", line 1317, in send
    application, catch_exc_info=False)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/request.py", line 1281, in call_application
    app_iter = application(self.environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py", line 136, in __call__
    response = self.app(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py",

[Yahoo-eng-team] [Bug 1473859] [NEW] too much extended port_dict is annoying

2015-07-13 Thread yong sheng gong
Public bug reported:


2015-07-13 14:35:58.525 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:58.526 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:58.542 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:35:59.355 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.369 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:35:59.389 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.389 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.413 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.429 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:35:59.446 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.446 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.468 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.481 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:35:59.497 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:35:59.497 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:01.637 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:01.638 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:01.638 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:02.637 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:02.638 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:02.652 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:36:02.869 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:02.870 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:02.884 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:36:03.249 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended subnet dict for 
driver 'port_security'
2015-07-13 14:36:03.250 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended subnet dict for 
driver 'port_security'
2015-07-13 14:36:03.250 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended subnet dict for 
driver 'port_security'
2015-07-13 14:36:03.250 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended subnet dict for 
driver 'port_security'
2015-07-13 14:36:03.303 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:36:03.303 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:36:03.350 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended port dict for 
driver 'port_security'
2015-07-13 
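A likely remedy, sketched here rather than quoting the actual neutron patch (the function shape below is hypothetical), is to demote these per-request extension messages from INFO to DEBUG in the ml2 extension manager, so routine per-port/per-network calls stay out of production logs:

```python
import logging

LOG = logging.getLogger("neutron.plugins.ml2.managers")

def extend_port_dict(session, base_model, result, driver_alias="port_security"):
    """Hypothetical shape of the ml2 extension hook; the point is the log
    level: LOG.debug instead of LOG.info silences the message unless an
    operator explicitly enables debug logging."""
    LOG.debug("Extended port dict for driver '%s'", driver_alias)
    return result

# At the default INFO level this emits nothing, unlike the INFO-level
# messages flooding the log above.
logging.basicConfig(level=logging.INFO)
extend_port_dict(None, None, {"id": "p1"})
```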

[Yahoo-eng-team] [Bug 1473848] [NEW] create_region_with_id miss the schema validation

2015-07-13 Thread Dave Chen
Public bug reported:

While debugging the code for the proposed fix of bugs #1468597 and
#1466872 [1], I found several test-case failures:
`test_create_region_with_duplicate_id`,
`test_create_region_with_matching_ids`, and `test_create_region_with_id`.

After digging into the code, I found that when a region is created with
an explicit id, no schema validation is performed at all. This is
because the region reference data is not passed as a dict (see [2]),
while the validator expects the reference data in kwargs [3], which is
the common case when the resource is requested through the REST API.

So region creation with a given id skips schema validation.


[1] https://review.openstack.org/#/c/195903/
[2] 
https://github.com/openstack/keystone/blob/master/keystone/catalog/controllers.py#L173
[3] 
https://github.com/openstack/keystone/blob/master/keystone/common/validation/__init__.py#L34-L35
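The dict-vs-kwargs mismatch can be illustrated with a toy decorator (a sketch, not keystone's actual `validation` code; all names here are hypothetical): the validator only sees the resource when it arrives as a keyword argument, so a positional call slips past the schema check.

```python
import functools

def validated(check):
    # Sketch of a keystone-style validation decorator: it inspects kwargs
    # for the resource body, mirroring the validator linked at [3].
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if "region" in kwargs:   # RESTful path: body arrives in kwargs
                check(kwargs["region"])
            # Positional path (as in create_region_with_id, see [2]):
            # validation is silently skipped -- this is the bug.
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def check_region(ref):
    if not isinstance(ref.get("description", ""), str):
        raise ValueError("description must be a string")

@validated(check_region)
def create_region(region=None):
    return region

create_region(region={"description": "ok"})  # validated, passes
create_region({"description": 123})          # invalid body, but no error raised
```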

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473848

Title:
  create_region_with_id miss the schema validation

Status in Keystone:
  New

Bug description:
  While debugging the code for the proposed fix of bugs #1468597 and
  #1466872 [1], I found several test-case failures:
  `test_create_region_with_duplicate_id`,
  `test_create_region_with_matching_ids`, and
  `test_create_region_with_id`.

  After digging into the code, I found that when a region is created
  with an explicit id, no schema validation is performed at all. This is
  because the region reference data is not passed as a dict (see [2]),
  while the validator expects the reference data in kwargs [3], which is
  the common case when the resource is requested through the REST API.

  So region creation with a given id skips schema validation.


  [1] https://review.openstack.org/#/c/195903/
  [2] 
https://github.com/openstack/keystone/blob/master/keystone/catalog/controllers.py#L173
  [3] 
https://github.com/openstack/keystone/blob/master/keystone/common/validation/__init__.py#L34-L35

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473848/+subscriptions



[Yahoo-eng-team] [Bug 1473820] Re: Get Nova Release info via Restful

2015-07-13 Thread Markus Zoeller
@Henry (zhiqzhao):

As you already noted, this is a feature request. Feature requests for
nova are handled with blueprints [1] (you've already created one) and
with specs [2]. I recommend reading [3] if you haven't already. To keep
the bug tracker focused on failures/errors/faults, I'm closing this one
as Invalid. The effort to implement the requested feature is then driven
solely by the blueprint (and spec).

If there are any questions left, feel free to contact me (markus_z) 
in the IRC channel #openstack-nova 

[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473820

Title:
  Get Nova Release info via Restful

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  OpenStack releases are numbered using a year.N time-based scheme, from
  which we can tell the version of OpenStack, e.g. Icehouse, Kilo, etc.

  And we have a command to get the release number:
  [root@crdc-c210-223 ~]# nova-manage --version
  2014.1.4

  Sometimes customers may want to know the version of a remote OpenStack
  deployment and check APIs accordingly. We want to provide this kind of
  RESTful API to tell them the release version.
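The request above amounts to exposing what `nova-manage --version` prints over HTTP. A minimal WSGI sketch of such an endpoint (hypothetical path and payload; not nova's actual API, where the release would come from `nova.version` rather than a constant):

```python
import json

NOVA_RELEASE = "2014.1.4"  # in a real deployment this would be read from nova.version

def version_app(environ, start_response):
    # Hypothetical minimal endpoint of the kind requested: report the
    # installed release over HTTP instead of requiring shell access to
    # the controller node.
    if environ.get("PATH_INFO") == "/release":
        body = json.dumps({"release": NOVA_RELEASE}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

A client could then compare the reported release against the API versions it supports before issuing further calls.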

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1473820/+subscriptions
