[Yahoo-eng-team] [Bug 1505557] [NEW] L3 agent not always properly update floatingip status on server

2015-10-13 Thread Oleg Bondarev
Public bug reported:

Commit c44506bfd60b2dd6036e113464f1ea682cfaeb6c introduced an
optimization to not send a floating IP status update when the status didn't
change: if the server returned the floating IP as ACTIVE, we don't need to
update its status after successful processing.

This might be wrong in the DVR case: when a floating IP's associated fixed port is 
moved from one host to another, the notification is sent to both L3 agents on the 
compute nodes (old and new). Here is what happens next:
 - the old agent receives the notification and requests router info from the server
 - same for the new agent
 - the server returns router info without the floating IP to the old agent
 - the server returns router info with the floating IP to the new agent; the status of 
the floating IP is ACTIVE
 - the old agent removes the floating IP and sends a status update, so the server puts 
the floating IP into the DOWN state
 - the new agent adds the floating IP and doesn't send a status update since it didn't 
change from the agent's point of view
 - the floating IP stays in the DOWN state though it's actually active

The fix would be to always update the status of a floating IP if the agent
actually applies it.
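
A minimal sketch of the idea, assuming a hypothetical helper in the L3 agent
(names are illustrative, not the actual neutron code); the point is only that the
agent reports ACTIVE for every floating IP it actually configured, even when the
server already returned it as ACTIVE:

    def _process_floating_ips(self, ri):
        fip_statuses = {}
        for fip in ri.get_floating_ips():
            self._configure_fip_on_router(ri, fip)  # hypothetical helper
            # Always report the IPs we actually applied. With the old
            # optimization this update was skipped whenever the server already
            # reported ACTIVE, leaving the IP stuck in DOWN after the old
            # agent's removal in the DVR host-move case.
            fip_statuses[fip['id']] = 'ACTIVE'
        self.plugin_rpc.update_floatingip_statuses(
            self.context, ri.router_id, fip_statuses)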

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505557

Title:
  L3 agent not always properly update floatingip status on server

Status in neutron:
  New

Bug description:
  Commit c44506bfd60b2dd6036e113464f1ea682cfaeb6c introduced an
  optimization to not send a floating IP status update when the status didn't
  change: if the server returned the floating IP as ACTIVE, we don't need to
  update its status after successful processing.

  This might be wrong in the DVR case: when a floating IP's associated fixed port is 
moved from one host to another, the notification is sent to both L3 agents on the 
compute nodes (old and new). Here is what happens next:
   - the old agent receives the notification and requests router info from the server
   - same for the new agent
   - the server returns router info without the floating IP to the old agent
   - the server returns router info with the floating IP to the new agent; the status of 
the floating IP is ACTIVE
   - the old agent removes the floating IP and sends a status update, so the server puts 
the floating IP into the DOWN state
   - the new agent adds the floating IP and doesn't send a status update since it didn't 
change from the agent's point of view
   - the floating IP stays in the DOWN state though it's actually active

  The fix would be to always update the status of a floating IP if the agent
  actually applies it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505571] [NEW] VM delete operation fails with 'Connection to neutron failed - Read timeout' error

2015-10-13 Thread Sonu
Public bug reported:

Problem description:
With a series of VM delete operations in OpenStack (4000 VMs) with KVM compute 
nodes, the VM instances go into the ERROR state.
The error shown in the Horizon UI is 
"ConnectionFailed: Connection to neutron failed: 
HTTPConnectionPool(host='192.168.0.1', port=9696): Read timed out. (read 
timeout=30)"

This happens because neutron takes more than 30 seconds (actually around 80 seconds) 
to delete one port, and nova sets the instance to the ERROR state because the default 
timeout of all neutron API calls in nova is 30 seconds.
This can be worked around by increasing the timeout to 120 in nova.conf, but 
this cannot be recommended as the solution.

cat /etc/nova/nova.conf | grep url_timeout
url_timeout = 120
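
For reference, a hedged example of how that workaround typically looks in nova.conf
(in Kilo the option normally lives in the [neutron] section; the value is the one
mentioned above):

    [neutron]
    # allow slow neutron port operations to finish before nova gives up
    url_timeout = 120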

** Affects: neutron
 Importance: Undecided
 Assignee: Sonu (sonu-sudhakaran)
 Status: New


** Tags: delete read timeout

** Changed in: neutron
 Assignee: (unassigned) => Sonu (sonu-sudhakaran)

** Tags added: delete

** Tags added: read timeout

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505571

Title:
  VM delete operation fails with 'Connection to neutron failed - Read
  timeout' error

Status in neutron:
  New

Bug description:
  Problem description:
  With a series of VM delete operations in OpenStack (4000 VMs) with KVM compute 
nodes, the VM instances go into the ERROR state.
  The error shown in the Horizon UI is 
  "ConnectionFailed: Connection to neutron failed: 
HTTPConnectionPool(host='192.168.0.1', port=9696): Read timed out. (read 
timeout=30)"

  This happens because neutron takes more than 30 seconds (actually around 80 
seconds) to delete one port, and nova sets the instance to the ERROR state because 
the default timeout of all neutron API calls in nova is 30 seconds.
  This can be worked around by increasing the timeout to 120 in nova.conf, but 
this cannot be recommended as the solution.

  cat /etc/nova/nova.conf | grep url_timeout
  url_timeout = 120

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505575] [NEW] Serious memory consumption by neutron-server with DVR at scale

2015-10-13 Thread Oleg Bondarev
Public bug reported:

Steps to reproduce:
0. The issue is noticeable at scale (100+ nodes), DVR should be turned on
1. Run rally scenario NeutronNetworks.create_and_list_routers
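
A hedged example of a rally task file for that scenario (only the scenario name
comes from this report; the runner and context numbers are illustrative):

    {
        "NeutronNetworks.create_and_list_routers": [
            {
                "runner": {"type": "constant", "times": 1000, "concurrency": 50},
                "context": {"users": {"tenants": 10, "users_per_tenant": 2}}
            }
        ]
    }

It can be run with, e.g., "rally task start create_and_list_routers.json".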

Initially neutron-server processes consume 100-150M, but at some point
the size rapidly grows severalfold. (At 200 nodes the jump was
from 150M to 2G, and up to 14G in the end.)

The issue may lead to an OOM situation, causing the kernel to kill the process
with the highest consumption. The usual candidates are rabbit or mysql. This
makes the cluster completely inoperable.

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: scale

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505575

Title:
  Serious memory consumption by neutron-server with DVR at scale

Status in neutron:
  New

Bug description:
  Steps to reproduce:
  0. The issue is noticeable at scale (100+ nodes), DVR should be turned on
  1. Run rally scenario NeutronNetworks.create_and_list_routers

  Initially neutron-server processes consume 100-150M, but at some point
  the size rapidly grows severalfold. (At 200 nodes the jump
  was from 150M to 2G, and up to 14G in the end.)

  The issue may lead to an OOM situation, causing the kernel to kill the process
  with the highest consumption. The usual candidates are rabbit or mysql. This
  makes the cluster completely inoperable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489442] Re: Invalid order of volumes with adding a volume in boot operation

2015-10-13 Thread Feodor Tersin
Well, if you all say this is not a bug, it's not a bug. But it's
sad that Nova cannot guarantee device names even in this simple case.

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: Feodor Tersin (ftersin) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489442

Title:
  Invalid order of volumes with adding a volume in boot operation

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  If an image has several volumes in its bdm, and a user adds one more volume
  for the boot operation, then the new volume is not just added to the volume
  list, but becomes the second device. This can lead to problems if the
  image root device contains software whose settings point to other
  volumes.

  For example:
  1 the image is a snapshot of a volume-backed instance which had vda and vdb 
volumes
  2 the instance had an SQL server, which used both vda and vdb for its database
  3 if a user runs a new instance from the image, the device names are either 
restored (with xen) or reassigned (libvirt) to the same names, because the order 
of devices passed to libvirt is the same as it was for the original instance
  4 if a user runs a new instance, adding a new volume, the volume list becomes 
vda, new, vdb
  5 in this case libvirt reassigns device names to vda=vda, new=vdb, vdb=vdc
  6 as a result the SQL server will not find its data on vdb

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505476] Re: when live-migrate failed, remove_volume_connection function accept incorrect arguments order in kilo

2015-10-13 Thread Markus Zoeller (markus_z)
@jingtao liang:

Because the RPC API calls the "remove_volume_connection" method of 
the manager with named arguments, the order of the arguments is not 
important. The flow is this:

nova/compute/manager.py (on the source host)

def _rollback_live_migration(self, context, instance,
                             dest, block_migration,
                             migrate_data=None):
    # [...]
    self.compute_rpcapi.remove_volume_connection(
        context, instance, bdm.volume_id, dest)

nova/compute/rpcapi.py (on the source host)

def remove_volume_connection(self, ctxt, instance, volume_id, host):
    version = '4.0'
    cctxt = self.client.prepare(server=host, version=version)
    return cctxt.call(ctxt, 'remove_volume_connection',
                      instance=instance, volume_id=volume_id)

nova/compute/manager.py (on the target host)

def remove_volume_connection(self, context, volume_id, instance):
    # [...]

IOW, it would be an issue if this call:

return cctxt.call(ctxt, 'remove_volume_connection',
  instance=instance, volume_id=volume_id)

would look like this:

return cctxt.call(ctxt, 'remove_volume_connection',
  instance, volume_id)

but it does not, which means we are fine.

Because of this, I'll set the status of this bug to "Invalid". If you
disagree, please set it back to "New" and add your reasoning for why
you think this is a valid failure in the behavior of Nova.

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: Nimish Joshi (jnimish77) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505476

Title:
  when live-migrate failed,remove_volume_connection function  accept
  incorrect arguments order  in kilo

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Openstack Version : kilo 2015.1.0

  Reproduce steps:

  please see this code path: openstack/nova/nova/compute/manager.py

  def _rollback_live_migration(self, context, instance, dest,
                               block_migration, migrate_data=None):

  ..
  for bdm in bdms:
  if bdm.is_volume:
  self.compute_rpcapi.remove_volume_connection(
  context, instance, bdm.volume_id, dest)
  ..
   
  Actual result:

  def remove_volume_connection(self, context, volume_id, instance):
  ..
  ..

  Expected result:

  def remove_volume_connection(self, context, instance, volume_id):

  
  please check this bug, thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505374] Re: Unit tests failing with oslo.policy 0.12.0

2015-10-13 Thread Valeriy Ponomaryov
** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
Milestone: None => mitaka-1

** Changed in: manila
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1505374

Title:
  Unit tests failing with oslo.policy 0.12.0

Status in Keystone:
  In Progress
Status in Manila:
  In Progress

Bug description:
  
  oslo.policy 0.12.0 was released recently, and this caused a couple keystone 
unit tests to fail. The new release has a change to use requests rather than 
urllib, and keystone's unit tests were assuming that oslo.policy was 
implemented using urllib (by mocking the response).

  failing tests:

   keystone.tests.unit.test_policy.PolicyTestCase.test_enforce_http_true
   keystone.tests.unit.test_policy.PolicyTestCase.test_enforce_http_false

  Keystone doesn't need to test these internal implementation details of
  oslo.policy; let's just assume it works as designed, since oslo.policy has
  its own tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505628] [NEW] versions directory containing migration script is not included in neutron build if building outside of git

2015-10-13 Thread Jakub Libosvar
Public bug reported:

By default, distutils includes only Python files in a source distribution:
https://docs.python.org/2/distutils/sourcedist.html#manifest

The neutron/db/migration/alembic_migrations/versions directory and its
subdirectories don't have __init__.py because they are not an actual Python
package. When using "python setup.py egg_info", the SOURCES.txt file is
generated by pbr based on git data. If the package is built from e.g. a
tarball, then pbr doesn't have access to git data and the whole versions
directory is not packaged.
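
A hedged illustration of the distutils manifest mechanism described at the link
above (not necessarily the fix adopted for this bug): non-package files can be
listed explicitly so that an sdist built outside of git still carries them:

    # MANIFEST.in (illustrative)
    recursive-include neutron/db/migration/alembic_migrations/versions *.py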

** Affects: neutron
 Importance: Low
 Assignee: Jakub Libosvar (libosvar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505628

Title:
  versions directory containing migration script is not included in
  neutron build if building outside of git

Status in neutron:
  In Progress

Bug description:
  By default, distutils includes only Python files in a source distribution:
  https://docs.python.org/2/distutils/sourcedist.html#manifest

  The neutron/db/migration/alembic_migrations/versions directory and its
  subdirectories don't have __init__.py because they are not an actual
  Python package. When using "python setup.py egg_info", the SOURCES.txt
  file is generated by pbr based on git data. If the package is built from
  e.g. a tarball, then pbr doesn't have access to git data and the whole
  versions directory is not packaged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505627] [NEW] QoS ECN Support

2015-10-13 Thread vikram.choudhary
Public bug reported:

[Existing problem]
Network congestion can be very common in large data centers generating huge 
traffic from multiple hosts. Each host could use the IP header TOS ECN bits 
to implement explicit congestion notification [1]_ itself, but this would be 
a redundant effort.

[Proposal]
This proposal is about achieving ECN on behalf of each host. This makes the 
solution centralized and allows it to be applied at the tenant level. In 
addition, traffic classification for applying ECN functionality can also 
be achieved via specific filtering rules, if required. Almost all the leading 
vendors support this option for better QoS [2]_.

The existing QoS framework is limited to bandwidth rate limiting and can be
extended to support explicit congestion notification (RFC 3168 [3]_).

[Benefits]
- Enhancement to the existing QoS functionality.

[What is the enhancement?]
- Add ECN support to the QoS extension.
- Add additional command lines for realizing ECN functionality.
- Add OVS support.

[Related information]
[1] ECN Wiki
   http://en.wikipedia.org/wiki/Explicit_Congestion_Notification
[2] QoS
   https://review.openstack.org/#/c/88599/
[3] RFC 3168
   https://tools.ietf.org/html/rfc3168
[4] Specification

https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification

** Affects: neutron
 Importance: Undecided
 Assignee: vikram.choudhary (vikschw)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => vikram.choudhary (vikschw)

** Summary changed:

- qos-ecn-support
+ QoS ECN Support

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505627

Title:
  QoS ECN Support

Status in neutron:
  New

Bug description:
  [Existing problem]
  Network congestion can be very common in large data centers generating huge 
traffic from multiple hosts. Each host could use the IP header TOS ECN bits 
to implement explicit congestion notification [1]_ itself, but this would be 
a redundant effort.

  [Proposal]
  This proposal is about achieving ECN on behalf of each host. This makes the 
solution centralized and allows it to be applied at the tenant level. In 
addition, traffic classification for applying ECN functionality can also 
be achieved via specific filtering rules, if required. Almost all the leading 
vendors support this option for better QoS [2]_.

  The existing QoS framework is limited to bandwidth rate limiting and
  can be extended to support explicit congestion notification (RFC 3168
  [3]_).

  [Benefits]
  - Enhancement to the existing QoS functionality.

  [What is the enhancement?]
  - Add ECN support to the QoS extension.
  - Add additional command lines for realizing ECN functionality.
  - Add OVS support.

  [Related information]
  [1] ECN Wiki
 http://en.wikipedia.org/wiki/Explicit_Congestion_Notification
  [2] QoS
 https://review.openstack.org/#/c/88599/
  [3] RFC 3168
 https://tools.ietf.org/html/rfc3168
  [4] Specification
  
https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505631] [NEW] QoS VLAN 802.1p Support

2015-10-13 Thread vikram.choudhary
Public bug reported:

[Overview]
The IEEE 802.1p signaling standard defines traffic prioritization at Layer 2 of 
the OSI model. Layer 2 network devices, such as switches, that adhere to this 
standard can group incoming packets into separate traffic classes. The 802.1p 
standard is used to prioritize packets as they traverse a network segment 
(subnet). When a subnet becomes congested, causing a Layer 2 network device to 
drop packets, the packets marked for higher priority receive preferential 
treatment and are serviced before packets with lower priorities.

The 802.1p priority markings for a packet are appended to the MAC
header. On Ethernet networks, 802.1p priority markings are carried in
Virtual Local Area Network (VLAN) tags. The IEEE 802.1q standard defines
VLANs and VLAN tags. This standard specifies a 3-bit field for priority
in the VLAN tag, but it does not define the values for the field. The
802.1p standard defines the values for the priority field. This standard
defines eight priority classes (0 - 7). Network administrators can
determine the actual mappings, but the standard makes general
recommendations. The VLAN tag is placed inside the Ethernet header,
between the source address and either the Length field (for an IEEE
802.3 frame) or the EtherType field (for an Ethernet II frame) in the
MAC header. The 802.1p marking determines the service level that a
packet receives when it crosses an 802.1p-enabled network segment.

[Proposal]
The existing QoS [1]_ framework can be extended to support VLAN priority.

[Benefits]
- Enhancement to the existing QoS functionality.

[What is the enhancement?]
- Add VLAN tagging support to the QoS extension.
- Add additional command lines for realizing the 802.1p priority functionality.
- Add OVS support.

[Related information]
[1] QoS
   https://review.openstack.org/#/c/88599/
[2] Specification
https://blueprints.launchpad.net/neutron/+spec/vlan-802.1p-qos

** Affects: neutron
 Importance: Undecided
 Assignee: vikram.choudhary (vikschw)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => vikram.choudhary (vikschw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505631

Title:
  QoS VLAN 802.1p Support

Status in neutron:
  New

Bug description:
  [Overview]
  The IEEE 802.1p signaling standard defines traffic prioritization at Layer 2 
of the OSI model. Layer 2 network devices, such as switches, that adhere to 
this standard can group incoming packets into separate traffic classes. The 
802.1p standard is used to prioritize packets as they traverse a network 
segment (subnet). When a subnet becomes congested, causing a Layer 2 network 
device to drop packets, the packets marked for higher priority receive 
preferential treatment and are serviced before packets with lower priorities.

  The 802.1p priority markings for a packet are appended to the MAC
  header. On Ethernet networks, 802.1p priority markings are carried in
  Virtual Local Area Network (VLAN) tags. The IEEE 802.1q standard
  defines VLANs and VLAN tags. This standard specifies a 3-bit field for
  priority in the VLAN tag, but it does not define the values for the
  field. The 802.1p standard defines the values for the priority field.
  This standard defines eight priority classes (0 - 7). Network
  administrators can determine the actual mappings, but the standard
  makes general recommendations. The VLAN tag is placed inside the
  Ethernet header, between the source address and either the Length
  field (for an IEEE 802.3 frame) or the EtherType field (for an
  Ethernet II frame) in the MAC header. The 802.1p marking determines
  the service level that a packet receives when it crosses an 802.1p-
  enabled network segment.

  [Proposal]
  The existing QoS [1]_ framework can be extended to support VLAN priority.

  [Benefits]
  - Enhancement to the existing QoS functionality.

  [What is the enhancement?]
  - Add VLAN tagging support to the QoS extension.
  - Add additional command lines for realizing the 802.1p priority functionality.
  - Add OVS support.

  [Related information]
  [1] QoS
 https://review.openstack.org/#/c/88599/
  [2] Specification
  https://blueprints.launchpad.net/neutron/+spec/vlan-802.1p-qos

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505645] [NEW] neutron/tests/functional/test_server.py does not work for oslo.service < 0.10.0

2015-10-13 Thread Ihar Hrachyshka
Public bug reported:

Since https://review.openstack.org/#/c/233893/ was merged, the test
fails if oslo.service < 0.10.0 is installed, as follows:

Traceback (most recent call last):
  File "neutron/tests/functional/test_server.py", line 286, in test_start
workers=len(workers))
  File "neutron/tests/functional/test_server.py", line 151, in 
_test_restart_service_on_sighup
'size': expected_size}))
  File "neutron/agent/linux/utils.py", line 346, in wait_until_true
eventlet.sleep(sleep)
  File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, 
in sleep
hub.switch()
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, 
in switch
return self.greenlet.switch()
RuntimeError: Timed out waiting for file 
/tmp/tmp517j7P/tmpQwQXvn/test_server.tmp to be created and its size become 
equal to 5.


neutron.tests.functional.test_server.TestRPCServer.test_restart_rpc_on_sighup_multiple_workers
--

Captured pythonlogging:
~~~
2015-10-13 13:28:55,848  WARNING [oslo_config.cfg] Option "verbose" from 
group "DEFAULT" is deprecated for removal.  Its value may be silently ignored 
in the future.


Captured traceback:
~~~
Traceback (most recent call last):
  File "neutron/tests/functional/test_server.py", line 248, in 
test_restart_rpc_on_sighup_multiple_workers
workers=2)
  File "neutron/tests/functional/test_server.py", line 151, in 
_test_restart_service_on_sighup
'size': expected_size}))
  File "neutron/agent/linux/utils.py", line 346, in wait_until_true
eventlet.sleep(sleep)
  File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, 
in sleep
hub.switch()
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, 
in switch
return self.greenlet.switch()
RuntimeError: Timed out waiting for file 
/tmp/tmpoW1HXA/tmpBDI82O/test_server.tmp to be created and its size become 
equal to 5.


neutron.tests.functional.test_server.TestWsgiServer.test_restart_wsgi_on_sighup_multiple_workers


Captured traceback:
~~~
Traceback (most recent call last):
  File "neutron/tests/functional/test_server.py", line 211, in 
test_restart_wsgi_on_sighup_multiple_workers
workers=2)
  File "neutron/tests/functional/test_server.py", line 151, in 
_test_restart_service_on_sighup
'size': expected_size}))
  File "neutron/agent/linux/utils.py", line 346, in wait_until_true
eventlet.sleep(sleep)
  File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, 
in sleep
hub.switch()
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, 
in switch
return self.greenlet.switch()
RuntimeError: Timed out waiting for file 
/tmp/tmpwKs2ON/tmp6VhW3q/test_server.tmp to be created and its size become 
equal to 5.

Note that the minimal oslo.service version in master is 0.7.0. There is a
patch to bump the version in openstack/requirements:
https://review.openstack.org/#/c/234026/

Anyway, we still need to fix the test to work with any version of
oslo.service, because the fix is needed for the Liberty branch, where we
cannot bump the version of the library as we can in master.

** Affects: neutron
 Importance: High
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: functional-tests gate-failure oslo

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: functional-tests gate-failure

** Tags added: oslo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505645

Title:
  neutron/tests/functional/test_server.py does not work for oslo.service
  < 0.10.0

Status in neutron:
  Confirmed

Bug description:
  Since https://review.openstack.org/#/c/233893/ was merged, the test
  fails if oslo.service < 0.10.0 is installed, as follows:

  Traceback (most recent call last):
File "neutron/tests/functional/test_server.py", line 286, in test_start
  workers=len(workers))
File "neutron/tests/functional/test_server.py", line 151, in 
_test_restart_service_on_sighup
  'size': expected_size}))
File "neutron/agent/linux/utils.py", line 346, in wait_until_true
  eventlet.sleep(sleep)
File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 
34, in sleep
  hub.switch()
File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, 
in switch
  return self.greenlet.switch()
  RuntimeError: Timed out waiting for file 
/t

[Yahoo-eng-team] [Bug 1505661] [NEW] RetryRequest failure on create_security_group_bulk

2015-10-13 Thread Oleg Bondarev
Public bug reported:

<163>Oct  5 09:10:29 node-203 neutron-server 2015-10-05 09:10:29.831 34082 
ERROR neutron.api.v2.resource [req-ea0e5480-e8ec-4014-9015-2199424f54bc ] 
create failed
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 131, in wrapper
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 448, in create
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource objs = 
obj_creator(request.context, body, **kwargs)
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 123, 
in create_security_group_bulk
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource 
security_group_rule)
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 954, 
in _create_bulk
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource {'resource': 
resource, 'item': item})
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 947, 
in _create_bulk
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource 
objects.append(obj_creator(context, item))
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 150, 
in create_security_group
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource 
self._ensure_default_security_group(context, tenant_id)
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 663, 
in _ensure_default_security_group
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource raise 
db_exc.RetryRequest(ex)
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource RetryRequest
2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource

<167>Oct  5 09:10:29 node-203 neutron-server 2015-10-05 09:10:29.820 34082 
DEBUG neutron.db.securitygroups_db [req-ea0e5480-e8ec-4014-9015-2199424f54bc ] 
Duplicate default security group 9839de92fb8049598f1c3ea8f32b9cf9 was not 
created _
ensure_default_security_group 
/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py:679
<163>Oct  5 09:10:29 node-203 neutron-server 2015-10-05 09:10:29.831 34082 
ERROR neutron.db.db_base_plugin_v2 [req-ea0e5480-e8ec-4014-9015-2199424f54bc ] 
An exception occurred while creating the security_group:{u'security_group': 
{'tenan
t_id': u'9839de92fb8049598f1c3ea8f32b9cf9', u'name': 
u'rally_neutronsecgrp_F44SF1uvTciIQJlu', u'description': u'Rally SG'}}

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505661

Title:
  RetryRequest failure on create_security_group_bulk

Status in neutron:
  New

Bug description:
  <163>Oct  5 09:10:29 node-203 neutron-server 2015-10-05 09:10:29.831 34082 
ERROR neutron.api.v2.resource [req-ea0e5480-e8ec-4014-9015-2199424f54bc ] 
create failed
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 131, in wrapper
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource return 
f(*args, **kwargs)
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 448, in create
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource objs = 
obj_creator(request.context, body, **kwargs)
  2015-10-05 09:10:29.831 34082 TRACE neutron.api.v2.resource   File 
"/usr/lib/pyt

[Yahoo-eng-team] [Bug 1505675] [NEW] Flaky tasks test glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time

2015-10-13 Thread Erno Kuvaja
Public bug reported:

We constantly get failures like:
2015-10-13 06:59:05.343 | 
==
2015-10-13 06:59:05.343 | FAIL: 
glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time
2015-10-13 06:59:05.344 | tags: worker-7
2015-10-13 06:59:05.344 | 
--
2015-10-13 06:59:05.344 | Traceback (most recent call last):
2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
2015-10-13 06:59:05.344 | return func(*args, **keywargs)
2015-10-13 06:59:05.344 |   File "glance/tests/unit/v2/test_tasks_resource.py", 
line 365, in test_create_with_live_time
2015-10-13 06:59:05.344 | self.assertEqual(CONF.task.task_time_to_live, 
task_live_time_hour)
2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
2015-10-13 06:59:05.344 | self.assertThat(observed, matcher, message)
2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
2015-10-13 06:59:05.345 | raise mismatch_error
2015-10-13 06:59:05.345 | testtools.matchers._impl.MismatchError: 48 != 47
2015-10-13 06:59:05.345 | Ran 2855 tests in 298.484s
2015-10-13 06:59:05.345 | FAILED (id=0, failures=1, skips=2)
2015-10-13 06:59:05.345 | error: testr failed (1)
2015-10-13 06:59:05.394 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-glance-python27/.tox/py27/bin/lockutils-wrapper 
python setup.py testr --slowest --testr-args='

This is caused when the second of the timestamp changes between the moments
created_at and updated_at are created.
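
A hedged illustration of the arithmetic (the timestamps and the hour computation
below are made up for illustration, not the actual test code), showing how a
one-second roll-over turns the expected 48 hours into 47:

    from datetime import datetime, timedelta

    created_at = datetime(2015, 10, 13, 6, 58, 59)   # second N
    updated_at = created_at + timedelta(seconds=1)    # second N+1 after roll-over
    expires_at = created_at + timedelta(hours=48)     # TTL anchored on created_at

    # measuring the TTL against updated_at loses the rolled-over second,
    # so the whole-hour count drops from 48 to 47
    live_time_hours = int((expires_at - updated_at).total_seconds() // 3600)
    print(live_time_hours)  # 47, not 48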

** Affects: glance
 Importance: High
 Assignee: Erno Kuvaja (jokke)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Erno Kuvaja (jokke)

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505675

Title:
  Flaky tasks test
  
glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time

Status in Glance:
  New

Bug description:
  We constantly get failures like:
  2015-10-13 06:59:05.343 | 
==
  2015-10-13 06:59:05.343 | FAIL: 
glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time
  2015-10-13 06:59:05.344 | tags: worker-7
  2015-10-13 06:59:05.344 | 
--
  2015-10-13 06:59:05.344 | Traceback (most recent call last):
  2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  2015-10-13 06:59:05.344 | return func(*args, **keywargs)
  2015-10-13 06:59:05.344 |   File 
"glance/tests/unit/v2/test_tasks_resource.py", line 365, in 
test_create_with_live_time
  2015-10-13 06:59:05.344 | self.assertEqual(CONF.task.task_time_to_live, 
task_live_time_hour)
  2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  2015-10-13 06:59:05.344 | self.assertThat(observed, matcher, message)
  2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  2015-10-13 06:59:05.345 | raise mismatch_error
  2015-10-13 06:59:05.345 | testtools.matchers._impl.MismatchError: 48 != 47
  2015-10-13 06:59:05.345 | Ran 2855 tests in 298.484s
  2015-10-13 06:59:05.345 | FAILED (id=0, failures=1, skips=2)
  2015-10-13 06:59:05.345 | error: testr failed (1)
  2015-10-13 06:59:05.394 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-glance-python27/.tox/py27/bin/lockutils-wrapper 
python setup.py testr --slowest --testr-args='

  This is caused when the second of the timestamp changes between the
  moments created_at and updated_at are created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505295] Re: Tox tests failing with AttributeError

2015-10-13 Thread Thierry Carrez
oslo.messaging 2.6.1 is out to supposedly fix this.

** No longer affects: oslo.messaging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505295

Title:
  Tox tests failing with AttributeError

Status in Cinder:
  In Progress
Status in Designate:
  Fix Committed
Status in neutron:
  Fix Committed
Status in OpenStack Compute (nova):
  In Progress
Status in openstack-ansible:
  In Progress

Bug description:
  Currently all tests run in Jenkins python27 and python34 are failing
  with an AttributeError, saying that "'str' has no attribute 'DEALER'",
  as well as an AssertionError on assert TRANSPORT is not None in
  cinder/rpc.py.

  An example of the full traceback of the failure can be found here:

   http://paste.openstack.org/show/476040/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1505295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505677] [NEW] oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-conductor log

2015-10-13 Thread Jesse Pretorius
Public bug reported:

In nova-conductor we're seeing the following error for stable/liberty:

2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, 
in object_class_action_versions
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, 
objname, objmethod, object_versions, args, kwargs)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 477, 
in object_class_action_versions
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher if 
isinstance(result, nova_object.NovaObject) else result)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
535, in obj_to_primitive
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
version_manifest)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
507, in obj_make_compatible_from_manifest
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher return 
self.obj_make_compatible(primitive, target_version)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/instance.py", line 1325, 
in obj_make_compatible
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
target_version)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/base.py", line 262, in 
obj_make_compatible
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
rel_versions = self.obj_relationships['objects']
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher KeyError: 
'objects'
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher

More details here:
http://logs.openstack.org/56/233756/8/check/gate-openstack-ansible-dsvm-commit/879f745/logs/aio1_nova_conductor_container-5ec67682/nova-conductor.log

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: openstack-ansible
 Importance: Critical
 Assignee: Jesse Pretorius (jesse-pretorius)
 Status: Confirmed

** Affects: oslo.versionedobjects
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: oslo.versionedobjects
   Importance: Undecided
   Status: New

** Description changed:

  In nova-conductor we're seeing the following error for stable/liberty:
  
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, 
in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, 
objname, objmethod, object_versions, args, kwargs)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/ma

[Yahoo-eng-team] [Bug 1505681] [NEW] Hyper-V creates VM memory file on the local disk for each VM

2015-10-13 Thread Claudiu Belu
Public bug reported:

For each started VM, Hyper-V creates a memory file on the local disk.
The memory file has the same size as the assigned VM memory.
For example, if an instance with 4 GB of RAM starts, a 4 GB file is
created.

This can cause scheduling issues, especially on hosts that have large
quantities of memory but not a very large local disk, resulting in
instances failing to spawn due to "insufficient local disk".

** Affects: nova
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505681

Title:
  Hyper-V creates VM memory file on the local disk for each VM

Status in OpenStack Compute (nova):
  New

Bug description:
  For each started VM, Hyper-V creates a memory file on the local disk.
  The memory file has the same size as the assigned VM memory.
  For example, if an instance with 4 GB of RAM starts, a 4 GB file
  is created.

  This can cause scheduling issues, especially on hosts that have large
  quantities of memory but not a very large local disk, resulting in
  instances failing to spawn due to "insufficient local disk".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505675] Re: Flaky tasks test glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time

2015-10-13 Thread Flavio Percoco
** Also affects: glance/liberty
   Importance: Undecided
   Status: New

** Changed in: glance
Milestone: None => mitaka-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505675

Title:
  Flaky tasks test
  
glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time

Status in Glance:
  In Progress
Status in Glance liberty series:
  New

Bug description:
  We constantly get failures like:
  2015-10-13 06:59:05.343 | 
==
  2015-10-13 06:59:05.343 | FAIL: 
glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time
  2015-10-13 06:59:05.344 | tags: worker-7
  2015-10-13 06:59:05.344 | 
--
  2015-10-13 06:59:05.344 | Traceback (most recent call last):
  2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  2015-10-13 06:59:05.344 | return func(*args, **keywargs)
  2015-10-13 06:59:05.344 |   File 
"glance/tests/unit/v2/test_tasks_resource.py", line 365, in 
test_create_with_live_time
  2015-10-13 06:59:05.344 | self.assertEqual(CONF.task.task_time_to_live, 
task_live_time_hour)
  2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  2015-10-13 06:59:05.344 | self.assertThat(observed, matcher, message)
  2015-10-13 06:59:05.344 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  2015-10-13 06:59:05.345 | raise mismatch_error
  2015-10-13 06:59:05.345 | testtools.matchers._impl.MismatchError: 48 != 47
  2015-10-13 06:59:05.345 | Ran 2855 tests in 298.484s
  2015-10-13 06:59:05.345 | FAILED (id=0, failures=1, skips=2)
  2015-10-13 06:59:05.345 | error: testr failed (1)
  2015-10-13 06:59:05.394 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-glance-python27/.tox/py27/bin/lockutils-wrapper 
python setup.py testr --slowest --testr-args='

  This is caused when the second of the timestamp changes between the
  moments created_at and updated_at are created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505701] [NEW] HEAD files are needed for conflict management

2015-10-13 Thread Ann Kamyshnikova
Public bug reported:

Some time ago we merged change https://review.openstack.org/#/c/227319/,
which removes the HEADS file. Validation of migration revisions using the HEADS
file was replaced with a pep8 check. This allows us to avoid the merge conflicts
that appeared every time a new migration was merged.

The problem is that the original idea of the HEAD file was not only to
validate revisions but also to keep outdated changes out of the merge
queue, which can be very important at the end of the cycle when a lot
of patches get approved.

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505701

Title:
  HEAD files are needed for conflict management

Status in neutron:
  New

Bug description:
  Some time ago we merged change
  https://review.openstack.org/#/c/227319/, which removes the HEADS file.
  Validation of migration revisions using the HEADS file was replaced with
  a pep8 check. This allows us to avoid the merge conflicts that appeared
  every time a new migration was merged.

  The problem is that the original idea of the HEAD file was not only to
  validate revisions but also to keep outdated changes out of the merge
  queue, which can be very important at the end of the cycle when a
  lot of patches get approved.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505700] [NEW] Floating IPs disassociation does not remove conntrack state with HA routers

2015-10-13 Thread Assaf Muller
Public bug reported:

Reproduction:
1) Create HA router, connect to internal/external networks
2) Create VM, assign floating IP
3) Ping floating IP
4) Disassociate floating IP

Actual result:
Ping continues

Expected result:
Ping halts

Root cause:
Floating IP disassociation on legacy routers deletes conntrack state; HA routers 
don't, because they're sentient beings with a sense of self that choose not to 
follow common convention or reason.
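
For reference, a hedged sketch of the kind of cleanup legacy routers effectively
perform on disassociation (the router ID and floating IP below are illustrative):

    # inside the router's namespace, drop conntrack entries that still point at
    # the disassociated floating IP so established flows stop being forwarded
    ip netns exec qrouter-<router-id> conntrack -D -d 172.24.4.10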

** Affects: neutron
 Importance: Medium
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505700

Title:
  Floating IPs disassociation does not remove conntrack state with HA
  routers

Status in neutron:
  In Progress

Bug description:
  Reproduction:
  1) Create HA router, connect to internal/external networks
  2) Create VM, assign floating IP
  3) Ping floating IP
  4) Disassociate floating IP

  Actual result:
  Ping continues

  Expected result:
  Ping halts

  Root cause:
  Floating IP disassociation on legacy routers deletes conntrack state; HA routers 
don't, because they're sentient beings with a sense of self that choose not to 
follow common convention or reason.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505710] [NEW] Wrong logging setup in replicator

2015-10-13 Thread Mike Fedosin
Public bug reported:

logging.setup accepts two parameters: the first one is the current CONF,
the second is the product name.
Currently the replicator does not call it this way.
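
A minimal sketch of the expected call, assuming the standard oslo.log API (the
product name string is only illustrative):

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF
    logging.register_options(CONF)
    # first argument is the config object, second is the product name
    logging.setup(CONF, 'glance')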

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505710

Title:
  Wrong logging setup in replicator

Status in Glance:
  New

Bug description:
  logging.setup accepts two parameters: the first one is the current CONF,
  the second is the product name.
  Currently the replicator does not call it this way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505710/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505708] [NEW] [Sahara] Page with node group configs long parsed

2015-10-13 Thread Evgeny Sikachev
Public bug reported:

ENVIRONMENT: devstack(13.10.2015)


STEPS TO REPRODUCE:
1. Navigate to "Node group templates"
2. Click on "Create template"
3. Select "Cloudera", "5.4.0"


RESULT: the page takes a long time to parse (size ~10MB)

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505708

Title:
  [Sahara] Page with node group configs long parsed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  ENVIRONMENT: devstack(13.10.2015)

  
  STEPS TO REPRODUCE:
  1. Navigate to "Node group templates"
  2. Click on "Create template"
  3. Select "Cloudera", "5.4.0"

  
  RESULT: the page takes a long time to parse (size ~10MB)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2015-10-13 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

** Changed in: glance
Milestone: None => liberty-rc3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Committed
Status in Glance:
  Fix Released
Status in heat:
  Fix Committed
Status in Ironic:
  Fix Committed
Status in neutron:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Committed

Bug description:
  As of https://review.openstack.org/#/c/217347/, oslo.db no longer has
  testresources or testscenarios in its requirements, so the next release of
  oslo.db will break several projects. Projects that use fixtures
  from oslo.db should add these to their own requirements if they need them.
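
  A hedged example of the kind of addition each affected project would need in
  its test-requirements.txt (the version floors below are illustrative):

      # used by the oslo.db test fixtures our unit tests import
      testresources>=0.2.4
      testscenarios>=0.4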

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in 
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423
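
  A quick way to check whether a project's test environment is affected is the
  small sketch below (an illustration only; the actual fix is simply adding
  testresources and testscenarios to the project's test requirements):

      import importlib

      # Probe for the two packages that oslo.db's test fixtures import indirectly.
      for pkg in ("testresources", "testscenarios"):
          try:
              importlib.import_module(pkg)
          except ImportError:
              print("%s is missing: add it to test-requirements.txt before "
                    "importing oslo_db.sqlalchemy.test_base" % pkg)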

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505715] [NEW] Functional tests falls, if global cfg.CONF tried to register the same option twice

2015-10-13 Thread Sergey Belous
Public bug reported:

There is a functional test that registers the "metadata_proxy_socket" option in 
the global cfg.CONF: 
https://github.com/openstack/neutron/blob/master/neutron/tests/functional/agent/l3/test_keepalived_state_change.py#L28
If we try to run another functional test that also registers the 
"metadata_proxy_socket" option (for example the new functional tests of the 
dhcp-agent: https://review.openstack.org/#/c/136834/) and we run all functional 
tests in the same scope, one of them will fail with the error 
"oslo_config.cfg.DuplicateOptError: duplicate option: metadata_proxy_socket": 
http://paste.openstack.org/show/476026/
That is because the test_keepalived_state_change test registers 
"metadata_proxy_socket" with the help message 'Location of Metadata Proxy UNIX 
domain socket', while the dhcp-agent functional test tries to register 
"metadata_proxy_socket" with the help message "Location for Metadata Proxy UNIX 
domain socket." (yes, the help messages are different), so the second test 
fails with DuplicateOptError.

** Affects: neutron
 Importance: Undecided
 Assignee: Sergey Belous (sbelous)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sergey Belous (sbelous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505715

Title:
  Functional tests falls, if global cfg.CONF tried to register the same
  option twice

Status in neutron:
  New

Bug description:
  There is a functional test that registers the "metadata_proxy_socket" option in 
the global cfg.CONF: 
https://github.com/openstack/neutron/blob/master/neutron/tests/functional/agent/l3/test_keepalived_state_change.py#L28
  If we try to run another functional test that also registers the 
"metadata_proxy_socket" option (for example the new functional tests of the 
dhcp-agent: https://review.openstack.org/#/c/136834/) and we run all functional 
tests in the same scope, one of them will fail with the error 
"oslo_config.cfg.DuplicateOptError: duplicate option: metadata_proxy_socket": 
http://paste.openstack.org/show/476026/
  That is because the test_keepalived_state_change test registers 
"metadata_proxy_socket" with the help message 'Location of Metadata Proxy UNIX 
domain socket', while the dhcp-agent functional test tries to register 
"metadata_proxy_socket" with the help message "Location for Metadata Proxy UNIX 
domain socket." (yes, the help messages are different), so the second test 
fails with DuplicateOptError.
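
  The behavior is easy to reproduce outside the test suite with a minimal,
  self-contained sketch (the option name below just mirrors the one from the
  report; oslo.config treats two registrations as duplicates when any
  attribute, including the help string, differs):

      from oslo_config import cfg

      CONF = cfg.CONF
      CONF.register_opts([cfg.StrOpt(
          'metadata_proxy_socket',
          help='Location of Metadata Proxy UNIX domain socket')])
      try:
          # Same name, slightly different help text -> DuplicateOptError.
          CONF.register_opts([cfg.StrOpt(
              'metadata_proxy_socket',
              help='Location for Metadata Proxy UNIX domain socket.')])
      except cfg.DuplicateOptError as exc:
          print(exc)  # duplicate option: metadata_proxy_socket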

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492034] [NEW] nova network FlatDHCP (kilo) on XenServer 6.5 ebtables rules

2015-10-13 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://ask.openstack.org/en/question/62349/openstack-xenserver-65
-network-legacy-flat-dhcp-not-working/

On every instance creation a new rule is prepended to ebtables that drops ARP 
packets from the bridge input.
 This causes a routing problem. Please see the details at the AskOpenStack link 
above.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
nova network FlatDHCP (kilo) on XenServer 6.5  ebtables rules 
https://bugs.launchpad.net/bugs/1492034
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492034] Re: nova network FlatDHCP (kilo) on XenServer 6.5 ebtables rules

2015-10-13 Thread Bob Ball
** Project changed: openstack-org => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492034

Title:
  nova network FlatDHCP (kilo) on XenServer 6.5  ebtables rules

Status in OpenStack Compute (nova):
  New

Bug description:
  https://ask.openstack.org/en/question/62349/openstack-xenserver-65
  -network-legacy-flat-dhcp-not-working/

  On every instance creation a new rule is prepended to ebtables that drops 
ARP packets from the bridge input.
   This causes a routing problem. Please see the details at the AskOpenStack 
link above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2015-10-13 Thread Michael McCune
** Also affects: sahara
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Committed
Status in Glance:
  Fix Released
Status in heat:
  Fix Committed
Status in Ironic:
  Fix Committed
Status in neutron:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Committed
Status in Sahara:
  In Progress

Bug description:
  As of https://review.openstack.org/#/c/217347/ oslo.db no longer has
  testresources or testscenarios in its requirements, so the next release of
  oslo.db will break several projects. Projects that use fixtures from oslo.db
  should add these packages to their own requirements if they need them.

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in 
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505356] Re: Can't upload images in Python 3.4

2015-10-13 Thread nikhil komawar
You can close this bug as it's not related to py34 (99% sure).

The invalid literal and the decoding errors usually happen during
hiccups in the connection while trying to upload data.

Here's the conversation I had with Hemanth who sees them a lot in his
cloud.

---

10:44:48 nikhil
rosmaita:  hemanthm: stevemar_ pointed out the error we observe in the upload 
process -- ValueError: invalid literal for int() with base 16: '' . 
http://logs.openstack.org/33/188033/7/check/gate-heat-dsvm-functional-orig-mysql/9eaf0e2/logs/screen-g-api.txt.gz?level=ERROR
 . I remember it to be an issue with corruption of local data. Can any of you 
confirm? (The error happens at the wsgi level that's being used by swift store.)

10:45:47 hemanthm
nikhil: from what I've seen it's dom0 timing out

10:46:13 hemanthm
not saying there can't be other reaons for it

10:46:19 nikhil
hemanthm: in this case, it's doing a straight upload from data local node

10:46:40 nikhil
hemanthm: so, guess it's the local node getting intermittent hickups?

10:46:59 hemanthm
nikhil: possible

10:47:33 hemanthm
essentially, from a glance perspective the upload hasn't finished yet and it 
tries to read data from the input  stream

10:48:17 hemanthm
and when it can't read further data, it throws that and barfs

---

The error is raised when eventlet.wsgi (under glance_store's swift store
driver's (py-swiftclient's) import modules) tries to decode the stream being
sent and finds an inconsistency in the data stream.

I looked at some of the links shared on IRC and could not find evidence
of this being related to py34. Though, if this happens more often and
people are not able to get their patches in, I am definitely willing to
take a much closer look.
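
A tiny, self-contained illustration of the error string seen above (this is
not glance code, just the failure mode: decoding raw image bytes with the
ascii codec fails on any byte >= 0x80):

    data = b'\xea\x8b\xcc'   # stand-in for a few bytes of compressed image data
    try:
        data.decode('ascii')
    except UnicodeDecodeError as exc:
        print(exc)  # 'ascii' codec can't decode byte 0xea in position 0: ...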


** Changed in: glance
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505356

Title:
  Can't upload images in Python 3.4

Status in Glance:
  Opinion
Status in python-openstackclient:
  New

Bug description:
  Trying to upload images via the OSC fails with: 'ascii' codec can't
  decode byte 0xea in position 0: ordinal not in range(128)

  2015-10-09 16:37:17.960 | ++ openstack --os-cloud=devstack-admin image create 
cirros-0.3.4-x86_64-uec-kernel --public --container-format aki --disk-format aki
  2015-10-09 16:37:17.960 | ++ grep ' id '
  2015-10-09 16:37:17.960 | ++ get_field 2
  2015-10-09 16:37:17.961 | ++ local data field
  2015-10-09 16:37:17.961 | ++ read data
  2015-10-09 16:37:20.712 | 'ascii' codec can't decode byte 0xea in position 0: 
ordinal not in range(128)
  2015-10-09 16:37:20.848 | + kernel_id=
  2015-10-09 16:37:20.848 | + '[' -n 
/opt/stack/new/devstack/files/images/cirros-0.3.4-x86_64-uec/cirros-0.3.4-x86_64-initrd
 ']'
  2015-10-09 16:37:20.849 | ++ openstack --os-cloud=devstack-admin image create 
cirros-0.3.4-x86_64-uec-ramdisk --public --container-format ari --disk-format 
ari
  2015-10-09 16:37:20.850 | ++ grep ' id '
  2015-10-09 16:37:20.850 | ++ get_field 2
  2015-10-09 16:37:20.851 | ++ local data field
  2015-10-09 16:37:20.851 | ++ read data
  2015-10-09 16:37:22.459 | 'ascii' codec can't decode byte 0x8b in position 1: 
ordinal not in range(128)
  2015-10-09 16:37:22.555 | + ramdisk_id=
  2015-10-09 16:37:22.555 | + openstack --os-cloud=devstack-admin image create 
cirros-0.3.4-x86_64-uec --public --container-format ami --disk-format ami
  2015-10-09 16:37:24.055 | 'ascii' codec can't decode byte 0xcc in position 
1032: ordinal not in range(128)

  
  
http://logs.openstack.org/33/188033/7/check/gate-heat-dsvm-functional-orig-mysql/9eaf0e2/logs/devstacklog.txt.gz#_2015-10-09_16_37_17_960
 > FWIW, the gate job installs the python 3 version of all the clients when 
building devstack.

  Also, noticed some failures in glance trying to upload an image:
  http://logs.openstack.org/33/188033/7/check/gate-heat-dsvm-functional-
  orig-mysql/9eaf0e2/logs/screen-g-api.txt.gz?level=ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457517] Re: Unable to boot from volume when flavor disk too small

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457517

Title:
  Unable to boot from volume when flavor disk too small

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Vivid:
  Fix Committed

Bug description:
  [Impact]

   * Without the backport, booting from a volume requires the flavor disk size
  to be larger than the volume size, which is wrong. This patch skips the
  flavor disk size check when booting from a volume.

  [Test Case]

   * 1. create a bootable volume
 2. boot from this bootable volume with a flavor that has disk size smaller 
than the volume size
 3. error should be reported complaining disk size too small
 4. apply this patch
 5. boot from that bootable volume with a flavor that has disk size smaller 
than the volume size again
 6. boot should succeed

  [Regression Potential]

   * none

  
  Version: 1:2015.1.0-0ubuntu1~cloud0 on Ubuntu 14.04

  I attempt to boot an instance from a volume:

  nova boot --nic net-id=[NET ID] --flavor v.512mb --block-device
  source=volume,dest=volume,id=[VOLUME
  ID],bus=virtio,device=vda,bootindex=0,shutdown=preserve vm

  This results in nova-api raising a FlavorDiskTooSmall exception in the
  "_check_requested_image" function in compute/api.py. However,
  according to [1], the root disk limit should not apply to volumes.

  [1] http://docs.openstack.org/admin-guide-cloud/content/customize-
  flavors.html

  Log (first line is debug output I added showing that it's looking at
  the image that the volume was created from):

  2015-05-21 10:28:00.586 25835 INFO nova.compute.api 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] image: {'min_disk': 0, 'status': 
'active', 'min_ram': 0, 'properties': {u'container_format': u'bare', 
u'min_ram': u'0', u'disk_format': u'qcow2', u'image_name': u'Ubuntu 14.04 
64-bit', u'image_id': u'cf0dffef-30ef-4032-add0-516e88048d85', 
u'libvirt_cpu_mode': u'host-passthrough', u'checksum': 
u'76a965427d2866f006ddd2aac66ed5b9', u'min_disk': u'0', u'size': u'255524864'}, 
'size': 21474836480}
  2015-05-21 10:28:00.587 25835 INFO nova.api.openstack.wsgi 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] HTTP exception thrown: Flavor's disk is 
too small for requested image.

  Temporary solution: I have a special flavor for volume-backed instances, so I 
just set the root disk on those to 0, but this doesn't work if volumes are used 
with other flavors.
  Reproduce: create flavor with 1 GB root disk size, then try to boot an 
instance from a volume created from an image that is larger than 1 GB.
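
  A minimal sketch of the idea in the [Impact] section (illustrative only, not
  the actual nova patch): the root-disk check should simply be skipped for
  volume-backed boots.

      def check_root_disk(flavor_root_gb, image_size_gb, is_volume_backed):
          # The volume carries its own size, so the flavor's root disk limit
          # does not apply to volume-backed instances.
          if is_volume_backed:
              return
          if image_size_gb > flavor_root_gb:
              raise ValueError("Flavor's disk is too small for requested image.")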

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482699] Re: glance requests from nova fail if there are too many endpoints in the service catalog

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482699

Title:
  glance requests from nova fail if there are too many endpoints in the
  service catalog

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Nova sends the entire serialized service catalog in the http header to
  glance requests:

  https://github.com/openstack/nova/blob/icehouse-
  eol/nova/image/glance.py#L136

  If you have a lot of endpoints in your service catalog this can make
  glance fail with "400 Header Line TooLong".

  Per bknudson: "Any service using the auth_token middleware has no use
  for the x-service-catalog header. All that auth_token middleware uses
  is x-auth-token. The auth_token middleware will actually strip the x
  -service-catalog from the request before it sends the request on to
  the rest of the pipeline, so the application will never see it."

  If glance needs the service catalog it will get it from keystone when
  it auths the tokens, so nova shouldn't be sending this.
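
  A minimal sketch of the fix direction (the header name is taken from the
  report; the helper below is illustrative, not nova's actual glance client
  code):

      def glance_request_headers(context_headers):
          # auth_token middleware only needs X-Auth-Token, so drop the
          # potentially huge serialized catalog before calling glance.
          headers = dict(context_headers)
          headers.pop('x-service-catalog', None)
          return headers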

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472946] Re: _poll_shelved_instances periodic task is offloading instances even if shelved_offload_time is -1

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472946

Title:
  _poll_shelved_instances periodic task is offloading instances even if
  shelved_offload_time is -1

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Invalid
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  shelved_offload_time = -1 means the instance should remain in the SHELVED 
state until the user unshelves it.
  The current behavior of the _poll_shelved_instances periodic task does not 
take shelved_offload_time = -1 into account and offloads all instances that 
are in the SHELVED state (a minimal guard sketch follows the reproduction 
steps below).

  
  Steps to reproduce:
  1. set shelved_offload_time to -1 and restart nova-compute service.
  2. create instance and shelve it using command, "nova shelve "
  3. verify instance is in SHELVED state and instance files are present in 
instance path.
  4. wait until  _poll_shelved_instances periodic task executes (default is 
3600 seconds, you can change it to 120 seconds)
  5. Once _poll_shelved_instances periodic task is executed, instance state 
changes to SHELVED_OFFLOADED and instance files are deleted from instance path.
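
  A minimal guard sketch (illustrative names; the config option is the one
  described above, where -1 means "never offload"):

      def poll_shelved_instances(shelved_offload_time, shelved_instances, offload):
          if shelved_offload_time < 0:
              # Operator asked to keep instances SHELVED until unshelve.
              return
          for instance in shelved_instances:
              offload(instance)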

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440762] Re: Rebuild an instance with attached volume fails

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440762

Title:
  Rebuild an instance with attached volume fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When trying to rebuild an instance with an attached volume, it fails with
  the following errors:

  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher 
libvirtError: Failed to terminate process 22913 with SIGKILL: Device or 
resource busy
  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher
  <180>Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host 
stats, it is trying to get disk info for instance-0003, but the backing 
volume block device was removed by concurrent operations such as resize. Error: 
No volume Block Device Mapping at path: 
/dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1
  <182>Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event)

  The full log of rebuild process is here:
  http://paste.openstack.org/show/166892/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466056] Re: Hyper-V: serial ports issue on Windows Threshold

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466056

Title:
  Hyper-V: serial ports issue on Windows Threshold

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
   On Windows Threshold, a new WMI class was introduced, targeting  VM
  serial ports.

   For this reason, attempting to retrieve serial port connections fails
  on Windows Threshold.

   This can easily be fixed by using the right WMI class when attempting
  to retrieve VM serial ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484738] Re: keyerror when refreshing instance security groups

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484738

Title:
  keyerror when refreshing instance security groups

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  On a clean kilo install using source security groups I am seeing the
  following trace on boot and delete


  a2413f7] Deallocating network for instance _deallocate_network 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2098
  2015-08-14 09:46:06.688 11618 ERROR oslo_messaging.rpc.dispatcher 
[req-b8f44d34-96b2-4e40-ac22-15ccc6e44e59 - - - - -] Exception during message 
handling: 'metadata'
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6772, in 
refresh_instance_security_rules
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher return 
self.manager.refresh_instance_security_rules(ctxt, instance)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 434, in 
decorated_function
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher args = 
(_load_instance(args[0]),) + args[1:]
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 425, in 
_load_instance
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
expected_attrs=metas)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 506, in 
_from_db_object
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
instance['metadata'] = utils.instance_meta(db_inst)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 817, in instance_meta
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher if 
isinstance(instance['metadata'], dict):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher KeyError: 
'metadata'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475356] Re: Serializer reports wrong supported version

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475356

Title:
  Serializer reports wrong supported version

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in oslo.versionedobjects:
  Fix Released

Bug description:
  The VersionedObjectSerializer is what calls object_backport in our
  indirection_api if we encounter an unsupported version. In order for
  this to work properly, we need to report the top-level object version
  that we're trying to deserialize, not the one we actually encountered.
  We depend on the conductor's object relationship mappings to guide us
  to a fully-supported object tree.

  Currently, the serializer is reporting the object that failed to
  deserialize, not the top.
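
  A hedged, self-contained sketch of the idea (the names below are illustrative
  and are not the oslo.versionedobjects API): when deserialization fails on a
  nested object, the backport request should carry the top-level object's
  version so the conductor can map the whole tree.

      class IncompatibleVersion(Exception):
          def __init__(self, version):
              self.version = version

      SUPPORTED = {'Instance': '1.20', 'Flavor': '1.1'}

      def deserialize(primitive):
          for child in primitive.get('children', []):
              deserialize(child)
          if SUPPORTED.get(primitive['name']) != primitive['version']:
              raise IncompatibleVersion(primitive['version'])
          return primitive

      def deserialize_with_backport(primitive, backport):
          try:
              return deserialize(primitive)
          except IncompatibleVersion:
              # Report the version of the object we were handed, not the one
              # that actually failed deeper in the tree.
              return backport(primitive, primitive['version'])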

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462128] Re: Cells: DBReferenceError possible deleting unscheduled instance

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462128

Title:
  Cells: DBReferenceError possible deleting unscheduled instance

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  In cells, when a delete request comes in before an instance has been
  scheduled, cells will broadcast a "delete everywhere" to delete the
  instance in all cells. During the delete code path in compute,
  instance.save() attempts to save some state along the way before
  instance.destroy(). If there isn't an instance record in the child
  database, the FK constraint to save state (for example, flavor) will
  fail and a DBReferenceError will be raised by oslo.db. In cells,
  InstanceNotFound is caught and handled for these scenarios. Since a FK
  constraint failure does mean the instance doesn't exist, it would make
  sense to raise InstanceNotFound for DBReferenceError.
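
  A hedged sketch of that handling (assumption: this runs inside a nova tree
  where both exception classes are importable; it is not the actual patch):

      from oslo_db.exception import DBReferenceError
      from nova.exception import InstanceNotFound

      def save_instance(instance):
          try:
              instance.save()
          except DBReferenceError:
              # The FK target row is gone, so surface it as a missing instance.
              raise InstanceNotFound(instance_id=instance.uuid)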

  Logstash query:

  message:"DBReferenceError" AND build_name:"check-tempest-dsvm-cells"

  Example trace:

  http://logs.openstack.org/49/188249/2/check/check-tempest-dsvm-
  cells/a74468a/logs/screen-n-cell-child.txt.gz#_2015-06-04_07_22_45_806

  2015-06-04 07:22:45.806 ERROR nova.cells.messaging 
[req-388635a4-8e75-4986-af34-99e6eca82f3b ServersNegativeTestJSON-34383368 
ServersNegativeTestJSON-350156579] Error processing message locally
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging Traceback (most 
recent call last):
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/cells/messaging.py", line 201, in _process_locally
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/cells/messaging.py", line 1277, in 
_process_message_locally
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging return 
fn(message, **message.method_kwargs)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/cells/messaging.py", line 1090, in 
instance_delete_everywhere
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging 
self.compute_api.delete(message.ctxt, instance)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 227, in wrapped
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging return 
func(self, context, target, *args, **kwargs)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 216, in inner
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging return 
function(self, context, instance, *args, **kwargs)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 244, in _wrapped
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging return fn(self, 
context, instance, *args, **kwargs)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 197, in inner
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging return f(self, 
context, instance, *args, **kw)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 1820, in delete
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging 
self._delete_instance(context, instance)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 1810, in _delete_instance
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging 
task_state=task_states.DELETING)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 1622, in _delete
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging quotas.rollback()
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging 
six.reraise(self.type_, self.value, self.tb)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 1538, in _delete
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging instance.save()
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/objects/base.py", line 205, in wrapper
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging return fn(self, 
*args, **kwargs)
  2015-06-04 07:22:45.806 16038 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 826, in save
  2015-0

[Yahoo-eng-team] [Bug 1463044] Re: Hyper-V: the driver fails to initialize on Windows Server 2008 R2

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463044

Title:
  Hyper-V: the driver fails to initialize on Windows Server 2008 R2

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The Hyper-V driver uses the Microsoft\Windows\SMB WMI namespace in
  order to handle SMB shares. The issue is that this namespace is not
  available on Windows versions prior to Windows Server 2012.

  For this reason, the Hyper-V driver fails to initialize on Windows
  Server 2008 R2.

  Trace: http://paste.openstack.org/show/271422/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461081] Re: SMBFS volume attach race condition

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461081

Title:
  SMBFS volume attach race condition

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When the SMBFS volume backend is used and a volume is detached, the
  corresponding SMB share is detached if it is no longer used.

  This can cause issues if, at the same time, a different volume stored
  on the same share is being attached, as the corresponding disk image will
  not be available.

  This affects the Libvirt driver as well as the Hyper-V one. In case of
  Hyper-V, the issue can easily be fixed by using the share path as a
  lock when performing attach/detach volume operations.

  Trace: http://paste.openstack.org/show/256096/
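
  A minimal sketch of the share-path lock mentioned above (illustrative only,
  not the actual driver code), using oslo.concurrency so attach and detach
  operations on the same share serialize against each other:

      from oslo_concurrency import lockutils

      def attach_volume(share_path, do_attach):
          with lockutils.lock(share_path):
              do_attach()

      def detach_volume(share_path, do_detach, unmount_if_unused):
          with lockutils.lock(share_path):
              do_detach()
              unmount_if_unused(share_path)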

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461065] Re: Security groups may break

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461065

Title:
  Security groups may break

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/171e5f8b127610d93a230a6f692d8fd5ea0d0301
  converted instance dicts to objects. There are cases in the security
  groups code where these should still be dicts. This causes security
  group updates to break.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456064] Re: VMWare instance missing ip address when using config drive

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456064

Title:
  VMWare instance missing ip address when using config drive

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This has the same root cause as the race bug: https://bugs.launchpad.net/nova/+bug/1249065
  http://status.openstack.org/elastic-recheck/index.html#1249065

  When the VMware driver uses a config drive, the IP address may not get
  injected because the instance nw_info cache is missing.

  Here is the related code in the nova VMware driver and in the config drive code:

  
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L671
  inst_md = instance_metadata.InstanceMetadata(instance,
   content=injected_files,
   extra_md=extra_md)

  https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L146
  # get network info, and the rendered network template
  if network_info is None:
  network_info = instance.info_cache.network_info

  In vmwareapi/vmops.py the network_info is not passed to the instance metadata 
API, so the metadata API uses instance.info_cache.network_info as the network 
info. But when instance.info_cache.network_info is missing, the network info 
will be empty, too.
  This is why VMware instances sometimes do not get an IP address injected when 
using a config drive.
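
  A toy, self-contained sketch of the failure mode (illustrative names only):
  if the caller does not pass network_info and the info_cache is empty, the
  metadata ends up with no network data, so the obvious fix direction is to
  pass the network_info the driver already holds.

      def build_metadata(instance, network_info=None):
          if network_info is None:
              network_info = instance.get('info_cache', {}).get('network_info', [])
          return {'network_info': network_info}

      print(build_metadata({'info_cache': {}}))                        # {'network_info': []}
      print(build_metadata({'info_cache': {}}, network_info=['vif0'])) # IP data preserved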

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436187] Re: 'AttributeError' is getting raised while unshelving instance booted from volume

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436187

Title:
  'AttributeError' is getting raised while unshelving instance booted
  from volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  An 'AttributeError' exception is raised while unshelving an instance
  that was booted from a volume (a guard sketch follows the reproduction
  steps below).

  Steps to reproduce:
  
  1. Create a bootable volume
  2. Create an instance from the bootable volume
  3. Shelve the instance
  4. Try to unshelve the instance
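
  Based on the traceback below, a hedged guard sketch (illustrative only; for a
  volume-backed instance image_meta can be None, so dict-style access blows up):

      def disk_bus_for_device_type(image_meta, key):
          image_meta = image_meta or {}
          return image_meta.get('properties', {}).get(key)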

  Error log on n-cpu service:
  ---

  2015-03-24 23:32:13.646 ERROR nova.compute.manager 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] Instance failed to spawn
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] Traceback (most recent call last):
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4368, in _unshelve_instance
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] block_device_info=block_device_info)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2342, in spawn
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] block_device_info)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/blockinfo.py", line 622, in get_disk_info
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] instance=instance)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/blockinfo.py", line 232, in 
get_disk_bus_for_device_type
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] disk_bus = 
image_meta.get('properties', {}).get(key)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] AttributeError: 'NoneType' object has no 
attribute 'get'
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]
  2015-03-24 23:32:13.649 DEBUG oslo_concurrency.lockutils 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] Lock 
"183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a" released by "do_unshelve_instance" :: 
held 1.182s from (pid=11271) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:456
  2015-03-24 23:32:13.650 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
9c227430eaf34c64b94f36661ef2ec8f from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.650 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
7329362a2cab48968ce31760bcac8628. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.696 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
d2388c787036413a9bcf95f55e38027b from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.696 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
c466ff4a11574ff3a1032e85f3d9bd87. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.746 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
0367b08fd7dd428ab8ef494bb42f1499 from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.746 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
b7395e5e66da4a47ba4132568713d4c4. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.789 DEBUG nova.openstack.common.periodic_task 
[req-db2bb34f-1f3d-4ac0-99d0-f6fe78f8393d None None] Running periodic task 
ComputeManager._poll_volume_usage from (pid=11271) run_periodic_tasks 
/opt/stack/nova/nova/openstack/common/periodic_task.py:219
  2015-03-24 23:32:13.789 DEBUG nova.openstack.common.loopin

[Yahoo-eng-team] [Bug 1459491] Re: Unexpected result when create server booted from volume

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459491

Title:
  Unexpected result when create server booted from volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Environment:
  flavor: 1 --- 1 GB disk
  volume: aaa --- 2 GB, bootable, created from image bbb
  image: bbb --- 13 MB

  When booting from the volume like this:
  nova boot --flavor 1 --nic net-id=xxx --boot-volume aaa
  it raises an error: FlavorDiskTooSmall

  When booting from a volume like this:
  nova boot --flavor 1 --nic net-id=xxx --block-device id=bbb,source=image,dest=volume,size=2,bootindex=0 test2
  it goes well.

  But the second case is equivalent to the first one, so either the first or
  the second behavior is unexpected.

  I think the second one should also raise a 'FlavorDiskTooSmall' error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454451] Re: simultaneous boot of multiple instances leads to cpu pinning overlap

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454451

Title:
  simultaneous boot of multiple instances leads to cpu pinning overlap

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  I'm running into an issue with kilo-3 that I think is present in
  current trunk.  Basically it results in multiple instances (with
  dedicated cpus) being pinned to the same physical cpus.

  I think there is a race between the claimed CPUs of an instance being
  persisted to the DB, and the resource audit scanning the DB for
  instances and subtracting pinned CPUs from the list of available CPUs.

  The problem only shows up when the following sequence happens:
  1) instance A (with dedicated cpus) boots on a compute node
  2) resource audit runs on that compute node
  3) instance B (with dedicated cpus) boots on the same compute node

  So you need to either be booting many instances, or limiting the valid
  compute nodes (host aggregate or server groups), or have a small
  cluster in order to hit this.

  The nitty-gritty view looks like this:

  When booting up an instance we hold the COMPUTE_RESOURCE_SEMAPHORE in
  compute.resource_tracker.ResourceTracker.instance_claim() and that
  covers updating the resource usage on the compute node. But we don't
  persist the instance numa topology to the database until after
  instance_claim() returns, in
  compute.manager.ComputeManager._build_instance().  Note that this is
  done *after* we've given up the semaphore, so there is no longer any
  sort of ordering guarantee.

  compute.resource_tracker.ResourceTracker.update_available_resource()
  then acquires COMPUTE_RESOURCE_SEMAPHORE, queries the database for a
  list of instances and uses that to calculate a new view of what
  resources are available. If the numa topology of the most recent
  instance hasn't been persisted yet, then the new view of resources
  won't include any pCPUs pinned by that instance.

  compute.manager.ComputeManager._build_instance() runs for the next
  instance and based on the new view of available resources it allocates
  the same pCPU(s) used by the earlier instance. Boom, overlapping
  pinned pCPUs.

  
  Lastly, the same bug applies to the 
compute.manager.ComputeManager.rebuild_instance() case.  It uses the same 
pattern of doing the claim and then updating the instance numa topology after 
releasing the semaphore.
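
  A hedged sketch of the ordering fix implied above (illustrative names, not
  nova's actual ResourceTracker API): persist the claimed pinning before the
  resource semaphore is released, so the next audit already sees it.

      import threading

      COMPUTE_RESOURCE_SEMAPHORE = threading.Lock()

      def instance_claim(instance, pick_free_pcpus, persist):
          with COMPUTE_RESOURCE_SEMAPHORE:
              instance['pinned_pcpus'] = pick_free_pcpus()
              # Save inside the lock, not after it is released, so the audit
              # cannot run between the claim and the persisted topology.
              persist(instance)
          return instance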

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459917] Re: Can't boot with bdm when use image in local

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459917

Title:
  Can't boot with bdm when use image in local

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When booting a VM with a BDM like this:

  nova  boot --flavor 1 --nic net-id= --image 
  --block-device source=image,dest=local,id=,size=2,bootindex=0 test

  it raises an error: Mapping image to local is not supported.

  But in the nova code it says:

    # if this bdm is generated from --image ,then
     # source_type = image and destination_type = local is allowed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454149] Re: self._event is None that causes "AttributeError: 'NoneType' object has no attribute 'pop'"

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454149

Title:
  self._event is None that causes "AttributeError: 'NoneType' object has
  no attribute 'pop'"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Here is the log:
  2015-05-11 17:17:50.655 14671 ERROR nova.compute.manager 
[req-ed95e1f2-11d3-404c-ac78-8c1d5e24bfbf ff514b152688486b9dd9752b3d67fa78 
689d7e1036e64e0fbf7fd8b4f51d2e57 - - -] [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] Setting instance vm_state to ERROR
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] Traceback (most recent call last):
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2784, in 
do_terminate_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] self._delete_instance(context, 
instance, bdms, quotas)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] rv = f(*args, **kwargs)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2753, in 
_delete_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] quotas.rollback()
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] six.reraise(self.type_, self.value, 
self.tb)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2716, in 
_delete_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] events = 
self.instance_events.clear_events_for_instance(instance)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 562, in 
clear_events_for_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] return _clear_events()
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in 
inner
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] return f(*args, **kwargs)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 561, in 
_clear_events
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] return 
self._events.pop(instance.uuid, {})
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] AttributeError: 'NoneType' object has no 
attribute 'pop'
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]

  Is there any way to avoid self._events being None? This might be
  a bug.
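
  A hedged guard sketch (names follow the traceback above; not the actual
  patch): treat a None event dict as "no pending events" instead of calling
  .pop() on it.

      def clear_events_for_instance(events, instance_uuid):
          if events is None:
              return {}
          return events.pop(instance_uuid, {})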

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392527] Re: [OSSA 2015-017] Deleting instance while resize instance is running leads to unuseable compute nodes (CVE-2015-3280)

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392527

Title:
  [OSSA 2015-017] Deleting instance while resize instance is running
  leads to unuseable compute nodes (CVE-2015-3280)

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Steps to reproduce:
  1) Create a new instance,waiting until it’s status goes to ACTIVE state
  2) Call resize API
  3) Delete the instance immediately after the task_state is “resize_migrated” 
or vm_state is “resized”
  4) Repeat 1 through 3 in a loop

  I have kept the attached program running for 4 hours; all instances
  created are deleted (nova list returns an empty list) but I noticed that
  instance directories with a name ending in “_resize” are not
  deleted from the instance path of the compute nodes (mainly from the
  source compute nodes where the instance was running before resize). If
  I keep this program running for couple of more hours (depending on the
  number of compute nodes), then it completely uses the entire disk of
  the compute nodes (based on the disk_allocation_ratio parameter
  value). Later, nova scheduler doesn’t select these compute nodes for
  launching new vms and starts reporting error "No valid hosts found".

  Note: Even the periodic tasks doesn't cleanup these orphan instance
  directories from the instance path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387543] Re: [OSSA 2015-015] Resize/delete combo allows to overload nova-compute (CVE-2015-3241)

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387543

Title:
  [OSSA 2015-015] Resize/delete combo allows to overload nova-compute
  (CVE-2015-3241)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  If a user creates an instance, resizes it to a larger flavor and then
  deletes that instance, the migration process does not stop. This allows
  the user to repeat the operation many times, overloading the affected
  compute nodes beyond the user's quota.

  Affected installations: the most drastic effect happens with 'raw-disk'
  instances without live migration. The whole raw disk (full size of the
  flavor) is copied during migration.

  If the user deletes the instance, the rsync/scp process is not terminated
  and keeps the disk backing file open regardless of its removal by nova-compute.

  Because rsync/scp of large disks is rather slow, it gives a malicious
  user enough time to repeat that operation a few hundred times, causing
  disk space depletion on the compute nodes, a huge impact on the management
  network, and so on.

  Proposed solution: abort the migration (kill rsync/scp) as soon as the
  instance is deleted.

  Affected installations: Havana, Icehouse, probably Juno (not tested).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423772] Re: During live-migration Nova expects identical IQN from attached volume(s)

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423772

Title:
  During live-migration Nova expects identical IQN from attached
  volume(s)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When attempting a live-migration of an instance with one or more attached
  volumes, Nova expects the IQN to be exactly the same when it attaches the
  volume(s) to the new host. This conflicts with Cinder settings such as
  "hp3par_iscsi_ips", which allow multiple IPs for the purpose of load
  balancing.

  Example:
  An instance on Host A has a volume attached at 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  An attempt is made to migrate the instance to Host B.
  Cinder sends the request to attach the volume to the new host.
  Cinder gives the new host 
"/dev/disk/by-path/ip-10.10.120.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  Nova looks for the volume on the new host at the old location 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"

  The following error appears in n-cpu in this case:

  2015-02-19 17:09:05.574 ERROR nova.virt.libvirt.driver [-] [instance: 
b6fa616f-4e78-42b1-a747-9d081a4701df] Live Migration failure: Failed to open 
file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5426, in 
_live_migration
  recover_method(context, instance, dest, block_migration)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5393, in 
_live_migration
  CONF.libvirt.live_migration_bandwidth)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, 
in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, 
in proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, 
in execute
  six.reraise(c, e, tb)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, 
in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in 
migrateToURI2
  if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
  libvirtError: Failed to open file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Removing descriptor: 3

  
  When looking at the nova DB, this is the state of block_device_mapping prior 
to the migration attempt:

  mysql> select * from block_device_mapping where 
instance_uuid='b6fa616f-4e78-42b1-a747-9d081a4701df' and deleted=0;
  
+-+-+++-+---+-+--+-+---+---+--+-+-+--+--+-+--++--+
  | created_at  | updated_at  | deleted_at | id | device_name | 
delete_on_termination | snapshot_id | volume_id| 
volume_size | no_device | connection_info   





   

[Yahoo-eng-team] [Bug 1288039] Re: live-migration cinder boot volume target_lun id incorrect

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288039

Title:
  live-migration cinder boot volume target_lun id incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When nova goes to clean up in _post_live_migration on the source host, the
  block_device_mapping has incorrect data.

  I can reproduce this 100% of the time with a cinder iSCSI backend,
  such as 3PAR.

  This is a Fresh install on 2 new servers with no attached storage from Cinder 
and no VMs.
  I create a cinder volume from an image. 
  I create a VM booted from that Cinder volume.  That vm shows up on host1 with 
a LUN id of 0.
  I live migrate that vm.   The vm moves to host 2 and has a LUN id of 0.   The 
LUN on host1 is now gone.

  I create another cinder volume from image.
  I create another VM booted from the 2nd cinder volume.  The vm shows up on 
host1 with a LUN id of 0.  
  I live migrate that vm.  The VM moves to host 2 and has a LUN id of 1.  
  _post_live_migrate is called on host1 to clean up, and gets failures, because 
it's asking cinder to delete the volume
  on host1 with a target_lun id of 1, which doesn't exist.  It's supposed to be 
asking cinder to detach LUN 0.

  First migrate
  HOST2
  2014-03-04 19:02:07.870 WARNING nova.compute.manager 
[req-24521cb1-8719-4bc5-b488-73a4980d7110 admin admin] pre_live_migrate: 
{'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 
'mount_device': u'vda', 'connection_info': {u'd
  river_volume_type': u'iscsi', 'serial': 
u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260'
  , u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 
'device_type': u'disk', 'delete_on_termination': False}]}
  HOST1
  2014-03-04 19:02:16.775 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi',
   u'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': 
{u'target_discovered': True, u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}



  Second Migration
  This is in _post_live_migration on the host1.  It calls libvirt's driver.py 
post_live_migration with the volume information returned from the new volume on 
host2, hence the target_lun = 1.   It should be calling libvirt's driver.py to 
clean up the original volume on the source host, which has a target_lun = 0.
  2014-03-04 19:24:51.626 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi', u'serial': 
u'f0087595-804d-4bdb-9bad-0da2166313ea', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 1, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495388] Re: The instance hostname didn't match the RFC 952 and 1123's definition

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495388

Title:
  The instance hostname didn't match the RFC 952 and 1123's definition

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The instance hostname is converted from the instance's name. There is a
  method used to do that:
  https://github.com/openstack/nova/blob/master/nova/utils.py#L774

  But it looks like this method doesn't cover all the cases described in the
  RFCs.

  For example, if the hostname is just one character, like 'A', this method
  returns 'A' as well; that isn't allowed by the RFC.

  Also, the hostname is updated in the wrong place:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L641
  It just updates the instance db entry again after the instance entry has
  been created. We could populate the hostname before instance creation and
  save one db operation.
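
  For reference, a self-contained sketch of an RFC 952/1123 style sanitizer
  that also covers the single-character case (an illustration only, not nova's
  actual utils.sanitize_hostname):

    import re

    def sanitize_hostname(name, default='host'):
        name = name.lower()
        name = re.sub(r'[^a-z0-9-]+', '-', name)  # replace invalid characters
        name = name.strip('-')                    # no leading/trailing hyphen
        if len(name) < 2:                         # RFC 952 forbids 1-char names
            name = default                        # a real fix might pad instead
        return name[:63]                          # DNS labels max out at 63 chars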

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493694] Re: On compute restart, quotas are not updated when instance vm_state is 'DELETED' but instance is not destroyed in db

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493694

Title:
  On compute restart, quotas are not updated when instance vm_state is
  'DELETED' but instance is not destroyed in db

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This is a timing issue and can occur if an instance delete call reaches the
  _delete_instance method in nova/compute/manager.py and nova-compute crashes
  after setting the instance vm_state to 'DELETED' but before destroying the
  instance in the db.

  Now, on restarting the nova-compute service, _init_instance checks whether
  the instance vm_state is 'DELETED'; if so, it calls
  _complete_partial_deletion, which destroys the instance in the db but then
  raises "ValueError: Circular reference detected", and the quota is never
  updated for that instance, which is not as expected.

  Steps to reproduce:
  1) Put a break point in nova/compute/manager.py module in _delete_instance 
method, just after updating instance vm_state to 'DELETED' but before 
destroying instance in db.
  2) Create instance and wait until instance vm_state become 'ACTIVE'.
  $ nova boot --image  --flavor  

  3) Send request to delete instance.
  $ nova delete 

  4) When delete request reaches to break point in nova-compute, make sure 
instance vm_state is marked as 'DELETED' and stop the nova-compute service.
  5) Restart the nova-compute service; in the _init_instance call the error below 
(ValueError: Circular reference detected) is raised, the instance is marked as 
deleted in the db, but the quota for that instance is never updated.

  2015-09-08 00:36:34.133 ERROR nova.compute.manager 
[req-3222b8a4-0542-48cf-a2e1-c92e5fd91e5e None None] [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Failed to complete a deletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Traceback (most recent call last):
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/opt/stack/nova/nova/compute/manager.py", line 952, in _init_instance
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] 
self._complete_partial_deletion(context, instance)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/opt/stack/nova/nova/compute/manager.py", line 879, in _complete_partial_d
  eletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] bdms = 
objects.BlockDeviceMappingList.get_by_instance_uuid(
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", lin
  e 197, in wrapper
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] ctxt, self, fn.__name__, args, kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 246, in object_action
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] objmethod=objmethod, args=args, 
kwargs=kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=self.retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] timeout=timeout, retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 431, in send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 399, in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] msg = rpc_common.serialize_msg(msg)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9

[Yahoo-eng-team] [Bug 1491511] Re: Behavior change with latest nova paste config

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491511

Title:
  Behavior change with latest nova paste config

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  http://logs.openstack.org/55/219655/1/check/gate-shade-dsvm-
  functional-nova/1154770/console.html#_2015-09-02_12_10_56_113

  This started failing about 12 hours ago. Looking at it with Sean, we
  think it's because it actually never worked, but nova was failing
  silently before. It's now throwing an error, which, while more correct
  (you know you didn't delete something), is a behavior change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494653] Re: Remove unnecessary 'context' parameter from quotas reserve method call

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1494653

Title:
  Remove unnecessary 'context' parameter from quotas reserve method call

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  In patch [1] 'context' parameter was removed from quota-related remotable 
method signatures.
  In patch [2] use of 'context' parameter was removed from quota-related 
remotable method calls.

  Still there are some occurrences where the 'context' parameter is passed
  to quotas.reserve, which leads to the error "ValueError:
  Circular reference detected".
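
  A minimal sketch of the call-site change being described (the quota delta
  keyword shown is an assumption used only for illustration):

    def update_quota_on_delete(quotas, instance):
        # old style -- passing the RequestContext puts a non-serializable
        # object into the remoted arguments and triggers the
        # "Circular reference detected" error:
        #     quotas.reserve(context, project_id=instance.project_id,
        #                    instances=-1)
        # new style -- remotable quota methods take no explicit context:
        quotas.reserve(project_id=instance.project_id, instances=-1)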

  For example: while restarting nova-compute, if there is an instance
  whose vm_state is 'DELETED' but that is not marked as deleted in the db, 
the _init_instance call raises the error below.

  2015-09-08 00:36:34.133 ERROR nova.compute.manager 
[req-3222b8a4-0542-48cf-a2e1-c92e5fd91e5e None None] [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Failed to complete a deletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Traceback (most recent call last):
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/opt/stack/nova/nova/compute/manager.py", line 952, in _init_instance
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] self._complete_partial_deletion(context, 
instance)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/opt/stack/nova/nova/compute/manager.py", line 879, in _complete_partial_d
  eletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] bdms = 
objects.BlockDeviceMappingList.get_by_instance_uuid(
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", lin
  e 197, in wrapper
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] ctxt, self, fn.__name__, args, kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 246, in object_action
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] objmethod=objmethod, args=args, 
kwargs=kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=self.retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] timeout=timeout, retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 431, in send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 399, in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] msg = rpc_common.serialize_msg(msg)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/common.py", 
line 286, in serialize_msg
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] _MESSAGE_KEY: jsonutils.dumps(raw_msg)}
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 
185, in dumps
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] return json.dumps(obj, default=default, 
**kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/lib/python2.7/json/__init__.py", line 250, in dumps
  2015-09-08 00:36:34.133 TRACE nova.compute.ma

[Yahoo-eng-team] [Bug 1475411] Re: During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The post_live_migration step for Nova libvirt driver is currently
  making a bad assumption about the source and destination connector
  information. The destination connection info may be different from the
  source which ends up causing LUNs to be left dangling on the source as
  the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occuring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).
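
  A conceptual sketch of one way to avoid that (helper names and signatures
  here are assumptions, not the actual nova code): re-resolve connection info
  for the source host's connector instead of reusing the BDM contents that
  were rewritten for the destination.

    def disconnect_source_volumes(driver, volume_api, context, instance,
                                  block_device_mapping):
        connector = driver.get_volume_connector(instance)  # the *source* host
        for bdm in block_device_mapping:
            volume_id = bdm['connection_info']['serial']
            source_info = volume_api.initialize_connection(context, volume_id,
                                                           connector)
            driver._disconnect_volume(source_info, bdm['mount_device'])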

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins 
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge "Port crypto to Python 3"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471810] Re: Support host type specific block volume attachment

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471810

Title:
  Support host type specific block volume attachment

Status in Cinder:
  Invalid
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  
  The IBM DS8000 storage subsystem supports different host types for 
Fibre-Channel. When LUNs are
  mapped to host ports, the user has to specify the LUN format to be used, as 
well as the Volume Group address type. If those properties are not set 
correctly, the host operating system will be unable to detect or use those LUNs 
(volumes).

  A LUN with LUN ID 1234, for example, will be addressed from AIX or
  System z using LUN 0x40124034 (0x40LL40LL00..00), while Linux on
  Intel addresses the LUN by 0x1234. That means the storage
  subsystem is aware of the host architecture (platform and operating
  system).

  The Cinder driver thus needs to set the host type to 'System z' on the
  DS8000 storage subsystem when a Nova running on System z requests
  Cinder to attach a volume. Accordingly, the Cinder driver needs to set
  the host type to 'Intel - Linux' when a Nova running on an Intel
  compute node is requesting Cinder to attach a volume.

  The Cinder driver currently does not have any awareness of the host 
type/operating system when attaching a volume to a host. Nova currently creates 
a connector and passes it to Cinder when requesting Cinder to attach a volume. 
The connector only provides information such as the host's WWPNs. Nova should 
add the output of platform.machine() and sys.platform to
  the connector. Cinder will pass this information to the Cinder driver for the 
storage back-end. The Cinder driver can then determine (in the case of a 
DS8000, for example) the correct host type to be used.

  Required changes are relatively small: in ``nova/virt/libvirt/driver.py``: 
add output of ``platform.machine()`` and
  ``sys.platform`` to the connector when it is created in 
``get_volume_connector``.
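
  A minimal sketch of the extra fields described above (the exact connector
  key names are assumptions):

    import platform
    import sys

    def connector_platform_info():
        # extra fields get_volume_connector could add so that host-type aware
        # back-ends (such as the DS8000) can pick the right host type
        return {
            'platform': platform.machine(),  # e.g. 'x86_64' or 's390x'
            'os_type': sys.platform,         # e.g. 'linux2'
        }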

  Note, that similar changes have been released for Cinder already. When
  Cinder needs to attach a volume to it's host/hypervisor, it also
  creates a connector and passes it to the Cinder driver. Those changes
  have been merged by the Cinder team already. They are addressed by
  https://review.openstack.org/192558

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1471810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481812] Re: nova servers pagination does not work with changes-since and deleted marker

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481812

Title:
  nova servers pagination does not work with changes-since and deleted
  marker

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Instances test1 - test6, where test2 and test5 have been deleted:

  # nova list
  
+--+---+-++-+--+
  | ID   | Name  | Status  | Task State | Power 
State | Networks |
  
+--+---+-++-+--+
  | 7e12d6a0-126f-44d0-b566-15cd5e4ab82e | test1 | SHUTOFF | -  | 
Shutdown| private=10.0.0.3 |
  | 8b35f7fb-65d0-4fc3-ac22-390743c695db | test3 | ACTIVE  | -  | 
Running | private=10.0.0.5 |
  | 2ab70dfe-2983-4886-a930-7deb15279763 | test4 | ACTIVE  | -  | 
Running | private=10.0.0.6 |
  | 489e22cf-5e22-43a4-8c46-438f62d66e59 | test6 | ACTIVE  | -  | 
Running | private=10.0.0.8 |
  
+--+---+-++-+--+

  # Get all instances with changes-since=2015-01-01 :
  # curl -s -H "X-Auth-Token:92ecba357e5b49f88a21cedfa63bf36e" 
'http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01';
  {"servers": [{"id": "489e22cf-5e22-43a4-8c46-438f62d66e59", "links": 
[{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59";,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59";,
 "rel": "bookmark"}], "name": "test6"}, {"id": 
"9bda60eb-6ff7-4b84-b081-0120b62155a3", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3";,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3";,
 "rel": "bookmark"}], "name": "test5"}, {"id": 
"2ab70dfe-2983-4886-a930-7deb15279763", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/2ab70dfe-2983-4886-a930-7deb15279763";,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/2ab70dfe-2983-4886-a
 930-7deb15279763", "rel": "bookmark"}], "name": "test4"}, {"id": 
"8b35f7fb-65d0-4fc3-ac22-390743c695db", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/8b35f7fb-65d0-4fc3-ac22-390743c695db";,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/8b35f7fb-65d0-4fc3-ac22-390743c695db";,
 "rel": "bookmark"}], "name": "test3"}, {"id": 
"18d9ffbb-e1d4-4218-bb66-f792aab4e091", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/18d9ffbb-e1d4-4218-bb66-f792aab4e091";,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/18d9ffbb-e1d4-4218-bb66-f792aab4e091";,
 "rel": "bookmark"}], "name": "test2"}, {"id": 
"7e12d6a0-126f-44d0-b566-15cd5e4ab82e", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/7e12d6a0-126f-44d0-b566-15cd5e4ab82e";,
 "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a
 1577c992c16271a/servers/7e12d6a0-126f-44d0-b566-15cd5e4ab82e", "rel": 
"bookmark"}], "name": "test1"}]}

  # query the instances in chunks of 2 with changes-since and limit=2

  # curl -s -H "X-Auth-Token:92ecba357e5b49f88a21cedfa63bf36e" 
'http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01&limit=2';
  {"servers_links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01&limit=2&marker=9bda60eb-6ff7-4b84-b081-0120b62155a3";,
 "rel": "next"}], "servers": [{"id": "489e22cf-5e22-43a4-8c46-438f62d66e59", 
"links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59";,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59";,
 "rel": "bookmark"}], "name": "test6"}, {"id": 
"9bda60eb-6ff7-4b84-b081-0120b62155a3", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3";,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3";,
 "rel": "bookmark"}], "name": "test5"}]}

  => returns instances test6 and test5 (deleted)

  # use insta

[Yahoo-eng-team] [Bug 1470690] Re: No 'OS-EXT-VIF-NET' extension in v2.1

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470690

Title:
  No 'OS-EXT-VIF-NET' extension in v2.1

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The v2 API has an extension for virtual interfaces, 'OS-EXT-VIF-NET', but it
  is not present in the v2.1 API.

  Because of this there is a difference between the v2 and v2.1 responses of
  the virtual interface API.

  v2 List virtual interface Response (with all extension enable)

  {
  "virtual_interfaces": [
  {
  "id": "%(id)s",
  "mac_address": "%(mac_addr)s",
  "OS-EXT-VIF-NET:net_id": "%(id)s"
  }
  ]
  }

  v2.1 List virtual interface Response

  {
  "virtual_interfaces": [
  {
  "id": "%(id)s",
  "mac_address": "%(mac_addr)s"
  }
  ]
  }

  As v2.1 was released in Kilo, we should backport this fix to the kilo
  branch as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID >255 not recognized

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID >255 not recognized

Status in Cinder:
  Invalid
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  (s390 architecture/System z Series only) FC LUNs with a LUN ID >255 are 
recognized by neither Cinder nor Nova when trying to attach the volume.
  The issue is that Fibre-Channel volumes need to be added using the unit_add 
command with a properly formatted LUN string.
  The string is set correctly for LUNs <=0xff, but not for LUN IDs within the 
range 0xff and 0x.
  Because of this the volumes do not get properly added to the hypervisor 
configuration and the hypervisor does not find them.
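
  A rough sketch of the formatting issue involved (an illustration only; the
  exact FCP LUN encoding used by the real os-brick fix is an assumption here):

    def get_fcp_lun_string(lun_id):
        # build the 64-bit LUN string passed to unit_add
        if lun_id < 256:
            # single-byte LUN IDs were already handled correctly
            return "0x00%02x000000000000" % lun_id
        # larger LUN IDs also need a full 16-digit string, not a truncated one
        return "0x%08x00000000" % lun_id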

  Note: The change for Liberty os-brick is ready. I would also like to
  patch it back to Kilo. Since os-brick has been integrated with
  Liberty, but was separate before, I need to release a patch for Nova,
  Cinder, and os-brick. Unfortunately there is no option on this page to
  nominate the patch for Kilo. Can somebody help? Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487522] Re: Objects: obj_reset_changes signature doesn't match

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Confirmed

** Changed in: nova/kilo
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487522

Title:
  Objects: obj_reset_changes signature doesn't match

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  If an object contains a Flavor object within it and obj_reset_changes
  is called with recursive=True, it will fail with the following error.
  This is because Flavor.obj_reset_changes is missing the recursive
  param in its signature. The Instance object is also missing this
  parameter in its method.

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/objects/test_request_spec.py", line 284, in 
test_save
  req_obj.obj_reset_changes(recursive=True)
File "nova/objects/base.py", line 224, in obj_reset_changes
  value.obj_reset_changes(recursive=True)
  TypeError: obj_reset_changes() got an unexpected keyword argument 
'recursive'
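
  For reference, a sketch of the signature the recursive reset expects
  (following the oslo.versionedobjects convention; the Base class below is
  just a stand-in for the real object base class):

    class Base(object):
        def obj_reset_changes(self, fields=None, recursive=False):
            pass

    class Flavor(Base):
        # the override must keep the same signature, otherwise callers doing
        # obj_reset_changes(recursive=True) hit the TypeError shown above
        def obj_reset_changes(self, fields=None, recursive=False):
            super(Flavor, self).obj_reset_changes(fields=fields,
                                                  recursive=recursive)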

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466305] Re: Booting from volume nolonger can be bigger that flavor size

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466305

Title:
  Booting from volume nolonger can be bigger that flavor size

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  After upgrading to Juno you can no longer boot from a volume that is bigger
  than the flavour's disk size.

  There should be no need to take the flavor disk size into account when
  booting from a volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465446] Re: Hyper-V: After live migration succeded, the only instance dirs on the source host are not cleaned up

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465446

Title:
  Hyper-V: After live migration succeded, the only instance dirs on the
  source host are not cleaned up

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  After the instance has successfully live migrated to a new host, the
  instance dirs on the source host should be removed. Not doing so causes
  useless clutter and wasted disk space on the source node. This issue is
  more noticeable when hundreds or thousands of instances have been deployed
  to a host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467451] Re: Hyper-V: fail to detach virtual hard disks

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467451

Title:
  Hyper-V: fail to detach virtual hard disks

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The Nova Hyper-V driver fails to detach virtual hard disks when using the
  virtualization v1 WMI namespace.

  The reason is that it cannot find the attached resource, because it uses the
  wrong resource object connection attribute.

  This affects Windows Server 2008 as well as Windows Server 2012 when
  the old namespace is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465443] Re: Hyper-V: Live migration does not copy configdrive to new host

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465443

Title:
  Hyper-V: Live migration does not copy configdrive to new host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Performing a live migration on Hyper-V does not copy the configdrive
  to the destination. This can cause trouble, since the configdrive is
  essential. For example, performing a second live migration on the same
  instance will automatically result in an exception, since it tries to
  copy the configdrive (a file that does not exist) to another destination.

  This is caused by incorrectly copying the configdrive (wrong
  destination path).

  Log sample, after a LOG.info was introduced, in order to observe the
  error:

  2015-06-15 15:43:31.242 14768 INFO nova.virt.hyperv.pathutils 
[req-a85a92e9-b562-4398-b2ae-8ccbf2d1f525 70a2dc588be9409c9aea370aa119391f 
19c78e5db79444e7ac33c5af18ae29fc - - -] Copy file from 
C:\OpenStack\Instances\instance-5970\configdrive.iso to weighty-secreta
  2015-06-15 15:43:31.273 14768 INFO nova.virt.hyperv.serialconsoleops 
[req-a85a92e9-b562-4398-b2ae-8ccbf2d1f525 70a2dc588be9409c9aea370aa119391f 
19c78e5db79444e7ac33c5af18ae29fc - - -] Stopping instance instance-5970 
serial console handler.
  2015-06-15 15:43:31.289 14768 INFO nova.virt.hyperv.pathutils 
[req-a85a92e9-b562-4398-b2ae-8ccbf2d1f525 70a2dc588be9409c9aea370aa119391f 
19c78e5db79444e7ac33c5af18ae29fc - - -] Copy file from 
C:\OpenStack\Instances\instance-5970\console.log to 
\\weighty-secreta\C$\OpenStack\Instances\instance-5970\console.log

  The log sample shows the incorrect copy of configdrive.iso from the
  source ``C:\OpenStack\Instances\instance-5970\configdrive.iso`` to
  the destination ``weighty-secreta``, which is incorrect (correct:
  ``\\weighty-
  secreta\C$\OpenStack\Instances\instance-5970\configdrive.iso``) .
  The console.log file paths are correct and it is copied correctly.

  Even though the configdrive.iso destination is wrong, the copy
  operation is completed successfully, which is why no exception is
  raised.
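
  A small self-contained sketch of the path handling described above (the
  helper name is an assumption): build the destination's administrative-share
  path instead of copying to the bare host name.

    import ntpath

    def remote_copy_target(dest_host, local_path):
        # 'C:\OpenStack\Instances\instance-5970\configdrive.iso' becomes
        # '\\dest_host\C$\OpenStack\Instances\instance-5970\configdrive.iso'
        drive, rest = ntpath.splitdrive(local_path)
        return '\\\\%s\\%s%s' % (dest_host, drive.replace(':', '$'), rest)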

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464298] Re: default hash function and hash format changed in OpenSSH 6.8 (ssh-keygen)

2015-10-13 Thread Chuck Short
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464298

Title:
  default hash function and hash format changed in OpenSSH 6.8 (ssh-
  keygen)

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The following tests fail on Fedora 22 because ssh-keygen output
  changed in OpenSSH 6.8:

  * nova.tests.unit.api.ec2.test_cloud.CloudTestCase.test_import_key_pair
  * nova.tests.unit.compute.test_keypairs.ImportKeypairTestCase.test_success_ssh

  Previously, OpenSSH used MD5 and colon-separated hex to display a
  fingerprint. It now uses SHA256 encoded as base64:

  """
   * Add FingerprintHash option to ssh(1) and sshd(8), and equivalent
 command-line flags to the other tools to control algorithm used
 for key fingerprints. The default changes from MD5 to SHA256 and
 format from hex to base64.
  """
  http://www.openssh.com/txt/release-6.8
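
  A self-contained illustration of the two formats (not the nova test code),
  showing why fingerprint assertions written against the old MD5 output break
  with OpenSSH 6.8:

    import base64
    import hashlib

    def fingerprints(public_key_line):
        # public_key_line: e.g. 'ssh-rsa AAAAB3Nza... comment'
        blob = base64.b64decode(public_key_line.split()[1])
        md5_hex = hashlib.md5(blob).hexdigest()
        old_style = ':'.join(md5_hex[i:i + 2] for i in range(0, 32, 2))
        new_style = 'SHA256:' + base64.b64encode(
            hashlib.sha256(blob).digest()).decode('ascii').rstrip('=')
        return old_style, new_style  # ssh-keygen <= 6.7 vs >= 6.8 defaults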

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471810] Re: Support host type specific block volume attachment

2015-10-13 Thread Chuck Short
** Changed in: cinder/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471810

Title:
  Support host type specific block volume attachment

Status in Cinder:
  Invalid
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  
  The IBM DS8000 storage subsystem supports different host types for 
Fibre-Channel. When LUNs are
  mapped to host ports, the user has to specify the LUN format to be used, as 
well as the Volume Group address type. If those properties are not set 
correctly, the host operating system will be unable to detect or use those LUNs 
(volumes).

  A LUN with LUN ID 1234, for example, will be addressed from AIX or
  System z using LUN 0x40124034 (0x40LL40LL00..00), while Linux on
  Intel addresses the LUN by 0x1234. That means the storage
  subsystem is aware of the host architecture (platform and operating
  system).

  The Cinder driver thus needs to set the host type to 'System z' on the
  DS8000 storage subsystem when a Nova running on System z requests
  Cinder to attach a volume. Accordingly, the Cinder driver needs to set
  the host type to 'Intel - Linux' when a Nova running on an Intel
  compute node is requesting Cinder to attach a volume.

  The Cinder driver currently does not have any awareness of the host 
type/operating system when attaching a volume to a host. Nova currently creates 
a connector and passes it to Cinder when requesting Cinder to attach a volume. 
The connector only provides information such as the host's WWPNs. Nova should 
add the output of platform.machine() and sys.platform to
  the connector. Cinder will pass this information to the Cinder driver for the 
storage back-end. The Cinder driver can then determine (in the case of a 
DS8000, for example) the correct host type to be used.

  Required changes are relatively small: in ``nova/virt/libvirt/driver.py``: 
add output of ``platform.machine()`` and
  ``sys.platform`` to the connector when it is created in 
``get_volume_connector``.

  Note, that similar changes have been released for Cinder already. When
  Cinder needs to attach a volume to it's host/hypervisor, it also
  creates a connector and passes it to the Cinder driver. Those changes
  have been merged by the Cinder team already. They are addressed by
  https://review.openstack.org/192558

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1471810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505777] [NEW] inconsistent support for optional dependencies

2015-10-13 Thread Matthew Edmonds
Public bug reported:

keystone's requirements.txt includes several things that are really
optional dependencies, only needed if you are using certain features.
These should be moved out of requirements.txt and handled as extras in
setup.cfg. A few of these that I've noticed are:

passlib (only needed for the sql identity backend)
oauthlib
pysaml2

This is already done for several things:

python-ldap
ldappool
python-memcached
pymongo
bandit

We ought to be consistent. Moving optional dependencies to setup.cfg is
both a) correct and b) a way to let those who do not need certain features
package/ship/install less. This is important for applications based
on OpenStack that don't want the extra work of
building/shipping/installing/supporting things which are not necessary
for their application. It's important for users that shouldn't have to
install things they don't need. Etc.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1505777

Title:
  inconsistent support for optional dependencies

Status in Keystone:
  New

Bug description:
  keystone's requirements.txt includes several things that are really
  optional dependencies, only needed if you are using certain features.
  These should be moved out of requirements.txt and handled as extras in
  setup.cfg. A few of these that I've noticed are:

  passlib (only needed for the sql identity backend)
  oauthlib
  pysaml2

  This is already done for several things:

  python-ldap
  ldappool
  python-memcached
  pymongo
  bandit

  We ought to be consistent. Moving optional dependencies to setup.cfg
  is both a) correct and b) a way to let those who do not need certain
  features package/ship/install less. This is important for
  applications based on OpenStack that don't want the extra work of
  building/shipping/installing/supporting things which are not necessary
  for their application. It's important for users that shouldn't have to
  install things they don't need. Etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID >255 not recognized

2015-10-13 Thread Chuck Short
** Changed in: cinder/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID >255 not recognized

Status in Cinder:
  Invalid
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  (s390 architecture/System z Series only) FC LUNs with a LUN ID >255 are 
recognized by neither Cinder nor Nova when trying to attach the volume.
  The issue is that Fibre-Channel volumes need to be added using the unit_add 
command with a properly formatted LUN string.
  The string is set correctly for LUNs <=0xff, but not for LUN IDs within the 
range 0xff and 0x.
  Because of this the volumes do not get properly added to the hypervisor 
configuration and the hypervisor does not find them.

  Note: The change for Liberty os-brick is ready. I would also like to
  patch it back to Kilo. Since os-brick has been integrated with
  Liberty, but was separate before, I need to release a patch for Nova,
  Cinder, and os-brick. Unfortunately there is no option on this page to
  nominate the patch for Kilo. Can somebody help? Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480196] Re: Request-id is not getting returned if glance throws 500 error

2015-10-13 Thread Chuck Short
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480196

Title:
  Request-id is not getting returned if glance throws 500 error

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released

Bug description:
  If glance throws an Internal Server Error (500) for some reason,
  the 'request-id' is not returned in the response headers.

  The request-id is required to analyse logs effectively on failure, so it
  should be returned in the headers.

  For ex. -

  The image-create API returns a 500 error if a property name exceeds 255 characters
  (a fix for this issue is in progress: https://review.openstack.org/#/c/203948/)

  curl command:

  $ curl -g -i -X POST -H 'Accept-Encoding: gzip, deflate' -H 'x-image-
  meta-container_format: ami' -H 'x-image-meta-property-
  
:
  jskg' -H 'Accept: */*' -H 'X-Auth-Token:
  b94bd7b3a0fb4fada73fe170fe7d49cb' -H 'Connection: keep-alive' -H 'x
  -image-meta-is_public: None' -H 'User-Agent: python-glanceclient' -H
  'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format:
  ami' http://10.69.4.173:9292/v1/images

  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 0
  Date: Fri, 31 Jul 2015 08:27:31 GMT
  Connection: close

  Here the request-id is not part of the response headers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480196/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478690] Re: Request ID has a double req- at the start

2015-10-13 Thread Chuck Short
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1478690

Title:
  Request ID has a double req- at the start

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Committed

Bug description:
  ➜  vagrant git:(master) http http://192.168.121.242:9393/v1/search 
X-Auth-Token:$token query:='{"match_all" : {}}'
  HTTP/1.1 200 OK
  Content-Length: 138
  Content-Type: application/json; charset=UTF-8
  Date: Mon, 27 Jul 2015 20:21:31 GMT
  X-Openstack-Request-Id: req-req-0314bf5b-9c04-4bed-bf86-d2e76d297a34
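
  A one-line guard illustrating the fix direction (an illustration only, not
  the actual glance/searchlight patch):

    def with_req_prefix(request_id):
        # avoid 'req-req-...' when the incoming id is already prefixed
        return request_id if request_id.startswith('req-') else 'req-' + request_id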

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1478690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484086] Re: ec2tokens authentication is failing during Heat tests

2015-10-13 Thread Chuck Short
** Changed in: heat/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1484086

Title:
  ec2tokens authentication is failing during Heat tests

Status in heat:
  Fix Released
Status in heat kilo series:
  Fix Released
Status in Keystone:
  Incomplete

Bug description:
  As seen here for example: http://logs.openstack.org/54/194054/37/check
  /gate-heat-dsvm-functional-orig-mysql/a812f55/

  We're getting the error: "Non-default domain is not supported" which
  seems to have been introduced here:
  https://review.openstack.org/#/c/208069/

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1484086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447215] Re: Schema Missing kernel_id, ramdisk_id causes #1447193

2015-10-13 Thread Chuck Short
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in glance source package in Vivid:
  Fix Released

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - API
  ii  glance-common1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry  1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0all 
 OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0 all 
 Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set /etc/glance/glance-api.conf to enable_v2_api=False
  1) nova boot --flavor m1.small --image base-image --key-name keypair 
--availability-zone nova --security-groups default snapshot-bug 
  2) nova image-create snapshot-bug snapshot-bug-instance 

  At this point the created image has no kernel_id (None) and no ramdisk_id
  (None).

  3) Enable_v2_api=True in glance-api.conf and restart.

  4) Run a os-image-api=2 client,

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow null, string
  values for both attributes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505781] [NEW] Unexpected SNAT behavior between instances when SNAT disabled on router

2015-10-13 Thread James Denton
Public bug reported:

= Scenario =

• Kilo/Juno
• Single Neutron router with enable_snat=false
• two instances in two tenant networks attached to router
• each instance has a floating IP

INSTANCE A: TestNet1=192.167.7.3, 10.1.1.7
INSTANCE B: TestNet2=10.0.8.3, 10.1.1.6

When instances communicate out (i.e. to the Internet), they are properly
SNAT'd using their respective floating IP. If an instance does not have
a floating IP, the traffic is routed out without SNAT.

When instances in tenant networks behind the same router communicate via
their fixed IPs, the source address is SNAT'd as the respective floating
IP while the destination is unmodified:

Pinging from INSTANCE A to INSTANCE B:

$ ping 10.0.8.3 -c1
PING 10.0.8.3 (10.0.8.3): 56 data bytes
64 bytes from 10.0.8.3: seq=0 ttl=63 time=7.483 ms

From the Neutron router:

root@controller01:~# ip netns exec qrouter-dd15e8f3-8612-4925-81d4-88fcad49807f 
tcpdump -i any -ne icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
10:37:48.840404: 192.167.7.3 > 10.0.8.3: ICMP echo request, id 37121, seq 12, 
length 64
10:37:48.840467: 10.1.1.7 > 10.0.8.3: ICMP echo request, id 37121, seq 12, 
length 64 <-- SNAT as FLOAT
10:37:48.842506: 10.0.8.3 > 10.1.1.7: ICMP echo reply, id 37121, seq 12, length 
64
10:37:48.842565: 10.0.8.3 > 192.167.7.3: ICMP echo reply, id 37121, seq 12, 
length 64

This behavior has a negative effect for a couple of reasons:

1. The expectation is that traffic between the two instances behind the same 
router using fixed IPs would not be source NAT'd
2. Security group rules that use 'Remote Security Group' rather than 'Remote IP 
Prefix' fail to work since the source address is modified

When SNAT is enabled on the router, traffic between the instances via
their fixed IP works as expected:

From INSTANCE A to B:

$ ping 10.0.8.3 -c 1
PING 10.0.8.3 (10.0.8.3): 56 data bytes
64 bytes from 10.0.8.3: seq=0 ttl=63 time=8.024 ms

From the Neutron router:

root@controller01:~# ip netns exec qrouter-dd15e8f3-8612-4925-81d4-88fcad49807f 
tcpdump -i any -ne icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
10:52:19.945863: 192.167.7.3 > 10.0.8.3: ICMP echo request, id 39425, seq 0, 
length 64
10:52:19.945953: 192.167.7.3 > 10.0.8.3: ICMP echo request, id 39425, seq 0, 
length 64
10:52:19.951498: 10.0.8.3 > 192.167.7.3: ICMP echo reply, id 39425, seq 0, 
length 64
10:52:19.951554: 10.0.8.3 > 192.167.7.3: ICMP echo reply, id 39425, seq 0, 
length 64

We believe the existence of the following iptables nat rule causes the
desired behavior, in that traffic not traversing the qg interface is not
NAT'd:

-A neutron-l3-agent-POSTROUTING ! -i qg-80aa20be-9b ! -o qg-80aa20be-9b
-m conntrack ! --ctstate DNAT -j ACCEPT

That rule only exists when SNAT is *enabled* on the router, and not when
it is disabled, as shown below:

SNAT enabled:

-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 10.1.1.6/32 -j DNAT --to-destination 10.0.8.3
-A neutron-l3-agent-OUTPUT -d 10.1.1.7/32 -j DNAT --to-destination 192.167.7.3
-A neutron-l3-agent-POSTROUTING ! -i qg-80aa20be-9b ! -o qg-80aa20be-9b -m 
conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 10.1.1.6/32 -j DNAT --to-destination 10.0.8.3
-A neutron-l3-agent-PREROUTING -d 10.1.1.7/32 -j DNAT --to-destination 
192.167.7.3
-A neutron-l3-agent-float-snat -s 10.0.8.3/32 -j SNAT --to-source 10.1.1.6
-A neutron-l3-agent-float-snat -s 192.167.7.3/32 -j SNAT --to-source 10.1.1.7
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-80aa20be-9b -j SNAT --to-source 10.1.1.5
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j 
SNAT --to-source 10.1.1.5
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on 
outgoing traffic." -j neutron-l3-agent-snat

SNAT disabled:

-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 10.1.1.6/32 -j DNAT --to-destination 10.0.8.3
-A neutron-l3-agent-OUTPUT -d 10.1.1.7/32 -j DNAT --to-destination 192.167.7.3
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp 
--dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 10.1.1.6/32 -j DNAT --to-destination 10.0.8.3
-A neutron-l3-agent-PREROUTING -d 10.1.1.7/32 -j DNAT --to-destination 
192.167.7.3
-A neutron-l3-agent-float-snat -s 10.0.8.3/32 -j SNAT --to-source 10.1.1.6
-A neutron-l3-agent-float-snat -s 192.167.7.3/32 -j SNAT --to-source 10.1.1.7
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-

[Yahoo-eng-team] [Bug 1381413] Re: Switch Region dropdown doesn't work

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381413

Title:
  Switch Region dropdown doesn't work

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  In case Horizon was set up to work with multiple regions (by editing
  AVAILABLE_REGIONS in settings.py), region selector drop-down appears
  in top right corner. But it doesn't work now.

  Suppose I login into the Region1, then if I try to switch to Region2,
  it redirects me to the login view of django-openstack-auth
  
https://github.com/openstack/horizon/blob/2014.2.rc1/horizon/templates/horizon/common/_region_selector.html#L11

  There I am being immediately redirected to the
  settings.LOGIN_REDIRECT_URL because I am already authenticated at
  Region1, so I cannot view Region2 resources if I switch to it via top
  right dropdown. Selecting region at login page works though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1381413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474228] Re: inline edit failed in user table because description doesn't exist

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474228

Title:
  inline edit failed in user table because description doesn't exist

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  inline edit failed in user table because description doesn't exist

  Environment:
  ubuntu devstack stable/kilo

  horizon commit id: c2b543bb8f3adb465bb7e8b3774b3dd1d5d999f6
  keystone commit id: 8125a8913d233f3da0eaacd09aa8e0b794ea98cb

  $keystone --version
  
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/keystoneclient/shell.py:64:
 DeprecationWarning: The keystone CLI is deprecated in favor of 
python-openstackclient. For a Python library, continue using 
python-keystoneclient.
    'python-keystoneclient.', DeprecationWarning)
  1.6.0

  
  How to reproduce the bug:

  
  1. create a new user. (important)
  2. Try to edit user using inline edit.

  
  Note: 

  If you edit the user using inline edit and the user was edited by
  update user form ever, the exception will not raise because the update
  form set description to empty string.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/forms.py#L195

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/forms.py#L228
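
  A hedged sketch of a defensive lookup (an illustration only, not the actual
  fix) that would avoid the AttributeError shown in the traceback below:

  class FakeUser(object):
      """Stand-in for a freshly created keystoneclient user (no description)."""
      name = 'newuser'

  user_obj = FakeUser()
  # Fall back to an empty string, much like the update-user form effectively
  # does, instead of letting the attribute access raise AttributeError.
  description = getattr(user_obj, 'description', '')
  print(repr(description))  # ''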

  
  Traceback:
  Internal Server Error: /identity/users/
  Traceback (most recent call last):
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/tables/views.py", line 224, in post
  return self.get(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/tables/views.py", line 160, in get
  handled = self.construct_tables()
    File "/home/user/github/horizon/horizon/tables/views.py", line 145, in 
construct_tables
  preempted = table.maybe_preempt()
    File "/home/user/github/horizon/horizon/tables/base.py", line 1533, in 
maybe_preempt
  new_row)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1585, in 
inline_edit_handle
  error = exceptions.handle(request, ignore=True)
    File "/home/user/github/horizon/horizon/exceptions.py", line 361, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1580, in 
inline_edit_handle
  cell_name)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1606, in 
inline_update_action
  self.request, datum, obj_id, cell_name, new_cell_value)
    File "/home/user/github/horizon/horizon/tables/actions.py", line 952, in 
action
  self.update_cell(request, datum, obj_id, cell_name, new_cell_value)
    File 
"/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py",
 line 210, in update_cell
  horizon_exceptions.handle(request, ignore=True)
    File "/home/user/github/horizon/horizon/exceptions.py", line 361, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File 
"/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py",
 line 200, in update_cell
  description=user_obj.description,
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/keystoneclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: description

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423453] Re: Delete ports when Launching VM fails when plugin is N1K

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1423453

Title:
  Delete ports when Launching VM fails when plugin is N1K

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  When the plugin is Cisco N1KV, ports get created before launching the VM
  instance. But when the launch fails, the ports are not cleaned up in the
  except block.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1423453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467935] Re: widget attributes changed

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467935

Title:
  widget attributes changed

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
   In Django 1.8, widget attribute data-date-picker=True will be
  rendered as 'data-date-picker'. To preserve current behavior, use the
  string 'True' instead of the boolean value.
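
  A minimal sketch of the suggested change (the form and field names are made
  up for illustration):

  from django import forms

  class ExampleForm(forms.Form):
      # Pass the string 'True' rather than the boolean True so Django 1.8
      # still renders data-date-picker="True" instead of a bare attribute.
      start_date = forms.DateField(
          widget=forms.DateInput(attrs={'data-date-picker': 'True'}))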

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465185] Re: No reverse match exception while try to edit the QoS spec

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1465185

Title:
  No reverse match exception while try to edit the QoS spec

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  while try to edit the QoS spec, i am getting NoReverseMatch Exception
  since the URL is wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1465185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1203413] Re: VM launch fails with Neutron in "admin" tenant if "admin" and "demo" tenants have secgroups with a same name "web"

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1203413

Title:
  VM launch fails with Neutron in "admin" tenant if "admin" and "demo"
  tenants have secgroups with a same name "web"

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Using Grizzly with Neutron: If there are multiple security groups with
  the same name (in other tenants for example), it is not possible to
  boot an instance with this security group as Horizon will only use the
  name of the security group.

  Example from logs:
  2013-07-21 03:39:12.432 ERROR nova.network.security_group.quantum_driver 
[req-aaca5681-72b8-41dc-a89c-9a5c95c7eff4 33fe423e114c4586a573514b3e98341e 
e91fe07ea4834f8487c5cec7deaa2eac] Quantum Error: Multiple security_group 
matches found for name 'web', use an ID to be more specific.
  2013-07-21 03:39:12.439 ERROR nova.api.openstack 
[req-aaca5681-72b8-41dc-a89c-9a5c95c7eff4 33fe423e114c4586a573514b3e98341e 
e91fe07ea4834f8487c5cec7deaa2eac] Caught error: Multiple security_group matches 
found for name 'web', use an ID to be more specific.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1203413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505786] [NEW] As neutron accepts icmpv6 but on api it's not documented in accepted values for api call in security group create

2015-10-13 Thread Manjeet Singh Bhatia
Public bug reported:

http://developer.openstack.org/api-ref-networking-v2-ext.html

In security group create, icmpv6 is missing from the accepted values, and
pools does not show Terminated_HTTPS among the expected values.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505786

Title:
  As neutron accepts icmpv6 but on api it's not documented in accepted
  values for api call in security group create

Status in neutron:
  New

Bug description:
  http://developer.openstack.org/api-ref-networking-v2-ext.html

  In security group create, icmpv6 is missing from the accepted values, and
  pools does not show Terminated_HTTPS among the expected values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474512] Re: STATIC_URL statically defined for stack graphics

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474512

Title:
  STATIC_URL statically defined for stack graphics

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The svg and gif images are still using '/static/' as the base url, even
  though both WEBROOT and STATIC_URL are configurable. This needs to be
  fixed or the images won't be found when either has been set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474618] Re: N1KV network and port creates failing from dashboard

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474618

Title:
  N1KV network and port creates failing from dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Due to the change in name of the "profile" attribute in Neutron
  attribute extensions for networks and ports, network and port
  creations fail from the dashboard since dashboard is still using
  "n1kv:profile_id" rather than "n1kv:profile".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474241] Re: Need a way to disable simple tenant usage

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474241

Title:
  Need a way to disable simple tenant usage

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Frequent calls to Nova's API when displaying the simple tenant usage
  can lead to efficiency problems and even crash on the Nova side,
  especially when there are a lot of deleted nodes in the database. We
  are working on resolving that, but in the mean time, it would be nice
  to have a way of disabling the simple tenant usage stats on the
  Horizon side as a workaround.

  Horizon enables that option depending on whether it's supported on the
  Nova side. In version 2.0 of the API we can simply disable the support
  for it on the Nova side, but that won't be possible in version 2.1
  anymore, so we need a configuration option on the Horizon side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490403] Re: Gate failing on test_routerrule_detail

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490403

Title:
  Gate failing on test_routerrule_detail

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The gate/jenkins checks is currently bombing out on this error:

  ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
  --
  Traceback (most recent call last):
File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, 
in instance_stub_out
  return fn(self, *args, **kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 711, in test_routerrule_detail
  res = self._get_detail(router)
File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, 
in instance_stub_out
  return fn(self, *args, **kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 49, in _get_detail
  args=[router.id]))
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 470, in get
  **extra)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 286, in get
  return self.generic('GET', path, secure=secure, **r)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 358, in generic
  return self.request(**r)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 440, in request
  six.reraise(*exc_info)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 146, in get
  context = self.get_context_data(**kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/views.py", 
line 140, in get_context_data
  context = super(DetailView, self).get_context_data(**kwargs)
File "/home/ubuntu/horizon/horizon/tables/views.py", line 107, in 
get_context_data
  context = super(MultiTableMixin, self).get_context_data(**kwargs)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 56, in 
get_context_data
  exceptions.handle(self.request)
File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 54, in 
get_context_data
  context["tab_group"].load_tab_data()
File "/home/ubuntu/horizon/horizon/tabs/base.py", line 128, in load_tab_data
  exceptions.handle(self.request)
File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/home/ubuntu/horizon/horizon/tabs/base.py", line 125, in load_tab_data
  tab._data = tab.get_context_data(self.request)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 82, in get_context_data
  data["rulesmatrix"] = self.get_routerrulesgrid_data(rules)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 127, in get_routerrulesgrid_data
  source, target, rules))
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 159, in _get_subnet_connectivity
  if (int(dst.network) >= int(rd.broadcast) or
  TypeError: int() argument must be a string or a number, not 'NoneType'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1482657] Re: Attribute error on virtual_size

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482657

Title:
  Attribute error on virtual_size

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Version: stable/kilo
  Run with ./run_test.py --runserver

  Running an old havana glance backend will result in an AttributeError,
  since the virtual_size attribute was only introduced with the icehouse
  release. See the error log at the bottom of this message. A simple check
  for the attribute will solve this issue and restore compatibility.
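
  A minimal sketch of such a check (an illustration only, not the attached
  patch; the fallback to image.size is an assumption):

  class OldImage(object):
      """Stand-in for a glance image from a havana backend (no virtual_size)."""
      size = 2 * 1024 ** 3

  def image_disk_size(image):
      # Use virtual_size when the backend provides it, otherwise fall back
      # to the plain size attribute instead of raising AttributeError.
      return getattr(image, 'virtual_size', None) or image.size

  print(image_disk_size(OldImage()))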

  Attached is a patch as proposal.

  Regards
  Christoph

  
  Error log:

  Internal Server Error: /project/instances/launch
  Traceback (most recent call last):
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 137, in get_response
  response = response.render()
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/response.py",
 line 103, in render
  self.content = self.rendered_content
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/response.py",
 line 80, in rendered_content
  content = template.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 148, in render
  return self._render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 142, in _render
  return self.nodelist.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 844, in render
  bit = self.render_node(node, context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/debug.py",
 line 80, in render_node
  return node.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/defaulttags.py",
 line 525, in render
  six.iteritems(self.extra_context))
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/defaulttags.py",
 line 524, in 
  values = dict((key, val.resolve(context)) for key, val in
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 596, in resolve
  obj = self.var.resolve(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 734, in resolve
  value = self._resolve_lookup(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 788, in _resolve_lookup
  current = current()
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 717, in 
get_entry_point
  step._verify_contributions(self.context)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 392, in 
_verify_contributions
  field = self.action.fields.get(key, None)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 368, in action
  context)
File 
"/home/coby/ao/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 147, in __init__
  request, context, *args, **kwargs)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 138, in 
__init__
  self._populate_choices(request, context)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 151, in 
_populate_choices
  bound_field.choices = meth(request, context)
File 
"/home/coby/ao/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 428, in populate_image_id_choices
  if image.virtual_size:
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: virtual_size

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482354] Re: Setting "enable_quotas"=False disables Neutron in GUI

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482354

Title:
  Setting "enable_quotas"=False disables Neutron in  GUI

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Excluding OPENSTACK_NEUTRON_NETWORK["enable_quotas"] or setting it to False
  will result in the Create Network, Create Subnet and Create Router buttons
  not showing up when logged in as the demo account. KeyError exceptions are
  thrown.

  These three side effects happen because the code in the views uses the
  following construct
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/networks/tables.py#L94

  usages = quotas.tenant_quota_usages(request)
  if usages['networks']['available'] <= 0:

  if enable_quotas is false, then quotas.tenant_quota_usages does not
  add the 'available' node to the usages dict and therefore a KeyError
  'available' is thrown. This ends up aborting the whole is_allowed
  method in horizon.BaseTable and therefore hiding the button.

  quotas.tenant_quota_usages will not add the available key for usage
  items which are disabled.
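
  A hedged sketch of a defensive variant of that construct, using a plain
  dict to stand in for the object returned by quotas.tenant_quota_usages():

  # No 'available' key is populated when quotas are disabled.
  usages = {'networks': {'used': 2}}

  networks = usages['networks']
  # Treat a missing 'available' key as "no limit" instead of raising KeyError.
  allowed = 'available' not in networks or networks['available'] > 0
  print(allowed)  # True, so the Create Network button stays visible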

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500385] Re: Change region selector requires 2 clicks to open

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500385

Title:
  Change region selector requires 2 clicks to open

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  This behavior is true only for stable/kilo branch and is not seen in
  liberty release due to a different codebase.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469563] Re: Fernet tokens do not maintain expires time across rescope (V2 tokens)

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469563

Title:
  Fernet tokens do not maintain expires time across rescope (V2 tokens)

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in Keystone liberty series:
  Fix Released

Bug description:
  Fernet tokens do not maintain the expiration time when rescoping
  tokens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1469563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492065] Re: Create instance testcase -- "test_launch_form_keystone_exception" broken

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492065

Title:
  Create instance testcase -- "test_launch_form_keystone_exception"
  broken

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The test_launch_form_keystone_exception test method calls the handle
  method of the LaunchInstance class. Changes made to the handle method
  in [1] introduced a new neutron api call that was not being mocked
  out, causing an unexpected exception in the
  _cleanup_ports_on_failed_vm_launch function of the create_instance
  module, while running the test_launch_form_keystone_exception unit
  test

  [1] https://review.openstack.org/#/c/202347/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483382] Re: Able to request a V2 token for user and project in a non-default domain

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1483382

Title:
  Able to request a V2 token for user and project in a non-default
  domain

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Using the latest devstack, I am able to request a V2 token for user
  and project in a non-default domain. This problematic as non-default
  domains are not suppose to be visible to V2 APIs.

  Steps to reproduce:

  1) install devstack

  2) run these commands

  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 domain list
  
+--+-+-+--+
  | ID   | Name| Enabled | Description  
|
  
+--+-+-+--+
  | 769ad7730e0c4498b628aa8dc00e831f | foo | True|  
|
  | default  | Default | True| Owns users and 
tenants (i.e. projects) available on Identity API v2. |
  
+--+-+-+--+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 user list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +--+--+
  | ID   | Name |
  +--+--+
  | cf0aa0b2d5db4d67a94d1df234c338e5 | bar  |
  +--+--+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 project list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +--+-+
  | ID   | Name|
  +--+-+
  | 413abdbfef5544e2a5f3e8ac6124dd29 | foo-project |
  +--+-+
  gyee@dev:~$ curl -k -H 'Content-Type: application/json' -d '{"auth": 
{"passwordCredentials": {"userId": "cf0aa0b2d5db4d67a94d1df234c338e5", 
"password": "secrete"}, "tenantId": "413abdbfef5544e2a5f3e8ac6124dd29"}}' 
-XPOST http://localhost:35357/v2.0/tokens | python -mjson.tool
    % Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100  3006  100  2854  100   152  22164   1180 --:--:-- --:--:-- --:--:-- 22472
  {
  "access": {
  "metadata": {
  "is_admin": 0,
  "roles": [
  "2b7f29ebd1c8453fb91e9cd7c2e1319b",
  "9fe2ff9ee4384b1894a90878d3e92bab"
  ]
  },
  "serviceCatalog": [
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "id": "3a92a79a21fb41379fa3e135be65eeff",
  "internalURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "publicURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "region": "RegionOne"
  }
  ],
  "endpoints_links": [],
  "name": "nova",
  "type": "compute"
  },
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "id": "64338d9eb3054598bcee30443c678e2a",
  "internalURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "publicURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "region": "RegionOne"
  }
  ],
  "endpoints_links": [],
  "name": "cinderv2",
  "type": "volumev2"
  },
  {
  "endpoints": [
  {
   

[Yahoo-eng-team] [Bug 1475762] Re: v3 tokens with references outside the default domain can be validated on v2

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1475762

Title:
  v3 tokens with references outside the default domain can be validated
  on v2

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  v2 has no knowledge of multiple domains, so all ID references it sees
  must exist inside the default domain.

  So, a v3 token being validated on v2 must have a project-scope in the
  default domain, a user identity in the default domain, and obviously
  must not be a domain-scoped token.

  The current implementation of Fernet blindly returns tokens to the v2
  API with (at least) project references that exist outside the default
  domain (I have not tested user references). The consequence is that v2
  clients may end up with naming collisions (due to lack of domain
  namespacing).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1475762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477600] Re: Token Validation API returns 401 not 404 on invalid fernet token

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1477600

Title:
  Token Validation API returns 401 not 404 on invalid fernet token

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  Validate token API specifies 404 response for invalid Subject tokens:
   * 
http://developer.openstack.org/api-ref-identity-admin-v2.html#admin-validateToken
   * http://developer.openstack.org/api-ref-identity-v3.html#validateTokens 
(not clear, but KSC auth middleware has the same logic as v2.0)

  For Fernet tokens, this API returns 401 for invalid token:

  curl -H 'X-Auth-Token: valid' -H 'X-Subject-Token: invalid' 
localhost:5000/v3/auth/tokens
  {"error": {"message": "The request you have made requires authentication. 
(Disable debug mode to suppress these details.)", "code": 401, "title": 
"Unauthorized"}}

  I've checked the tests and found an incorrect one. The API spec requires
  404, but the test checks for 401:
  
https://github.com/openstack/keystone/blob/master/keystone/tests/unit/token/test_fernet_provider.py#L51

  Looks like it's broken in one of these places:
   * Controller doesn't check the return 
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L448
   * Fernet token's core doesn't check the return here 
https://github.com/openstack/keystone/blob/master/keystone/token/providers/fernet/core.py#L152
   * Fernet token raises 401 here 
https://github.com/openstack/keystone/blob/master/keystone/token/providers/fernet/token_formatters.py#L201

  Note that UUID token raises 404 here as expected
  
https://github.com/openstack/keystone/blob/master/keystone/token/providers/common.py#L679

  Also, note that in the KSC auth middleware
  https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L1147
  we expect 404 for an invalid USER token, and 401 for an invalid ADMIN
  token. So a 401 for an invalid user token makes the middleware go for a
  new admin token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1477600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468000] Re: Group lookup by name in LDAP via v3 fails

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1468000

Title:
  Group lookup by name in LDAP via v3 fails

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  This bug is similar to
  https://bugs.launchpad.net/keystone/+bug/1454309 but relates to
  groups. When issuing an "openstack group show <group name>" command on
  a domain associated with LDAP, an invalid LDAP query is composed and
  Keystone returns ISE 500:

  $ openstack --os-token ADMIN --os-url http://localhost:35357/v3 
--os-identity-api-version 3 group show --domain ad 'Domain Admins'
  ERROR: openstack An unexpected error prevented the server from fulfilling 
your request: {'desc': 'Bad search filter'} (Disable debug mode to suppress 
these details.) (HTTP 500) (Request-ID: 
req-06fd5907-6ade-4872-95ab-e66f0809986a)
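
  The 'Bad search filter' comes from the literal text 'None' being
  interpolated into the LDAP filter (see the DEBUG line in the log below). A
  minimal, hypothetical reconstruction of how such a filter string can be
  produced when the query fragment is None instead of an empty string:

  query = None  # fragment built from the list hints; None when absent
  name_filter = '(sAMAccountName=Domain Admins)'
  object_class = '(objectClass=group)'

  # '%s' happily renders None as the text 'None', yielding an invalid filter.
  ldap_filter = '(&(&%s%s)%s)' % (query, name_filter, object_class)
  print(ldap_filter)
  # (&(&None(sAMAccountName=Domain Admins))(objectClass=group))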

  Here's the log:

  2015-06-23 15:59:41.627 8571 DEBUG keystone.common.ldap.core [-] LDAP search: 
base=CN=Users,DC=dept,DC=example,DC=org scope=2 
filterstr=(&(&None(sAMAccountName=Domain Admins))(objectClass=group)) 
attrs=['cn', 'sAMAccountName', 'description'] attrsonly=0 search_s 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py:933
  2015-06-23 15:59:41.628 8571 DEBUG keystone.common.ldap.core [-] LDAP unbind 
unbind_s 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py:906
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi [-] {'desc': 'Bad 
search filter'}
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 240, in __call__
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi result = 
method(context, **params)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/controller.py",
 line 202, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
context, filters, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/controllers.py",
 line 310, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi hints=hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/manager.py",
 line 54, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py",
 line 342, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py",
 line 353, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py",
 line 1003, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi ref_list = 
driver.list_groups(hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/backends/ldap.py",
 line 164, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return 
self.group.get_all_filtered(hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/backends/ldap.py",
 line 402, in get_all_filtered
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi for group in 
self.get_all(query)]
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py",
 line 1507, in get_all
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi for x in 
self._ldap_get_all(ldap_filter)]
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py",
 line 1469, in _ldap_get_all
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi attrs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py",
 line 946, in search_s
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi attrlist_utf8, 
attrsonl

[Yahoo-eng-team] [Bug 1459382] Re: Fernet tokens can fail with LDAP identity backends

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459382

Title:
  Fernet tokens can fail with LDAP identity backends

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  It is possible for Keystone to fail to issue tokens when using an
  external identity backend, like LDAP, if the user IDs are of a different
  format than UUID. This is because the Fernet token formatter attempts
  to convert the UUID to bytes before packing the payload. This is done
  to save space and results in a shorter token.

  When using an LDAP backend that doesn't use UUID format for the user
  IDs, we get a ValueError because UUID can't convert the ID to
  UUID.bytes [0]. We should probably do something similar to what is
  already done for the default domain when it's not a uuid, and for
  federated user IDs [1].

  Related stacktrace [2].

  
  [0] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L415
  [1] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L509
  [2] http://lists.openstack.org/pipermail/openstack/2015-May/012885.html
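
  A runnable sketch of the failure and of the kind of fallback suggested
  above (the LDAP-style user ID is made up for illustration):

  import uuid

  user_id = 'cn=bar,ou=Users,dc=example,dc=org'  # non-UUID ID from LDAP

  try:
      packed = uuid.UUID(user_id).bytes  # what the Fernet formatter attempts
  except ValueError:
      packed = user_id  # fallback sketch: keep the raw string instead
  print(repr(packed))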

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454968] Re: hard to understand the uri printed in the log

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454968

Title:
  hard to understand the uri printed in the log

Status in Keystone:
  Fix Released
Status in Keystone juno series:
  In Progress
Status in Keystone kilo series:
  Fix Released

Bug description:
  In keystone's log file, we can easily find some uri printed like this:
  
http://127.0.0.1:35357/v3/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens

  It seems there is something wrong when we are trying to log the uri in
  the log file.
  LOG.info('%(req_method)s %(uri)s', {
  'req_method': req.environ['REQUEST_METHOD'].upper(),
  'uri': wsgiref.util.request_uri(req.environ),
  })

  code is here:
  
https://github.com/openstack/keystone/blob/0debc2fbf448b44574da6f3fef7d457037c59072/keystone/common/wsgi.py#L232
  but it seems the value is already wrong when the req is passed in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1454968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459790] Re: With fernet tokens, validate token loses the ms on 'expires' value

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459790

Title:
  With fernet tokens, validate token loses the ms on 'expires' value

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  With fernet tokens, the expires ms value is 0 when the token is
  validated.  So the 'expires' on the post token and the get token are
  different; this is not the case with uuid tokens.

  $ curl -s \
   -H "Content-Type: application/json" \
   -d '{ "auth":{ "tenantName":"testTenantName", "passwordCredentials":{ 
"username":"testUserName", "password":"password" }}}' \
  -X POST $KEYSTONE_ENDPOINT:5000/v2.0/tokens | python -mjson.tool

  post token portion of the response contains 'expires' with a ms value
  :

  "token": {
  "audit_ids": [
  "eZtfF60tR7y5oAuL4LSr4w"
  ],
  "expires": "2015-05-28T20:50:56.015102Z",
  "id": 
"gABVZ2OQu3OunvR6FKklDdNWj95Aq-ju_sIhB9o0KRin2SpLRUa0C3H_XiV_RWN409Ma-Q7lIkA_S6mY3bnxgboJZ_qxUiTdzUscG5y_fSCUW5sQqmB2AI1rlmMetvTl6AnnRKzVHVlJlDKQNHuk0MzHM3IVr4-ysJ2AHBtmDfkdpRZCrFo%3D",
  "issued_at": "2015-05-28T18:50:56.015211Z",
  "tenant": {
  "description": "Test tenant ...",
  "enabled": true,
  "id": "1c6e0d2ac4bf4cd5bc7666d86b28aee0",
  "name": "testTenantName",
  "parent_id": null
  }
  },

  If this token is validated, the expires ms now show as 00Z

  $ curl -s \
   -H "Content-Type: application/json" \
   -H "X-Auth-Token: $ADMIN_TOKEN" \
  -X GET   $KEYSTONE_ENDPOINT:35357/v2.0/tokens/$USER_TOKEN | python -mjson.tool

  get token portion of the response contains 'expires' with ms = 00Z

  ],
  "token": {
  "audit_ids": [
  "lZwaM7oaShCZGQt0A9FaKA"
  ],
  "expires": "2015-05-28T20:27:24.00Z",
  "id": 
"gABVZ14MKoaOBq4WBHaF1fqEKrN_nTrYYhwi8xrAisWmyJ52DJOrVlyxAoUuL_tfrGhslYVffRTosF5FqQVYlNq6hqU-qGzhueC4xVJZL8oitv0PfOdGfLgAWM1pciuiIdDLnWb-6oNrgZ9l1lHqn1kyuO0JVmS_YJfYI4YOt0o7ZfJhzFQ=",
  "issued_at": "2015-05-28T18:27:24.00Z",
  "tenant": {
  "description": "Test tenant ...",
  "enabled": true,
  "id": "1c6e0d2ac4bf4cd5bc7666d86b28aee0",
  "name": "testTenantName",
  "parent_id": null
  }
  },
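
  A hedged sketch of the likely mechanism, assuming the expiry is packed as
  an integer unix timestamp (which drops sub-second precision):

  import calendar
  import datetime

  expires = datetime.datetime(2015, 5, 28, 20, 50, 56, 15102)
  timestamp = calendar.timegm(expires.utctimetuple())  # microseconds dropped
  restored = datetime.datetime.utcfromtimestamp(timestamp)
  print(restored.isoformat())  # 2015-05-28T20:50:56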

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454309] Re: Keystone v3 user/tenant lookup by name via OpenStack CLI client fails

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454309

Title:
  Keystone v3 user/tenant lookup by name via OpenStack CLI client fails

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  When using the openstack CLI client to look up users/tenants by name
  (e.g., openstack user show admin or openstack project show
  AdminTenant), it fails with a 500 and the following traceback:

  2015-05-12 09:27:22.483530 2015-05-12 09:27:22.483 31012 DEBUG 
keystone.common.ldap.core [-] LDAP search: base=ou=People,dc=local,dc=lan 
scope=2 filterstr=(&(&None(sn=admin))(objectClass=inetOrgPerson)) attrs=['cn', 
'userPassword', 'enabled', 'sn', 'mail'] attrsonly=0 search_s 
/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py:931
  2015-05-12 09:27:22.483677 2015-05-12 09:27:22.483 31012 DEBUG 
keystone.common.ldap.core [-] LDAP unbind unbind_s 
/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py:904
  2015-05-12 09:27:22.485831 2015-05-12 09:27:22.483 31012 ERROR 
keystone.common.wsgi [-] {'desc': 'Bad search filter'}
  2015-05-12 09:27:22.485874 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi Traceback (most recent call last):
  2015-05-12 09:27:22.485881 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 239, in 
__call__
  2015-05-12 09:27:22.485885 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi result = method(context, **params)
  2015-05-12 09:27:22.485897 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 202, in 
wrapper
  2015-05-12 09:27:22.485901 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, context, filters, **kwargs)
  2015-05-12 09:27:22.485904 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/controllers.py", line 223, 
in list_users
  2015-05-12 09:27:22.485908 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi hints=hints)
  2015-05-12 09:27:22.485911 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 52, in 
wrapper
  2015-05-12 09:27:22.485915 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, *args, **kwargs)
  2015-05-12 09:27:22.485919 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 342, in 
wrapper
  2015-05-12 09:27:22.485922 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, *args, **kwargs)
  2015-05-12 09:27:22.485926 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 353, in 
wrapper
  2015-05-12 09:27:22.485930 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, *args, **kwargs)
  2015-05-12 09:27:22.485933 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 791, in 
list_users
  2015-05-12 09:27:22.485937 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi ref_list = driver.list_users(hints)
  2015-05-12 09:27:22.485941 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 82, 
in list_users
  2015-05-12 09:27:22.485944 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return self.user.get_all_filtered(hints)
  2015-05-12 09:27:22.485948 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 
269, in get_all_filtered
  2015-05-12 09:27:22.485951 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return [self.filter_attributes(user) for user in 
self.get_all(query)]
  2015-05-12 09:27:22.485964 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 1863, in 
get_all
  2015-05-12 09:27:22.485968 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi for x in self._ldap_get_all(ldap_filter)
  2015-05-12 09:27:22.485972 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 1467, in 
_ldap_get_all
  2015-05-12 09:27:22.485975 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi attrs)
  2015-05-12 09:27:22.485979 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 944, in 
search_
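
  The first debug line in the traceback above shows the query fragment rendered
  as the literal string "None" inside the LDAP filter. A minimal, self-contained
  sketch of how that produces a "Bad search filter" (illustrative names, not the
  actual Keystone code path):

    def build_filter(query, name_attr, name, object_class):
        return '(&(&%s(%s=%s))(objectClass=%s))' % (query, name_attr, name,
                                                    object_class)

    # A None fragment is interpolated as the text "None", which the LDAP
    # server rejects as a bad search filter:
    print(build_filter(None, 'sn', 'admin', 'inetOrgPerson'))
    # (&(&None(sn=admin))(objectClass=inetOrgPerson))

    # Treating a missing fragment as an empty string yields a valid filter:
    print(build_filter('', 'sn', 'admin', 'inetOrgPerson'))
    # (&(&(sn=admin))(objectClass=inetOrgPerson))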

[Yahoo-eng-team] [Bug 1465444] Re: Fernet key rotation removing keys early

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1465444

Title:
  Fernet key rotation removing keys early

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  When setting up Fernet key rotation with the maximum number of active
  keys set to 25, it turned out that 'keystone-manage fernet_rotate'
  started deleting two keys once there were 13 existing keys. It
  would waver between 12 and 13 keys every time it was rotated. It looks
  like this might be related to the range of keys to remove being
  negative:

  excess_keys = (keys[:len(key_files) - CONF.fernet_tokens.max_active_keys + 1])
  .. ends up being excess_keys = (keys[:-11])
  .. which seems to be dipping back into the range of keys that should still be good and removing those.

  Adding something like: "if len(key_files) -
  CONF.fernet_tokens.max_active_keys + 1 >= 0" for the purge excess keys
  section seemed to allow us to generate all 25 keys, then rotate as
  normal. Once we hit the full 25 keys, this additional line was no
  longer needed.
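
  A standalone sketch of the slicing behaviour described above, assuming 13
  keys on disk and max_active_keys = 25 (key names are made up):

    # Standalone sketch; key names are made up, counts are from the report.
    max_active_keys = 25
    key_files = ['key-%d' % i for i in range(13)]   # 13 keys currently on disk
    keys = sorted(key_files)

    # The slice from the rotation code: 13 - 25 + 1 == -11, so instead of
    # selecting nothing it selects everything but the last 11 keys, i.e. it
    # removes keys that should still be valid.
    excess_keys = keys[:len(key_files) - max_active_keys + 1]
    print(excess_keys)          # ['key-0', 'key-1'] -- two good keys removed

    # The guard suggested above avoids the negative slice:
    if len(key_files) - max_active_keys + 1 >= 0:
        excess_keys = keys[:len(key_files) - max_active_keys + 1]
    else:
        excess_keys = []
    print(excess_keys)          # []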

  Attaching some log information showing the available keys going from
  12, 13, 12, 13.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1465444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287757] Re: Optimization: Don't prune events on every get

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1287757

Title:
  Optimization:  Don't prune events on every get

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in Keystone liberty series:
  Fix Released

Bug description:
  _prune_expired_events_and_get always locks the backend. Store the time
  of the oldest event so that the prune process can be skipped if none
  of the events have timed out.

  (decided at keystone midcycle - 2015/07/17) -- MorganFainberg
  The easiest solution is to do the prune on issuance of a new revocation
  event instead of on the get.
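
  A hypothetical sketch of the cached-oldest-timestamp idea (class and
  attribute names are made up, not Keystone's actual revocation backend API):

    import datetime


    class RevocationEvents(object):
        def __init__(self, token_ttl):
            self._events = []                 # list of dicts with 'issued_at'
            self._oldest_issued = None        # cached time of the oldest event
            self._token_ttl = token_ttl       # datetime.timedelta

        def add(self, event):
            # Prune on issuance of a new event (the midcycle decision)
            # rather than on every get.
            self._prune()
            self._events.append(event)
            if self._oldest_issued is None:
                self._oldest_issued = event['issued_at']

        def get_all(self):
            # Skip the locking/prune entirely while even the oldest event
            # is still within the TTL.
            now = datetime.datetime.utcnow()
            if (self._oldest_issued is None or
                    now - self._oldest_issued < self._token_ttl):
                return list(self._events)
            self._prune()
            return list(self._events)

        def _prune(self):
            cutoff = datetime.datetime.utcnow() - self._token_ttl
            self._events = [e for e in self._events if e['issued_at'] > cutoff]
            self._oldest_issued = (min(e['issued_at'] for e in self._events)
                                   if self._events else None)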

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1287757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448286] Re: unicode query string raises UnicodeEncodeError

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1448286

Title:
  unicode query string raises UnicodeEncodeError

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  The logging in keystone.common.wsgi is unable to handle unicode query
  strings. The simplest example would be:

$ curl http://localhost:35357/?Ϡ

  This will fail with a backtrace similar to:

2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi   File 
".../keystone/keystone/common/wsgi.py", line 234, in __call__
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi 'params': 
urllib.urlencode(req.params)})
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/urllib.py", line 1311, in urlencode
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi k = 
quote_plus(str(k))
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi 
UnicodeEncodeError: 'ascii' codec can't encode character u'\u03e0' in position 
0: ordinal not in range(128)
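
  A minimal Python 2 sketch of the failing call and one way the logging could
  be made safe (UTF-8 encoding the parameters first; an illustration, not the
  actual patch):

    # -*- coding: utf-8 -*-
    import urllib

    params = {u'\u03e0': u''}      # the query string ?\u03e0 parsed into params

    try:
        urllib.urlencode(params)
    except UnicodeEncodeError as exc:
        print('urlencode failed: %s' % exc)

    # Encoding to UTF-8 byte strings first avoids the implicit str() call
    # that blows up on non-ASCII keys/values.
    safe = dict((k.encode('utf-8'), v.encode('utf-8'))
                for k, v in params.items())
    print(urllib.urlencode(safe))   # %CF%A0=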

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1448286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479943] Re: XmlBodyMiddleware stubs break existing configs

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1479943

Title:
  XmlBodyMiddleware stubs break existing configs

Status in Keystone:
  Invalid
Status in Keystone kilo series:
  Fix Released

Bug description:
  The Kilo Keystone release dropped support for requests with XML
  bodies, but included shims to (presumably) prevent existing configs
  from breaking. This works as desired for XmlBodyMiddleware, but not
  XmlBodyMiddlewareV2 and XmlBodyMiddlewareV3. As a result, all client
  requests to a pipeline with either of those filters will receive a 500
  response and the server's logs look like:

  2015-07-30 19:06:57.029 22048 DEBUG keystone.middleware.core [-] RBAC: 
auth_context: {} process_request 
/vagrant/swift3/.tox/keystone/local/lib/python2.7/site-packages/keystone/middleware/core.py:239
  2015-07-30 19:06:57.029 22048 ERROR keystone.common.wsgi [-] 
'XmlBodyMiddlewareV2' object has no attribute 'application'
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi   File 
"/vagrant/swift3/.tox/keystone/local/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 452, in __call__
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi response = 
request.get_response(self.application)
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi AttributeError: 
'XmlBodyMiddlewareV2' object has no attribute 'application'
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi
  2015-07-30 19:06:57.055 22048 INFO eventlet.wsgi.server [-] 127.0.0.1 - - 
[30/Jul/2015 19:06:57] "GET /v2.0/tenants HTTP/1.1" 500 423 0.027812
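
  The AttributeError suggests the V2/V3 stubs never store the wrapped
  application. A minimal sketch of a pass-through WSGI shim that would keep
  old pipelines working (generic WSGI code, not the actual Keystone patch):

    class XmlBodyMiddlewareShim(object):
        """Accepts the old paste pipeline entry but only forwards requests."""

        def __init__(self, application, *args, **kwargs):
            # Storing the wrapped app is what the failing stubs appear to miss,
            # hence "'XmlBodyMiddlewareV2' object has no attribute 'application'".
            self.application = application

        def __call__(self, environ, start_response):
            return self.application(environ, start_response)


    def filter_factory(global_conf, **local_conf):
        """Paste filter factory so the shim can be referenced from paste.ini."""
        def _filter(app):
            return XmlBodyMiddlewareShim(app)
        return _filter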

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1479943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453264] Re: iptables_manager can run very slowly when a large number of security group rules are present

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453264

Title:
  iptables_manager can run very slowly when a large number of security
  group rules are present

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  We have customers that typically add a few hundred security group
  rules or more.  We also typically run 30+ VMs per compute node.  When
  about 10+ VMs with a large SG set all get scheduled to the same node,
  the L2 agent (OVS) can spend many minutes in the
  iptables_manager.apply() code, so much so that by the time all the
  rules are updated, the VM has already tried DHCP and failed, leaving
  it in an unusable state.

  While there have been some patches that tried to address this in Juno
  and Kilo, they've either not helped as much as necessary, or broken
  SGs completely due to re-ordering of the iptables rules.

  I've been able to show some pretty bad scaling with just a handful of
  VMs running in devstack based on today's code (May 8th, 2015) from
  upstream Openstack.

  Here's what I tested:

  1. I created a security group with 1000 TCP port rules (you could
  alternately have a smaller number of rules and more VMs, but it's
  quicker this way)

  2. I booted VMs, specifying both the default and "large" SGs, and
  timed from the second Neutron "learned" about the port until
  it completed its work

  3. I got a :( pretty quickly

  And here's some data:

  1-3 VM - didn't time, less than 20 seconds
  4th VM - 0:36
  5th VM - 0:53
  6th VM - 1:11
  7th VM - 1:25
  8th VM - 1:48
  9th VM - 2:14

  While it's busy adding the rules, the OVS agent is consuming pretty
  close to 100% of a CPU for most of this time (from top):

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  25767 stack     20   0  157936  76572   4416 R  89.2  0.5  50:14.28 python

  And this is with only ~10K rules at this point!  When we start
  crossing the 20K point, VM boot failures start to happen.

  I'm filing this bug since we need to take a closer look at this in
  Liberty and fix it; it's been this way since Havana and needs some
  TLC.

  I've attached a simple script I've used to recreate this, and will
  start taking a look at options here.
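
  A rough reproduction sketch along the lines described (this is not the
  attached script; the endpoint, credentials, group name and port range are
  all made up):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    sg = neutron.create_security_group(
        {'security_group': {'name': 'large-sg'}})['security_group']

    # ~1000 single-port TCP ingress rules
    for port in range(10000, 11000):
        neutron.create_security_group_rule(
            {'security_group_rule': {'security_group_id': sg['id'],
                                     'direction': 'ingress',
                                     'protocol': 'tcp',
                                     'port_range_min': port,
                                     'port_range_max': port}})

    # Then boot VMs with both 'default' and 'large-sg' attached (e.g.
    # "nova boot ... --security-groups default,large-sg") and time how long
    # the OVS agent spends in iptables_manager.apply() for each new port.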

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471967] Re: Fernet unit tests do not test persistence logic

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1471967

Title:
  Fernet unit tests do not test persistence logic

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  There are some unit tests for the Fernet token provider that live
  outside of the functional-like tests (test_v3_auth.py, for example)
  [0]. These tests should include a test to assert that the Fernet token
  provider returns False when asked if its tokens need persistence [1].


  [0] 
https://github.com/openstack/keystone/blob/992d9ecbf4f563c42848147d4d66f8ec8efd4df0/keystone/tests/unit/token/test_fernet_provider.py
  [1] 
https://github.com/openstack/keystone/blob/992d9ecbf4f563c42848147d4d66f8ec8efd4df0/keystone/token/providers/fernet/core.py#L36-L38
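
  A sketch of the kind of assertion being asked for (illustrative only; the
  real test would use the project's test base classes and a Fernet key
  repository fixture):

    # Illustrative sketch only, not the actual test file. Instantiating the
    # provider assumes a usable Fernet key repository is configured; the real
    # unit tests set one up with the project's fixtures first.
    import unittest

    from keystone.token.providers.fernet import core as fernet_core


    class TestFernetTokenProvider(unittest.TestCase):
        def test_needs_persistence_is_false(self):
            provider = fernet_core.Provider()
            self.assertFalse(provider.needs_persistence())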

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1471967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

