[Yahoo-eng-team] [Bug 1445026] Re: glance-manage db load_metadefs does not load tags correctly

2015-04-23 Thread Thierry Carrez
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
Milestone: None => kilo-rc2

** Changed in: glance/kilo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1445026

Title:
  glance-manage db load_metadefs does not load tags correctly

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance kilo series:
  New

Bug description:
  The script that populates the DB with metadefs does not load tags correctly.
  It looks for an ID in the .json file while it should look for the name of a
  tag. As a result, a user can't load tags into the database without providing
  an unnecessary ID in the .json file. It may also lead to conflicts in the DB
  and unhandled exceptions.
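
  A minimal sketch of the intended behaviour (illustrative only, not the
  actual patch; the file layout and the metadef_tag_create helper are
  assumptions): the loader should key on each tag's name and let the DB
  assign ids, instead of requiring an id in the .json file.

    import json

    def load_tag_names(path):
        """Return the tag names defined in a metadef .json file."""
        with open(path) as f:
            data = json.load(f)
        # Only the name is needed; letting the DB assign ids avoids
        # conflicts when the file is loaded more than once.
        return [tag['name'] for tag in data.get('tags', [])]

    def load_tags(db_api, namespace, path):
        for name in load_tag_names(path):
            # metadef_tag_create is a stand-in for the real DB API call.
            db_api.metadef_tag_create(namespace, {'name': name})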

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1445026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447463] Re: glance.tests.functional.v2.test_images.TestImages.test_download_random_access failed

2015-04-23 Thread Thierry Carrez
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
Milestone: None => kilo-rc2

** Changed in: glance/kilo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447463

Title:
  glance.tests.functional.v2.test_images.TestImages.test_download_random_access
  failed

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in Glance kilo series:
  New

Bug description:
  The error message is below.

  Traceback (most recent call last):
    File "tools/colorizer.py", line 326, in <module>
      if runner.run(test).wasSuccessful():
    File "/usr/lib/python2.7/unittest/runner.py", line 158, in run
      result.printErrors()
    File "tools/colorizer.py", line 305, in printErrors
      self.printErrorList('FAIL', self.failures)
    File "tools/colorizer.py", line 315, in printErrorList
      self.stream.writeln("%s" % err)
    File "/usr/lib/python2.7/unittest/runner.py", line 24, in writeln
      self.write(arg)
  UnicodeEncodeError: 'ascii' codec can't encode characters in position
  600-602: ordinal not in range(128)

  There is a GET request to the glance server:

  response = requests.get(path, headers=headers)

  The text in this response is of type unicode, e.g.
  '\x1f\x8b\x08\x00\x00\x00\x00\x00\x02\xff\x8b\x02\x00gW\xbcY\x01\x00\x00\x00'

  The ascii codec can't encode this unicode data.

  This issue also affects other tests such as test_image_life_cycle.
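
  A hedged sketch of the mechanics (the URL below is a placeholder):
  requests decodes the body into unicode via response.text, and writing
  that unicode through an ASCII-only stream raises UnicodeEncodeError for
  bytes >= 0x80. Treating the image payload as bytes, or encoding
  explicitly, avoids it.

    import requests

    response = requests.get("http://glance.example:9292/v2/images/ID/file")

    # response.text is unicode and may contain non-ASCII code points;
    # printing it to an ASCII-only stream raises UnicodeEncodeError.
    payload = response.content                  # raw bytes: safe for binary data
    printable = response.text.encode('utf-8')   # explicit encoding if text is needed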

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-04-23 Thread Sean Dague
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Committed
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is that pkill -g no longer
  reliably kills off the services:

  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo; it's unclear at this point which one is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434578] Re: Inefficient db call while doing a image_get with image_id.

2015-04-23 Thread Thierry Carrez
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
Milestone: None => kilo-rc2

** Changed in: glance/kilo
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1434578

Title:
  Inefficient db call while doing a image_get with image_id.

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  New

Bug description:
  I am running MySQL as the backend and I can see two queries being issued
  when I try to get an image.

  In the first query the tables images, image_locations and image_properties
  are joined, and in another query the tags related to that image are fetched
  from the db. These two can be combined into one query which does a
  join of images, image_locations, image_properties and image_tags (a join
  of these four tables happens when a call is made to get a list of all
  images).

  Reference:
  https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L69-74
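
  A rough sketch of the idea with SQLAlchemy (the Image model and its
  locations/properties/tags relationships are placeholders, not Glance's
  actual models or the committed fix): eager-load the tags in the same
  query that already joins locations and properties, instead of issuing a
  second query.

    from sqlalchemy.orm import joinedload

    def image_get(session, image_id):
        # 'Image' stands in for the mapped image model.
        return (session.query(Image)
                .options(joinedload(Image.locations),
                         joinedload(Image.properties),
                         joinedload(Image.tags))  # avoids the extra tags query
                .filter_by(id=image_id)
                .one())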

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1434578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447642] [NEW] Ironic: Driver should validate the node's properties

2015-04-23 Thread Lucas Alvares Gomes
Public bug reported:

The Ironic nova driver looks at the Ironic node to fetch the number of
CPUs, the amount of memory, disk, etc., but it doesn't validate any of
these properties, making it fail horribly if the node is misconfigured. It
should be more resilient to failures.

Traceback:

Apr 21 11:11:33 nova-compute[32119]: Traceback (most recent call last):
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/eventlet/queue.py, line 117, in switch
Apr 21 11:11:33 nova-compute[32119]: self.greenlet.switch(value)
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 214, in main
Apr 21 11:11:33 nova-compute[32119]: result = function(*args, **kwargs)
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/openstack/common/service.py, line 497, 
in run_service
Apr 21 11:11:33 nova-compute[32119]: service.start()
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/service.py, line 183, in start
Apr 21 11:11:33 nova-compute[32119]: self.manager.pre_start_hook()
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1250, in 
pre_start_hook
Apr 21 11:11:33 nova-compute[32119]: 
self.update_available_resource(nova.context.get_admin_context())
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 6197, in 
update_available_resource
Apr 21 11:11:33 nova-compute[32119]: rt.update_available_resource(context)
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 376, 
in update_available_resource
Apr 21 11:11:33 nova-compute[32119]: resources = 
self.driver.get_available_resource(self.nodename)
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py, line 535, in 
get_available_resource
Apr 21 11:11:33 nova-compute[32119]: return self._node_resource(node)
Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py, line 228, in 
_node_resource
Apr 21 11:11:33 nova-compute[32119]: vcpus = int(node.properties.get('cpus', 0))
Apr 21 11:11:33 nova-compute[32119]: ValueError: invalid literal for int() with 
base 10: 'None'

+------------+---------------------------------------------+
| Property   | Value                                       |
+------------+---------------------------------------------+
| ...        | ...                                         |
| properties | {u'memory_mb': u'None', u'cpu_arch': None,  |
|            |  u'local_gb': u'None', u'cpus': u'None',    |
|            |  u'capabilities': u'boot_option:local'}     |
| ...        | ...                                         |
+------------+---------------------------------------------+
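
A minimal sketch of the kind of defensive parsing the driver could do
(illustrative only; the helper name and defaults are assumptions, not the
actual fix):

    def _parse_node_property(properties, key, type_fn, default=0):
        """Coerce properties[key] with type_fn, or return default when the
        value is missing or malformed (e.g. the string 'None')."""
        try:
            return type_fn(properties.get(key))
        except (TypeError, ValueError):
            # Misconfigured node: fall back instead of crashing the whole
            # resource-tracker run.
            return default

    # vcpus = _parse_node_property(node.properties, 'cpus', int)
    # memory_mb = _parse_node_property(node.properties, 'memory_mb', int)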

** Affects: nova
 Importance: Undecided
 Assignee: Lucas Alvares Gomes (lucasagomes)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447642

Title:
  Ironic: Driver should validate the node's properties

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Ironic nova driver looks at the Ironic node to fetch the number of
  CPUs, the amount of memory, disk, etc., but it doesn't validate any of
  these properties, making it fail horribly if the node is misconfigured.
  It should be more resilient to failures.

  Traceback:

  Apr 21 11:11:33 nova-compute[32119]: Traceback (most recent call last):
  Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/eventlet/queue.py, line 117, in switch
  Apr 21 11:11:33 nova-compute[32119]: self.greenlet.switch(value)
  Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 214, in main
  Apr 21 11:11:33 nova-compute[32119]: result = function(*args, **kwargs)
  Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/openstack/common/service.py, line 497, 
in run_service
  Apr 21 11:11:33 nova-compute[32119]: service.start()
  Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/service.py, line 183, in start
  Apr 21 11:11:33 nova-compute[32119]: self.manager.pre_start_hook()
  Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1250, in 
pre_start_hook
  Apr 21 11:11:33 nova-compute[32119]: 
self.update_available_resource(nova.context.get_admin_context())
  Apr 21 11:11:33 nova-compute[32119]: File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 6197, in 
update_available_resource
  Apr 21 11:11:33 nova-compute[32119]: rt.update_available_resource(context)
  Apr 21 11:11:33 nova-compute[32119]: File 

[Yahoo-eng-team] [Bug 1276639] Re: block live migration does not work when a volume is attached

2015-04-23 Thread Timofey Durakov
According to http://lists.openstack.org/pipermail/openstack-dev/2014-June/038152.html,
nova should not allow block-migrating instances with attached volumes until
the libvirt functionality is extended.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276639

Title:
  block live migration does not work when a volume is attached

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Environment:
   - Two compute nodes, running Ubuntu 12.04 LTS
   - KVM Hypervisor
   - Ceph (dumpling) back-end for Cinder
   - Grizzly-level Openstack

  Steps to reproduce:
   1) Create instance and volume
   2) Attach volume to instance
   3) Attempt a block migration between compute nodes - eg: nova live-migration 
--block-migrate 9b85b983-dced-4574-b14c-c72e4d92982a

  Packages:
  ii  ceph 0.67.5-1precise
  ii  ceph-common  0.67.5-1precise
  ii  ceph-fs-common   0.67.5-1precise
  ii  ceph-fuse0.67.5-1precise
  ii  ceph-mds 0.67.5-1precise
  ii  curl 7.29.0-1precise.ceph
  ii  kvm  
1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu14.13
  ii  kvm-ipxe 1.0.0+git-3.55f6c88-0ubuntu1
  ii  libcephfs1   0.67.5-1precise
  ii  libcurl3 7.29.0-1precise.ceph
  ii  libcurl3-gnutls  7.29.0-1precise.ceph
  ii  libleveldb1  1.12.0-1precise.ceph
  ii  nova-common  1:2013.1.4-0ubuntu1~cloud0
  ii  nova-compute 1:2013.1.4-0ubuntu1~cloud0
  ii  nova-compute-kvm 1:2013.1.4-0ubuntu1~cloud0
  ii  python-ceph  0.67.5-1precise
  ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0
  ii  python-nova  1:2013.1.4-0ubuntu1~cloud0
  ii  python-novaclient1:2.13.0-0ubuntu1~cloud0
  ii  qemu-common  1.0+noroms-0ubuntu14.13
  ii  qemu-kvm 1.0+noroms-0ubuntu14.13
  ii  qemu-utils   1.0+noroms-0ubuntu14.13
  ii  libvirt-bin  1.0.2-0ubuntu11.13.04.5~cloud1
  ii  libvirt0 1.0.2-0ubuntu11.13.04.5~cloud1
  ii  python-libvirt   1.0.2-0ubuntu11.13.04.5~cloud1

  /var/log/nova/nova-compute on source:

  2014-02-05 16:36:46.014 998 INFO nova.compute.manager [-] Lifecycle event 2 
on VM 9b85b983-dced-4574-b14c-c72e4d92982a
  2014-02-05 16:36:46.233 998 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has 
a pending task. Skip.
  2014-02-05 16:36:46.234 998 INFO nova.compute.manager [-] Lifecycle event 2 
on VM 9b85b983-dced-4574-b14c-c72e4d92982a
  2014-02-05 16:36:46.468 998 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has 
a pending task. Skip.
  2014-02-05 16:41:09.029 998 INFO nova.compute.manager [-] Lifecycle event 1 
on VM 9b85b983-dced-4574-b14c-c72e4d92982a
  2014-02-05 16:41:09.265 998 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has 
a pending task. Skip.
  2014-02-05 16:41:09.640 998 ERROR nova.virt.libvirt.driver [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] Live Migration failure: Unable to read 
from monitor: Connection reset by peer
  2014-02-05 16:41:12.165 998 WARNING nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] Instance shutdown by itself. Calling the 
stop API.
  2014-02-05 16:41:12.398 998 INFO nova.virt.libvirt.driver [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] Instance destroyed successfully.

  /var/log/libvirt/libvirtd.log on source:

  2014-02-05 14:41:07.607+: 3437: error : qemuMonitorIORead:502 : Unable to 
read from monitor: Connection reset by peer
  2014-02-05 14:41:09.633+: 3441: error : 
virNetClientProgramDispatchError:175 : An error occurred, but the cause is 
unknown
  2014-02-05 14:41:09.634+: 3441: error : 
qemuDomainObjEnterMonitorInternal:997 : operation failed: domain is no longer 
running
  2014-02-05 14:41:09.634+: 3441: warning : doPeer2PeerMigrate3:2872 : 
Guest instance-0315 probably left in 'paused' state on source

  /var/log/nova/nova-compute.log on target:

  2014-02-05 16:36:38.841 INFO nova.virt.libvirt.driver 
[req-0f0eaabf-9e29-4d45-88c9-20194be51d49 aaf3e92b69e04958b43348677ab7b38b 
1859d80f51ff4180b591f7fe2668fd68] Instance launched has CPU info:
  {vendor: Intel, model: SandyBridge, arch: x86_64, features: 
[pdpe1gb, osxsave, dca, pcid, pdcm, xtpr, tm2, est, smx, 
vmx, ds_cpl, monitor, dtes64, pbe, tm, ht, ss, acpi, ds, 
vme], topology: {cores: 6, 

[Yahoo-eng-team] [Bug 1304099] Re: link prefixes are truncated

2015-04-23 Thread Deliang Fan
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Deliang Fan (vanderliang)

** Changed in: cinder
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304099

Title:
  link prefixes are truncated

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The osapi_glance_link_prefix and osapi_compute_link_prefix
  configuration parameters have their paths removed. For instance, if
  nova.conf contains

  osapi_compute_link_prefix = http://127.0.0.1/compute/

  the values displayed in the API response exclude the compute/
  component. Other services, such as keystone, retain the path.

  This bit of code is where the bug occurs:

  
https://github.com/openstack/nova/blob/673ecaea3935b6a50294f24f8a964590ca07a959/nova/api/openstack/common.py#L568-L582
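
  A hedged sketch of prefix rewriting that keeps the path component
  (illustrative only, not the exact nova code): replace the scheme and
  netloc of the original link, and splice in any path carried by the
  configured prefix instead of dropping it.

    import urlparse  # six.moves.urllib.parse on Python 3

    def update_link_prefix(orig_url, prefix):
        """Rebuild orig_url so it starts with prefix, keeping prefix's path."""
        if not prefix:
            return orig_url
        url_parts = list(urlparse.urlsplit(orig_url))
        prefix_parts = list(urlparse.urlsplit(prefix))
        url_parts[0] = prefix_parts[0]   # scheme
        url_parts[1] = prefix_parts[1]   # netloc
        # Keep the prefix path (e.g. '/compute') instead of discarding it.
        url_parts[2] = prefix_parts[2].rstrip('/') + url_parts[2]
        return urlparse.urlunsplit(url_parts)

    # update_link_prefix('http://10.0.0.1:8774/v2/servers/1',
    #                    'http://127.0.0.1/compute/')
    # -> 'http://127.0.0.1/compute/v2/servers/1'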

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1304099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447327] Re: Glance v1 should now be in SUPPORTED status

2015-04-23 Thread Thierry Carrez
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
Milestone: None => kilo-rc2

** Changed in: glance/kilo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447327

Title:
  Glance v1 should now be in SUPPORTED status

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  New

Bug description:
  As per title

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303690] Re: nova live-migration slow when using volumes

2015-04-23 Thread Timofey Durakov
Moved bug to 'Invalid' state.
Here is bug https://bugs.launchpad.net/nova/+bug/1398999, which closes this
functionality in nova.
The patch from it has been proposed to stable/juno.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303690

Title:
  nova live-migration slow when using volumes

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have block live migration configured in my environment (no shared storage) 
and it is very fast for instances which don't use volumes. An instance with 
2.5G disk image takes ~40 seconds to migrate to different host.
  When I migrate instances which do use ceph backed volumes they take much 
longer and it depends on the volume size. For example migration of an instance 
with 1G volume takes around 1 minute, 10G ~8 minutes and with 50G I had to wait 
nearly 50 minutes for the process to complete. It completes without errors 
every time, it is just very slow.

  I was looking at the network traffic during migration and it looks a
  bit strange. Let's say I am migrating an instance with a 50G volume from
  compute node A to compute node B, and ceph is running on hosts X, Y and
  Z.

  I initiate live migration and as expected there is lots of traffic going from 
host A to B, this lasts less than 1 minute (disk image transfer). Then traffic 
from A to B goes down to ~200Mbit/s and stays at this level until migration is 
completed.
  After the initial traffic burst between hosts A and B, host B starts sending data
to the ceph nodes X, Y and Z. I can see between 40 and 80 Mbit/s going from
host B to each of the ceph nodes. This continues for ~50 minutes, then
migration completes and network traffic idles.

  Every time I tried, the migration eventually completed fine, but for
  instances with, let's say, a 200G volume it could take nearly 4 hours to
  complete.

  I am using havana on precise.

  Compute nodes:
  ii  nova-common  1:2013.2.2-0ubuntu1~cloud0
  ii  nova-compute 1:2013.2.2-0ubuntu1~cloud0
  ii  nova-compute-kvm 1:2013.2.2-0ubuntu1~cloud0

  Ceph:
  ii  ceph 0.67.4-0ubuntu2.2~cloud0
  ii  ceph-common  0.67.4-0ubuntu2.2~cloud0
  ii  libcephfs1   0.67.4-0ubuntu2.2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447653] [NEW] x509 keypair cannot be created if the given subject is too long

2015-04-23 Thread Claudiu Belu
Public bug reported:

Currently, the subject created for the x509 certificate is too long,
resulting in exceptions and a failure to create the keypair. (
https://github.com/openstack/nova/blob/master/nova/crypto.py#L370 )

Bug detected during novaclient functional tests for commit:
https://review.openstack.org/#/c/136458/

Logs: http://logs.openstack.org/58/136458/24/check/check-novaclient-
dsvm-
functional/ae7b130/logs/screen-n-api.txt.gz#_2015-04-23_09_23_16_289

** Affects: nova
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Claudiu Belu (cbelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447653

Title:
  x509 keypair cannot be created if the given subject is too long

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, the subject created for the x509 certificate is too long,
  resulting in exceptions and a failure to create the keypair. (
  https://github.com/openstack/nova/blob/master/nova/crypto.py#L370 )

  Bug detected during novaclient functional tests for commit:
  https://review.openstack.org/#/c/136458/

  Logs: http://logs.openstack.org/58/136458/24/check/check-novaclient-
  dsvm-
  functional/ae7b130/logs/screen-n-api.txt.gz#_2015-04-23_09_23_16_289
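
  For context, a hedged sketch of the constraint (the helper and the
  subject layout are illustrative, not the Nova fix): the X.509
  common-name field is limited to 64 characters (RFC 5280 ub-common-name),
  so a subject built from long user/project identifiers can exceed it;
  truncating before building the subject avoids the failure.

    X509_CN_MAX_LEN = 64  # RFC 5280 upper bound for commonName

    def build_x509_subject(user_id, project_id):
        cn = '%s.%s' % (project_id, user_id)
        # Truncate rather than let certificate generation blow up.
        return '/CN=%s' % cn[:X509_CN_MAX_LEN]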

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447732] [NEW] Cells: _heal_instances periodic task can lead to too much memory usage

2015-04-23 Thread Andrew Laski
Public bug reported:

During testing we found that if 220,000 instances match the updated_at
criteria in _heal_instances, they are all pulled from the DB, which can
lead to huge memory usage and the OOM killer stepping in. The large
number of updated_at instances could happen organically, but in our case
it was triggered by the _run_pending_deletes compute periodic task updating
a large number of instances.

** Affects: nova
 Importance: Undecided
 Assignee: Andrew Laski (alaski)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Andrew Laski (alaski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447732

Title:
  Cells: _heal_instances periodic task can lead to too much memory usage

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  During testing we found that if 220,000 instances match the updated_at
  criteria in _heal_instances, they are all pulled from the DB, which can
  lead to huge memory usage and the OOM killer stepping in. The large
  number of updated_at instances could happen organically, but in our
  case it was triggered by the _run_pending_deletes compute periodic task
  updating a large number of instances.
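
  A hedged sketch of the general mitigation (illustrative only, not the
  in-progress patch; instance_get_all_by_filters is a stand-in for the
  real DB call): pull the matching instances in fixed-size batches keyed
  on a marker, so the periodic task never holds the full result set in
  memory.

    BATCH_SIZE = 250

    def iter_updated_instances(db_api, context, updated_since):
        """Yield instances updated since 'updated_since' in small batches."""
        marker = None
        while True:
            batch = db_api.instance_get_all_by_filters(
                context, {'changes-since': updated_since},
                limit=BATCH_SIZE, marker=marker)
            if not batch:
                return
            for instance in batch:
                yield instance
            marker = batch[-1]['uuid']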

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441386] Re: keystone-manage domain_config_upload command yield 'CacheRegion' object has no attribute 'expiration_time'

2015-04-23 Thread Thierry Carrez
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441386

Title:
  keystone-manage domain_config_upload command yield 'CacheRegion'
  object has no attribute 'expiration_time'

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Released

Bug description:
  Steps to reproduce the error:

  1. Install devstack
  2. enable domain-specific driver feature
  
   domain_specific_drivers_enabled=true
   domain_config_dir=/etc/keystone/domains

  3. create a domain-specific conf file in /etc/keystone/domains/ (e.g.
/etc/keystone/domains/keystone.acme.conf)
  4. run 'keystone-manage domain_config_upload --domain-name acme' and you'll
see a traceback similar to this:

  keystone-manage domain_config_upload --domain-name acme
  4959 DEBUG keystone.notifications [-] Callback: 
`keystone.identity.core.Manager._domain_deleted` subscribed to event 
`identity.domain.deleted`. register_event_callback 
/opt/stack/keystone/keystone/notifications.py:292
  4959 DEBUG oslo_db.sqlalchemy.session [-] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _check_effective_sql_mode 
/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py:509
  4959 CRITICAL keystone [-] AttributeError: 'CacheRegion' object has no 
attribute 'expiration_time'
  4959 TRACE keystone Traceback (most recent call last):
  4959 TRACE keystone   File /usr/local/bin/keystone-manage, line 6, in 
module
  4959 TRACE keystone exec(compile(open(__file__).read(), __file__, 'exec'))
  4959 TRACE keystone   File /opt/stack/keystone/bin/keystone-manage, line 
44, in module
  4959 TRACE keystone cli.main(argv=sys.argv, config_files=config_files)
  4959 TRACE keystone   File /opt/stack/keystone/keystone/cli.py, line 600, 
in main
  4959 TRACE keystone CONF.command.cmd_class.main()
  4959 TRACE keystone   File /opt/stack/keystone/keystone/cli.py, line 543, 
in main
  4959 TRACE keystone status = dcu.run()
  4959 TRACE keystone   File /opt/stack/keystone/keystone/cli.py, line 513, 
in run
  4959 TRACE keystone self.read_domain_configs_from_files()
  4959 TRACE keystone   File /opt/stack/keystone/keystone/cli.py, line 481, 
in read_domain_configs_from_files
  4959 TRACE keystone os.path.join(conf_dir, fname), domain_name)
  4959 TRACE keystone   File /opt/stack/keystone/keystone/cli.py, line 399, 
in upload_config_to_database
  4959 TRACE keystone self.resource_manager.get_domain_by_name(domain_name))
  4959 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/dogpile/cache/region.py, line 1040, in 
decorate
  4959 TRACE keystone should_cache_fn)
  4959 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/dogpile/cache/region.py, line 629, in 
get_or_create
  4959 TRACE keystone expiration_time = self.expiration_time
  4959 TRACE keystone AttributeError: 'CacheRegion' object has no attribute 
'expiration_time'
  4959 TRACE keystone
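
  For context, dogpile.cache raises this AttributeError when a CacheRegion
  is used before region.configure() has run; a hedged sketch of guarding
  against that (illustrative only, not the keystone-manage fix; the
  is_configured check assumes a reasonably recent dogpile.cache):

    from dogpile.cache import make_region

    region = make_region()

    def ensure_region_configured(region):
        # A region created with make_region() has no 'expiration_time'
        # until configure() runs; using a cached method before that
        # raises AttributeError.
        if not region.is_configured:
            region.configure('dogpile.cache.memory', expiration_time=600)
        return region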

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441393] Re: Keystone and Ceilometer unit tests fail with pymongo 3.0

2015-04-23 Thread Thierry Carrez
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441393

Title:
  Keystone and Ceilometer unit tests fail with pymongo 3.0

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ceilometer icehouse series:
  Invalid
Status in Ceilometer juno series:
  Invalid
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed
Status in Keystone juno series:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Messaging and Notifications Service (Zaqar):
  New

Bug description:
  
  pymongo 3.0 was released 2015-04-07. This causes keystone tests to fail:

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 357, in test_correct_read_preference
      region.set(random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 345, in test_incorrect_read_preference
      random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 168, in client
      self.api.get_cache_collection()
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'
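
  A hedged compatibility sketch (not the merged fix): pymongo 3.0 removed
  read_preferences.mongos_enum, but the named constants are still exposed
  on pymongo.ReadPreference in both 2.x and 3.x, so a name like
  'SECONDARY_PREFERRED' can be resolved without the removed helper.

    import pymongo

    def resolve_read_preference(name):
        """Map e.g. 'SECONDARY_PREFERRED' to a pymongo read preference."""
        try:
            # Works on pymongo 2.x (integers) and 3.x (ServerMode objects).
            return getattr(pymongo.ReadPreference, name)
        except AttributeError:
            raise ValueError('Unknown read preference: %s' % name)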

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1441393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443598] Re: backend_argument containing a password leaked in logs

2015-04-23 Thread Thierry Carrez
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1443598

Title:
  backend_argument containing a password leaked in logs

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  In Progress
Status in Keystone juno series:
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Triaged

Bug description:
  The keystone.conf has an option backend_argument to set various
  options for the caching backend.  As documented, some of the potential
  values can contain a password.

  Snippet from
  http://docs.openstack.org/developer/keystone/developing.html#dogpile-
  cache-based-mongodb-nosql-backend

  [cache]
  # Global cache functionality toggle.
  enabled = True

  # Referring to specific cache backend
  backend = keystone.cache.mongo

  # Backend specific configuration arguments
  backend_argument = db_hosts:localhost:27017
  backend_argument = db_name:ks_cache
  backend_argument = cache_collection:cache
  backend_argument = username:test_user
  backend_argument = password:test_password

  As a result, passwords can be leaked to the keystone logs since the
  config option is not marked secret.
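
  A hedged sketch of the usual remedy with oslo.config (the option name
  matches the bug report, the rest of the declaration is illustrative):
  declaring the option with secret=True makes oslo.config mask its value
  when the effective configuration is logged.

    from oslo_config import cfg

    opts = [
        cfg.MultiStrOpt('backend_argument',
                        default=[],
                        secret=True,  # value shows up masked in log output
                        help='Arguments supplied to the backend module.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts, group='cache')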

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1443598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441827] Re: Cannot set per protocol remote_id_attribute

2015-04-23 Thread Thierry Carrez
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441827

Title:
  Cannot set per protocol remote_id_attribute

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Released

Bug description:
  Set up Federation with SSSD. It worked OK with

  [federation]
  remote_id_attribute=foo

  but not with

  [kerberos]
  remote_id_attribute=foo
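
  A hedged sketch of the expected lookup order (illustrative only, not
  keystone's actual code; 'conf' is assumed to behave like oslo.config's
  CONF object): prefer the protocol-specific section and fall back to the
  [federation] default.

    def get_remote_id_attribute(conf, protocol):
        """Prefer [<protocol>] remote_id_attribute, else the [federation] one."""
        group = getattr(conf, protocol, None)
        value = getattr(group, 'remote_id_attribute', None)
        if value:
            return value
        return conf.federation.remote_id_attribute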

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441300] Re: keystone-manage man page updates

2015-04-23 Thread Thierry Carrez
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441300

Title:
  keystone-manage man page updates

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Released

Bug description:
  
  The keystone-manage man page doesn't show any of the new fernet commands, so 
it's out of date.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
Milestone: kilo-rc3 => None

** No longer affects: nova/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Committed
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is that pkill -g no longer
  reliably kills off the services:

  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo; it's unclear at this point which one is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-04-23 Thread Thierry Carrez
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => kilo-rc3

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Committed
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) kilo series:
  In Progress
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is that pkill -g no longer
  reliably kills off the services:

  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo; it's unclear at this point which one is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447685] [NEW] Angular table: select-all checkbox shouldn't be selected if no rows

2015-04-23 Thread Kelly Domico
Public bug reported:

The Angularized table select-all checkbox is initially checked when
there aren't any rows in the table. It should be unchecked.

** Affects: horizon
 Importance: Undecided
 Assignee: Kelly Domico (kelly-domico)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kelly Domico (kelly-domico)

** Description changed:

- The Angularize table select-all checkbox is checked when there aren't
- any rows in the table. It should be unchecked.
+ The Angularize table select-all checkbox is initially checked when there
+ aren't any rows in the table. It should be unchecked.

** Description changed:

- The Angularize table select-all checkbox is initially checked when there
- aren't any rows in the table. It should be unchecked.
+ The Angularized table select-all checkbox is initially checked when
+ there aren't any rows in the table. It should be unchecked.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447685

Title:
  Angular table: select-all checkbox shouldn't be selected if no rows

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Angularized table select-all checkbox is initially checked when
  there aren't any rows in the table. It should be unchecked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1447685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-04-23 Thread Thierry Carrez
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Committed
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is that pkill -g no longer
  reliably kills off the services:

  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo; it's unclear at this point which one is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440493] Re: Crash with python-memcached==1.5.4

2015-04-23 Thread Thierry Carrez
** No longer affects: keystone/liberty

** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1440493

Title:
  Crash with python-memcached==1.5.4

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Identity (Keystone) Middleware:
  New

Bug description:
  There's some magic going on at line:
  
https://github.com/openstack/keystone/blob/2014.2.2/keystone/common/cache/_memcache_pool.py#L46

  This magic is broken by the fact that python-memcached added a super(...)
  initialization at
  https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L218
  
https://github.com/linsomniac/python-memcached/commit/45403325e0249ff0f61d6ae449a7daeeb7e852e5

  Due to this change, keystone can no longer work with the latest
  python-memcached version:

  Traceback (most recent call last):
File keystone/common/wsgi.py, line 223, in __call__
  result = method(context, **params)
File keystone/identity/controllers.py, line 76, in create_user
  self.assignment_api.get_project(default_project_id)
File dogpile/cache/region.py, line 1040, in decorate
  should_cache_fn)
File dogpile/cache/region.py, line 651, in get_or_create
  async_creator) as value:
File dogpile/core/dogpile.py, line 158, in __enter__
  return self._enter()
File dogpile/core/dogpile.py, line 91, in _enter
  value = value_fn()
File dogpile/cache/region.py, line 604, in get_value
  value = self.backend.get(key)
File dogpile/cache/backends/memcached.py, line 149, in get
  value = self.client.get(key)
File keystone/common/cache/backends/memcache_pool.py, line 35, in 
_run_method
  with self.client_pool.acquire() as client:
File /usr/lib/python2.7/contextlib.py, line 17, in __enter__
  return self.gen.next()
File keystone/common/cache/_memcache_pool.py, line 97, in acquire
  conn = self.get(timeout=self._connection_get_timeout)
File eventlet/queue.py, line 293, in get
  return self._get()
File keystone/common/cache/_memcache_pool.py, line 155, in _get
  conn = ConnectionPool._get(self)
File keystone/common/cache/_memcache_pool.py, line 120, in _get
  conn = self._create_connection()
File keystone/common/cache/_memcache_pool.py, line 149, in 
_create_connection
  return _MemcacheClient(self.urls, **self._arguments)
File memcache.py, line 228, in __init__
  super(Client, self).__init__()
  TypeError: super(type, obj): obj must be an instance or subtype of type
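
  For illustration only (not keystone or python-memcached code): in
  general, super(Client, self) raises exactly this TypeError when self is
  not an instance of a subclass of the class handed to super(). A minimal
  standalone reproduction with made-up class names:

    class Base(object):
        def __init__(self):
            super(Base, self).__init__()

    class Child(Base):
        pass

    Child()  # fine: Child really is a subclass of Base

    # A dynamically built class with the same name but unrelated ancestry
    # is no longer acceptable to the super() call inside Base.__init__:
    Fake = type('Child', (object,), {})
    try:
        Base.__init__(Fake())
    except TypeError as exc:
        print(exc)  # super(type, obj): obj must be an instance or subtype of type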

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1440493/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439857] Re: live-migration failure leave the port to BUILD state

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

** Tags removed: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439857

Title:
  live-migration failure leave the port to BUILD state

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  I've set up a lab where live migration can occur in block mode

  It seems that if I leave the default config, block live-migration
  fails.

  I can see that the port is left in BUILD state after the failure, but
  the VM is still running on the source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443798] Re: Restrict netmask of CIDR to avoid DHCP resync

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

** Tags removed: kilo-backport-potential kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443798

Title:
  Restrict netmask of CIDR to avoid DHCP resync

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Confirmed

Bug description:
  If any tenant creates a subnet with a netmask of /31 or /32 in IPv4,
  IP addresses for the network will fail to be generated, and that
  will cause constant resyncs and neutron-dhcp-agent malfunction.

  [Example operation 1]
   - Create subnet from CLI, with CIDR /31 (CIDR /32 has the same result).

  $ neutron subnet-create net 192.168.0.0/31 --name sub
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  |  |
  | cidr  | 192.168.0.0/31   |
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 192.168.0.1  |
  | host_routes   |  |
  | id| 42a91f59-1c2d-4e33-9033-4691069c5e4b |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | sub  |
  | network_id| 65cc6b46-17ec-41a8-9fe4-5bf93fc25d1e |
  | subnetpool_id |  |
  | tenant_id | 4ffb89e718d346b48fdce2ac61537bce |
  +---+--+

  [Example operation 2]
   - Create subnet from API, with cidr /32 (CIDR /31 has the same result).

  $ curl -i -X POST -H content-type:application/json -d '{subnet: { name: 
badsub, cidr : 192.168.0.0/32, ip_version: 4, network_id: 8
  8143cda-5fe7-45b6-9245-b1e8b75d28d8}}' -H x-auth-token:$TOKEN 
http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-4e7e74c0-0190-4a69-a9eb-93d545e8aeef
  Date: Thu, 16 Apr 2015 19:21:20 GMT

  {subnet: {name: badsub, enable_dhcp: true, network_id:
  88143cda-5fe7-45b6-9245-b1e8b75d28d8, tenant_id:
  4ffb89e718d346b48fdce2ac61537bce, dns_nameservers: [],
  gateway_ip: 192.168.0.1, ipv6_ra_mode: null, allocation_pools:
  [], host_routes: [], ip_version: 4, ipv6_address_mode: null,
  cidr: 192.168.0.0/32, id: d210d5fd-8b3b-4c0e-b5ad-
  41798bd47d97, subnetpool_id: null}}

  [Example operation 3]
   - Create subnet from API, with empty allocation_pools.

  $ curl -i -X POST -H content-type:application/json -d '{subnet: { name: 
badsub, cidr : 192.168.0.0/24, allocation_pools: [], ip_version: 4, 
network_id: 88143cda-5fe7-45b6-9245-b1e8b75d28d8}}' -H 
x-auth-token:$TOKEN http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-54ce81db-b586-4887-b60b-8776a2ebdb4e
  Date: Thu, 16 Apr 2015 19:18:21 GMT

  {subnet: {name: badsub, enable_dhcp: true, network_id:
  88143cda-5fe7-45b6-9245-b1e8b75d28d8, tenant_id:
  4ffb89e718d346b48fdce2ac61537bce, dns_nameservers: [],
  gateway_ip: 192.168.0.1, ipv6_ra_mode: null, allocation_pools:
  [], host_routes: [], ip_version: 4, ipv6_address_mode: null,
  cidr: 192.168.0.0/24, id: abc2dca4-bf8b-46f5-af1a-
  0a1049309854, subnetpool_id: null}}
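
  A hedged sketch of the kind of CIDR validation that prevents this
  (illustrative only, not the merged neutron patch; the helper name is an
  assumption). The DHCP agent trace log for the failing case follows
  below.

    import netaddr

    def validate_ipv4_subnet(cidr, allocation_pools=None):
        """Reject IPv4 CIDRs that cannot yield any allocatable addresses."""
        net = netaddr.IPNetwork(cidr)
        if net.version == 4 and net.prefixlen > 30:
            # /31 and /32 leave the DHCP agent nothing to hand out and
            # trigger endless resyncs.
            raise ValueError('IPv4 subnet %s has no usable addresses' % cidr)
        if allocation_pools is not None and not allocation_pools:
            raise ValueError('allocation_pools for %s is empty' % cidr)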

  [Trace log]
  2015-04-17 04:23:27.907 16641 DEBUG oslo_messaging._drivers.amqp [-] 
UNIQUE_ID is e0a6a81a005d4aa0b40130506afa0267. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-17 04:23:27.979 16641 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 88143cda-5fe7-45b6-9245-b1e8b75d28d8.
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/dhcp/agent.py, line 112, in call_driver
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 201, in enable
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, 

[Yahoo-eng-team] [Bug 1441382] Re: IPv6 SLAAC subnet Tempest tests fail due to IntegrityError

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441382

Title:
  IPv6 SLAAC subnet Tempest tests fail due to IntegrityError

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  If 2 Tempest tests are run concurrently as follows:
  Test 1: Any test that uses DHCP and cleans up the DHCP port as part of 
cleanup. For example:
  
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcp_stateful_router
  Test 2: Any test that creates an IPv6 SLAAC subnet
  
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_64_subnets
  and this patch has been applied to eliminate DB access deadlocks:
  https://review.openstack.org/#/c/170690/
  Then occasionally Test 2 will fail in subnet create, and the following error 
can be observed
  in the neutron server log (q-svc.log):

  TRACE neutron.api.v2.resource DBReferenceError: (IntegrityError)
  (1452, 'Cannot add or update a child row: a foreign key constraint
  fails (`neutron`.`ipallocations`, CONSTRAINT `ipallocations_ibfk_2`
  FOREIGN KEY (`port_id`) REFERENCES `ports` (`id`) ON DELETE CASCADE)')
  'INSERT INTO ipallocations (port_id, ip_address, subnet_id,
  network_id) VALUES (%s, %s, %s, %s)' ('dc359c7e-59b1-46d2-966f-
  194bc7fa0ffb', '2003::f816:3eff:fea6:41d3', 'dc9f1ac8-92c1-4f0e-
  8b86-524824861fa3', '45e4e8ea-81ed-46ad-a753-cdca392adb2a')

  Here is a full traceback:

  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/base.py, line 461, in create
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 794, in create_subnet
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource result, 
mech_context = self._create_subnet_db(context, subnet)
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 785, in 
_create_subnet_db
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource result = 
super(Ml2Plugin, self).create_subnet(context, subnet)
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1349, in 
create_subnet
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource raise 
n_exc.BadRequest(resource='subnets', msg=msg)
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1285, in 
_create_subnet_from_implicit_pool
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource # If this subnet 
supports auto-addressing, then update any
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1379, in 
_add_auto_addrs_on_network_ports
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource except 
db_exc.DBReferenceError:
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 470, 
in __exit__
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource self.rollback()
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 467, 
in __exit__
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource self.commit()
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 377, 
in commit
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource self._prepare_impl()
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 357, 
in _prepare_impl
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource self.session.flush()
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1919, 
in flush
  2015-04-07 16:39:16.026 TRACE neutron.api.v2.resource self._flush(objects)
  2015-04-07 16:39:16.026 TRACE 

[Yahoo-eng-team] [Bug 1442494] Re: test_add_list_remove_router_on_l3_agent race-y for dvr

2015-04-23 Thread Thierry Carrez
** Tags removed: kilo-rc-potential

** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442494

Title:
  test_add_list_remove_router_on_l3_agent race-y for dvr

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Logstash:

  message:in test_add_list_remove_router_on_l3_agent AND build_name
  :check-tempest-dsvm-neutron-dvr

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hZGRfbGlzdF9yZW1vdmVfcm91dGVyX29uX2wzX2FnZW50XCIgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay10ZW1wZXN0LWRzdm0tbmV1dHJvbi1kdnJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyODY0OTgxNDY3MSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Change [1], enabled by [2], exposed an intermittent failure when
  determining whether an agent is eligible for binding or not.

  [1] https://review.openstack.org/#/c/154289/
  [2] https://review.openstack.org/#/c/165246/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446784] Re: VLAN Transparency and MTU advertisement coexistence problem

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

** Tags removed: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446784

Title:
  VLAN Transparency and MTU advertisement coexistence problem

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Recently added features VLAN Transparency and MTU advertisement can't
  be stacked together.   When attempting to stack them together the
  following traceback shows in the log.

  
  Test Bed:
   - Kilo based
   - LinuxBridge with VxLAN
   - Multi node - node1: controller/compute, node2: compute, node3: compute.

  I can stack and use each of these features individually just not
  together.

  
  Controller local.conf details 

  [[post-config|/etc/neutron/neutron.conf]]
  [DEFAULT]
  advertise_mtu = True
  network_device_mtu = 9000
  vlan_transparent = True 

  [[post-config|/etc/nova/nova.conf]]
  [DEFAULT]
  network_device_mtu = 9000

  [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
  [ml2]
  segment_mtu = 1000
  path_mtu = 1200

  [vxlan]
  enable_vxlan=true
  local_ip=210.168.1.156

  [linux_bridge]
  tenant_network_type=vxlan
  tunnel_type=vxlan

  2015-04-14 16:24:18.642 INFO neutron.plugins.ml2.db 
[req-868cf7c5-464f-4c88-b87c-538f118896cc admin 
41efb914da9e472fb2bf22d52ea7aa6b] Added segment 
df41aba2-eaf6-451f-85db-5d0f33439709 of type vxlan for network 
113dfef5-7074-4049-ac69-5bb5e233564f
  2015-04-14 16:24:18.662 ERROR oslo_db.sqlalchemy.exc_filters 
[req-868cf7c5-464f-4c88-b87c-538f118896cc admin 
41efb914da9e472fb2bf22d52ea7aa6b] DB exception wrapped.
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 889, 
in _execute_context
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters context = 
constructor(dialect, self, conn, *args)
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
573, in _init_compiled
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters 
param.append(processors[key](compiled_params[key]))
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/processors.py, line 56, in 
boolean_to_int
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters return 
int(value)
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters TypeError: int() 
argument must be a string or a number, not 'object'
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters 
  2015-04-14 16:24:18.665 ERROR neutron.api.v2.resource 
[req-868cf7c5-464f-4c88-b87c-538f118896cc admin 
41efb914da9e472fb2bf22d52ea7aa6b] create failed
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/base.py, line 461, in create
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 613, in create_network
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource network)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 131, in wrapper
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 609, in 
_create_network_with_retries
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource return 
self._create_network_db(context, network)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 601, in 
_create_network_db
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource result['id'], 
network)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 973, in 
update_network
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource subnets = 
self._get_subnets_by_network(context, id)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 135, in 
_get_subnets_by_network
  2015-04-14 16:24:18.665 TRACE 

[Yahoo-eng-team] [Bug 1446642] Re: Updated Protocol named constants

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446642

Title:
  Updated Protocol named constants

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  The L3 protocol name constants should be removed from
  neutron/plugins/common/constants.py because, for instance, a constant
  for tcp already exists in neutron/common/constants.py - PROTO_NAME_TCP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442615] Re: don't ship ml2_conf_odl.ini

2015-04-23 Thread Thierry Carrez
** Tags removed: kilo-rc-potential

** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442615

Title:
  don't ship ml2_conf_odl.ini

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Neutron ships ml2_conf_odl.ini while it's also packaged into
  decomposed networking-odl [1].

  [1]: https://git.openstack.org/cgit/stackforge/networking-
  odl/tree/etc/neutron/plugins/ml2/ml2_conf_odl.ini

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446657] Re: Add Kilo release milestone

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446657

Title:
  Add Kilo release milestone

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  We do this for each release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427474] Re: IPv6 SLAAC subnet create should update ports on net

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427474

Title:
  IPv6 SLAAC subnet create should update ports on net

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  If ports are first created on a network, and then an IPv6 SLAAC
  or DHCPv6-stateless subnet is created on that network, then the
  ports created prior to the subnet create are not getting
  automatically updated (associated) with addresses for the
  SLAAC/DHCPv6-stateless subnet, as required.

  Note that this problem was discussed in the Neutron
  multiple-ipv6-prefixes blueprint, but is being addressed
  with a separate  Neutron bug since this is a bug that can
  potentially be backported to Juno.
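
  A minimal sketch of the expected behaviour (the helper name and calls are
  illustrative, not the actual Neutron change):

  def _add_auto_addrs_on_network_ports(plugin, context, subnet):
      # Hypothetical helper: when a SLAAC/DHCPv6-stateless subnet is
      # created, walk the ports already attached to the network and give
      # each of them a fixed_ip on the new subnet so an address from the
      # new prefix is derived for it.
      filters = {'network_id': [subnet['network_id']]}
      for port in plugin.get_ports(context, filters=filters):
          fixed_ips = port['fixed_ips'] + [{'subnet_id': subnet['id']}]
          plugin.update_port(context, port['id'],
                             {'port': {'fixed_ips': fixed_ips}})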

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427474/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416933] Re: Race condition in Ha router updating port status

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416933

Title:
  Race condition in Ha router updating port status

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  When the L2 agent calls 'get_devices_details_list', the ports handled by
  that agent are first updated to BUILD, and 'update_device_up' then updates
  them to ACTIVE; but for an HA router that is scheduled to two L3 agents
  there is a race condition.
  Steps to reproduce (not always, but often):
  1.  'router-interface-add' adds a subnet to the HA router
  2.  'router-gateway-set' sets the router gateway
  The gateway port status sometimes stays BUILD forever.

  In 'get_device_details' the port status is updated, but if a port's status
  is already ACTIVE and port['admin_state_up'] is True, the port should not
  be updated:

  def get_device_details(self, rpc_context, **kwargs):
  ..
  ..
  new_status = (q_const.PORT_STATUS_BUILD if port['admin_state_up']
else q_const.PORT_STATUS_DOWN)
  if port['status'] != new_status:
  plugin.update_port_status(rpc_context,
port_id,
new_status,
host)
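
  A minimal sketch of the suggested guard (illustrative only, not the
  committed fix):

  PORT_STATUS_ACTIVE = 'ACTIVE'
  PORT_STATUS_BUILD = 'BUILD'
  PORT_STATUS_DOWN = 'DOWN'

  def _maybe_update_port_status(plugin, rpc_context, port, port_id, host):
      # Leave a port that is already ACTIVE and administratively up alone,
      # so the standby L3 agent of an HA router cannot push the gateway
      # port back to BUILD.
      if port['status'] == PORT_STATUS_ACTIVE and port['admin_state_up']:
          return
      new_status = (PORT_STATUS_BUILD if port['admin_state_up']
                    else PORT_STATUS_DOWN)
      if port['status'] != new_status:
          plugin.update_port_status(rpc_context, port_id, new_status, host)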

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447781] [NEW] Exponential ajax calls when refreshing multiple rows in horizon tables.

2015-04-23 Thread nubeliu_dev
Public bug reported:

Exponential ajax calls when refreshing multiple rows in horizon tables.
When you launch multiple instances, each status refresh makes an ajax
call for every status_unknown instance; when the number is big (50) it
kills the client CPU.

horizon/static/horizon/js/horizon.tables.js

Original:

horizon.datatables = {
  update: function () {
var $rows_to_update = $('tr.status_unknown.ajax-update');
if ($rows_to_update.length) {
  var interval = $rows_to_update.attr('data-update-interval'),
$table = $rows_to_update.closest('table'),
decay_constant = $table.attr('decay_constant');

  // Do not update this row if the action column is expanded
  if ($rows_to_update.find('.actions_column .btn-group.open').length) {
// Wait and try to update again in next interval instead
setTimeout(horizon.datatables.update, interval);
// Remove interval decay, since this will not hit server
$table.removeAttr('decay_constant');
return;
  }
  // Trigger the update handlers.
  $rows_to_update.each(function(index, row) {
var $row = $(this),
  $table = $row.closest('table.datatable');
horizon.ajax.queue({
  url: $row.attr('data-update-url'),
  error: function (jqXHR, textStatus, errorThrown) {
switch (jqXHR.status) {
  // A 404 indicates the object is gone, and should be removed from 
the table
  case 404:
// Update the footer count and reset to default empty row if 
needed
var $footer, row_count, footer_text, colspan, template, params, 
$empty_row;

// existing count minus one for the row we're removing
row_count = horizon.datatables.update_footer_count($table, -1);

if(row_count === 0) {
  colspan = $table.find('th[colspan]').attr('colspan');
  template = 
horizon.templates.compiled_templates[#empty_row_template];
  params = {
  colspan: colspan,
  no_items_label: gettext(No items to display.)
  };
  empty_row = template.render(params);
  $row.replaceWith(empty_row);
} else {
  $row.remove();
}
// Reset tablesorter's data cache.
$table.trigger(update);
// Enable launch action if quota is not exceeded
horizon.datatables.update_actions();
break;
  default:
horizon.utils.log(gettext(An error occurred while updating.));
$row.removeClass(ajax-update);
$row.find(i.ajax-updating).remove();
break;
}
  },
  success: function (data, textStatus, jqXHR) {
var $new_row = $(data);

if ($new_row.hasClass('status_unknown')) {
  var spinner_elm = $new_row.find(td.status_unknown:last);
  var imagePath = $new_row.find('.btn-action-required').length  0 ?
dashboard/img/action_required.png:
dashboard/img/loading.gif;
  imagePath = STATIC_URL + imagePath;
  spinner_elm.prepend(
$(div)
  .addClass(loading_gif)
  .append($(img).attr(src, imagePath)));
}

// Only replace row if the html content has changed
if($new_row.html() !== $row.html()) {
  if($row.find('.table-row-multi-select:checkbox').is(':checked')) {
// Preserve the checkbox if it's already clicked

$new_row.find('.table-row-multi-select:checkbox').prop('checked', true);
  }
  $row.replaceWith($new_row);
  // Reset tablesorter's data cache.
  $table.trigger(update);
  // Reset decay constant.
  $table.removeAttr('decay_constant');
  // Check that quicksearch is enabled for this table
  // Reset quicksearch's data cache.
  if ($table.attr('id') in horizon.datatables.qs) {
horizon.datatables.qs[$table.attr('id')].cache();
  }
}
  },
  complete: function (jqXHR, textStatus) {
// Revalidate the button check for the updated table
horizon.datatables.validate_button();

// Set interval decay to this table, and increase if it already 
exist
if(decay_constant === undefined) {
  decay_constant = 1;
} else {
  decay_constant++;
}
$table.attr('decay_constant', decay_constant);
// Poll until there are no rows in an unknown state on the page.
next_poll = interval * decay_constant;
// Limit the interval to 30 secs
if(next_poll  30 * 1000) { 

[Yahoo-eng-team] [Bug 1444397] Re: single allowed address pair rule can exhaust entire ipset space

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

** Tags removed: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444397

Title:
  single allowed address pair rule can exhaust entire ipset space

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  The hash type used by the ipsets is 'ip' which explodes a CIDR into
  every member address (i.e. 10.100.0.0/16 becomes 65k entries). The
  allowed address pairs extension allows CIDRs so a single allowed
  address pair set can exhaust the entire IPset and break the security
  group rules for a tenant.
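
  A quick way to see the blow-up (a check using Python 3's ipaddress
  module, not Neutron code):

  import ipaddress

  # hash:ip stores every member address individually, so one allowed
  # address pair CIDR can consume the whole set; a hash:net set would
  # store the same CIDR as a single entry.
  cidr = ipaddress.ip_network(u'10.100.0.0/16')
  print(cidr.num_addresses)  # 65536 entries for a single /16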

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444201] Re: change in ipset elements breaks agent

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

** Tags removed: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444201

Title:
  change in ipset elements breaks agent

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  If an ipset element count changes (e.g. due to
  https://review.openstack.org/#/c/170328/), the previous elements will
  cause the create command to fail even though the -exist flag is being
  passed. This prevents the agent from setting up the ipsets correctly.

  Example exception:
  015-03-30 15:31:40.887 DEBUG oslo_concurrency.lockutils 
[req-13ed77d8-6224-44ba-a2a9-becaa991663c None None] Releasing file lock 
/opt/stack/data/neutron/lock/neutron-
  ipset after holding it for 0.005s release 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:227
  2015-03-30 15:31:40.888 DEBUG oslo_concurrency.lockutils 
[req-13ed77d8-6224-44ba-a2a9-becaa991663c None None] Lock ipset released by 
set_members :: held 0.005s inne
  r /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:456
  2015-03-30 15:31:40.888 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-13ed77d8-6224-44ba-a2a9-becaa991663c None None] Error while processing VIF 
ports
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.p
  y, line 1586, in rpc_loop
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.p
  y, line 1350, in process_network_ports
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/agent/securitygroups_rpc.py, line 360, in se
  tup_port_filters
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/agent/securitygroups_rpc.py, line 219, in de
  corated_function
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent *args, **kwargs)
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/agent/securitygroups_rpc.py, line 244, in pr
  epare_devices_filter
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent security_groups, 
security_group_member_ips)
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.next()
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/agent/firewall.py, line 106, in defer_apply
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.filter_defer_apply_off()
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py, line 659,
  in filter_defer_apply_off
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.unfiltered_ports)
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent
File /opt/stack/neutron/neutron/agent/linux/iptables_firewall.py, line 
155, in _setup_chains_apply
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self._setup_chain(port, 
INGRESS_DIRECTION)
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py, line 182, in 
_setup_chain
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self._add_rules_by_security_group(port, DIRECTION)
  2015-03-30 15:31:40.888 32755 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py, line 411, in 
_add_rules_by_security_group
  2015-03-30 15:31:40.888 32755 TRACE 

[Yahoo-eng-team] [Bug 1440183] Re: DBDeadlock on subnet allocation

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

** Tags removed: kilo-backport-potential kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440183

Title:
  DBDeadlock on subnet allocation

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  It looks like this is starting to hit:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiREJEZWFkbG9jazogKE9wZXJhdGlvbmFsRXJyb3IpICgxMjA1LCAnTG9jayB3YWl0IHRpbWVvdXQgZXhjZWVkZWQ7IHRyeSByZXN0YXJ0aW5nIHRyYW5zYWN0aW9uJylcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyODA5MDM4MzgzMSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280033] Re: Remove dependent module py3kcompat

2015-04-23 Thread Sergey Reshetnyak
Not affected on sahara now

** Changed in: python-saharaclient
   Status: Fix Committed = Fix Released

** Changed in: sahara
   Importance: Low = Undecided

** Changed in: sahara
   Status: In Progress = Invalid

** Changed in: sahara
 Assignee: Lee Li (lilinguo) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280033

Title:
  Remove dependent module py3kcompat

Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for Ceilometer:
  Fix Committed
Status in Python client library for Cinder:
  Fix Committed
Status in Python client library for Glance:
  In Progress
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Nova:
  Fix Released
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in Trove client binding:
  Fix Released
Status in OpenStack Data Processing (Sahara):
  Invalid
Status in OpenStack contribution dashboard:
  Fix Released

Bug description:
  Everything in the py3kcompat module is available in six  1.4.0, so we
  don't need this module now. It was removed from oslo-incubator recently,
  see https://review.openstack.org/#/c/71591/. This means we no longer
  need to maintain this module; use six directly.
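
  For example, an import that previously went through py3kcompat can use
  six.moves directly (illustrative only; the old path varies per project):

  # Before: from <project>.openstack.common.py3kcompat import urlutils
  from six.moves.urllib import parse as urlparse

  url = urlparse.urlparse('http://example.com/path')
  print(url.netloc)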

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1280033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436877] Re: metadef JSON files need updating for the Kilo release

2015-04-23 Thread Thierry Carrez
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
   Importance: Undecided = Medium

** Changed in: glance/kilo
Milestone: None = kilo-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1436877

Title:
  metadef JSON files need updating for the Kilo release

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance kilo series:
  New

Bug description:
  Since Juno, some new hardware properties have been added to Nova that
  should be included in the Kilo versions of the metadef JSON files.
  This bug would add new properties or objects found in Nova and
  update existing properties or objects if the definitions have changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1436877/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439061] Re: Plugin types are not exposed to the client

2015-04-23 Thread Thierry Carrez
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
Milestone: None = kilo-rc2

** Changed in: glance/kilo
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1439061

Title:
  Plugin types are not exposed to the client

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance kilo series:
  New

Bug description:
  Glance Search clients currently don't have any way to know what types
  are available. Search clients need to know the type to include in the
  search request for the particular resource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1439061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447380] Re: wrong cinder.conf.sample generation: missing directives for keystone_authtoken (at least)

2015-04-23 Thread Matt Riedemann
nova has the same issue according to zigo.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447380

Title:
  wrong cinder.conf.sample generation: missing directives for
  keystone_authtoken (at least)

Status in Cinder:
  In Progress
Status in Cinder kilo series:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi,

  When building the Debian Kilo RC1 package of Cinder, I'm generating
  the cinder.conf.sample file using (the same way tox would do):

  tools/config/generate_sample.sh -b . -p cinder -o etc/cinder

  Unfortunately, this resulted in a broken cinder.conf.sample, with at
  least the keystone_authtoken directives missing. It stops at
  #hash_algorithms = md5 and everything after it is missing.
  auth_host, auth_port, auth_protocol, identity_uri, admin_token,
  admin_user, admin_password and admin_tenant_name are all missing
  from the configuration file. patrickeast on IRC gave me a
  file (which I supposed was generated using devstack) and latest trunk,
  and it seems there's the exact same issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1447380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447818] [NEW] Improve bytes, gb, and mb angular filters

2015-04-23 Thread Matthew D. Wood
Public bug reported:

In the horizon/static/horizon/js/angular/filters/filters.js file,
there's 3 filters that deal with disk-sizings:  bytes, gb, and mb.

The bytes filter should likely accept a target unit input parameter
and lock all output to that unit instead of auto-determining the unit.

We should also probably complete the whole set of kb, mb, gb, tb, pb.

Also, the existing gb and mb filters are pretty naive and simplistic.
They should likely do more than just check for numbers and append MB
or GB to the value.  Candidate improvements include:

A) Checking for a string that is already of the correct format
B) ???

** Affects: horizon
 Importance: Undecided
 Assignee: Matthew D. Wood (woodm1979)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Matthew D. Wood (woodm1979)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447818

Title:
  Improve bytes, gb, and mb angular filters

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the horizon/static/horizon/js/angular/filters/filters.js file,
  there's 3 filters that deal with disk-sizings:  bytes, gb, and mb.

  The bytes filter should likely accept a target unit input parameter
  and lock all output to that unit instead of auto-determining the unit.

  We should also probably complete the whole set of kb, mb, gb, tb, pb.

  Also, the existing gb and mb filters are pretty naive and simplistic.
  They should likely do more than just check for numbers and append MB
  or GB to the value.  Candidate improvements include:

  A) Checking for a string that is already of the correct format
  B) ???

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1447818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284343] Re: RouterNotFound not handled correctly

2015-04-23 Thread Matthew D. Wood
The review has been abandoned for quite a long time.  The quick-fix
wasn't necessary.

** Changed in: horizon
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1284343

Title:
  RouterNotFound not handled correctly

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When a call to the neutronclient attempts to find a router that
  doesn't exist, the neutronclient returns a very generic
  NeutronClientException.  This, in turn, isn't handled correctly within
  horizon.exceptions.handle; it's treated as a RECOVERABLE error instead
  of a NOT_FOUND error.  Clearly, this is a bad thing, and redirects
  don't happen correctly and all manner of other bad things can happen.

  I've opened up a bug against the neutronclient here:
  https://bugs.launchpad.net/neutron/+bug/1284317

  
  There's 2 solutions to this problem:

  A)  Fix the neutronclient code to actually have a RouterNotFoundClient
  exception (just like the NetworkNotFoundClient and PortNotFoundClient
  exceptions) and then add that to the appropriate bucket:
  openstack_dashboard.exceptions.NOT_FOUND .  Clearly the downsides of
  this approach are that we have to wait for the neutron bug to be
  fixed, and then our code would then depend upon that fix.

  B)  Temp-fix things in the mean time.
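
  A sketch of option B (hypothetical helper; attribute names assume the
  python-neutronclient exception carries an HTTP status code):

  from neutronclient.common import exceptions as neutron_exc

  def is_not_found(exc):
      # Treat a generic NeutronClientException whose status code is 404
      # as NOT_FOUND until the client grows a RouterNotFoundClient class.
      return (isinstance(exc, neutron_exc.NeutronClientException)
              and getattr(exc, 'status_code', None) == 404)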

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1284343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314801] Re: number format is not localized

2015-04-23 Thread Thierry Carrez
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None = kilo-rc2

** Changed in: horizon/kilo
   Status: New = In Progress

** Changed in: horizon/kilo
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1314801

Title:
  number format is not localized

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) kilo series:
  In Progress

Bug description:
  The use of commas, decimal points and spaces in numbers is locale
  sensitive.  For example the number 1,000 might be read as 1000 in the
  US, but as 1 in FR.

  http://en.wikipedia.org/wiki/Decimal_mark#Examples_of_use

  Specifically the Create Volume page does not honor the local format.
  I suspect there may be other places that are wrong as well.
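
  Django already exposes locale-aware formatting that the affected pages
  could use; a minimal sketch (standalone, so settings are configured
  inline; not the actual Horizon change):

  import django
  from django.conf import settings

  settings.configure(USE_I18N=True, USE_L10N=True)
  django.setup()

  from django.utils import formats, translation

  with translation.override('fr'):
      # Renders 1000 with the French grouping convention instead of 1,000.
      print(formats.number_format(1000, use_l10n=True, force_grouping=True))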

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1314801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438040] Re: fdb entries can't be removed when a VM is migrated

2015-04-23 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

** Tags removed: kilo-backport-potential kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438040

Title:
  fdb entries can't be removed when a VM is migrated

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  This problem can be reproduced as below:
  1. vm A on computeA, vm B on computeB, l2 pop enabled;
  2. vmB keeps pinging vmA
  3. live migrate vmA to computeB
  4. when the live migration finishes, vmB's ping to vmA fails

  The reason is below: in the l2pop driver, when vmA migrates to computeB,
  the port status changes from BUILD to ACTIVE; the driver adds the port to
  self.migrated_ports when the port status is ACTIVE, but 'remove_fdb_entries'
  happens when the port status is BUILD:
  def update_port_postcommit(self, context):
  ...
  ...
  elif (context.host != context.original_host
  and context.status == const.PORT_STATUS_ACTIVE
  and not self.migrated_ports.get(orig['id'])):
  # The port has been migrated. We have to store the original
  # binding to send appropriate fdb once the port will be set
  # on the destination host
  self.migrated_ports[orig['id']] = (
  (orig, context.original_host))
  elif context.status != context.original_status:
  if context.status == const.PORT_STATUS_ACTIVE:
  self._update_port_up(context)
  elif context.status == const.PORT_STATUS_DOWN:
  fdb_entries = self._update_port_down(
  context, port, context.host)
  self.L2populationAgentNotify.remove_fdb_entries(
  self.rpc_ctx, fdb_entries)
  elif context.status == const.PORT_STATUS_BUILD:
  orig = self.migrated_ports.pop(port['id'], None)
  if orig:
  original_port = orig[0]
  original_host = orig[1]
  # this port has been migrated: remove its entries from fdb
  fdb_entries = self._update_port_down(
  context, original_port, original_host)
  self.L2populationAgentNotify.remove_fdb_entries(
  self.rpc_ctx, fdb_entries)
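
  A rough sketch of the direction of a fix (illustrative, not the committed
  patch): flush the old host's fdb entries as soon as the migrated port goes
  ACTIVE on the destination host, instead of waiting for a BUILD transition
  that may never be observed.

  def _handle_migrated_port_up(driver, context, port):
      # 'driver' is the l2pop mechanism driver; attribute names follow the
      # snippet above.
      orig = driver.migrated_ports.pop(port['id'], None)
      if orig:
          original_port, original_host = orig
          fdb_entries = driver._update_port_down(
              context, original_port, original_host)
          driver.L2populationAgentNotify.remove_fdb_entries(
              driver.rpc_ctx, fdb_entries)
      driver._update_port_up(context)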

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-04-23 Thread Thierry Carrez
** Changed in: cinder/kilo
   Status: Fix Committed = In Progress

** Changed in: keystone/kilo
   Status: In Progress = Fix Committed

** Changed in: oslo-incubator
   Status: Fix Released = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Committed
Status in Cinder kilo series:
  In Progress
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Committed
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo - master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is that pkill -g is no longer
  reliably killing off the services:

  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo, it's unclear at this point which is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447472] [NEW] versionutils.deprecated is used to mark callables as deprecated

2015-04-23 Thread Dave Chen
Public bug reported:

https://github.com/openstack/keystone/blob/master/keystone/token/providers/common.py#L420
~ #L423 intends to use versionutils.deprecated to deprecate the passing
of `extras`, but this is not a valid case where versionutils.deprecated
can be applied.

see:
https://github.com/openstack/keystone/blob/master/keystone/openstack/common/versionutils.py#L165
for where `versionutils.deprecated` could be used.
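
A minimal illustration of the distinction (decorator arguments follow
oslo-incubator's versionutils; the release constant and log handling are
assumptions, not the actual keystone patch):

import logging

from keystone.openstack.common import versionutils

LOG = logging.getLogger(__name__)


@versionutils.deprecated(as_of=versionutils.deprecated.KILO,
                         in_favor_of='building token data without extras')
def old_helper():
    # Valid use: the decorator marks a callable as deprecated.
    pass


def get_token_data(extras=None):
    # Deprecating an *argument* needs an explicit warning instead.
    if extras:
        LOG.warning('Passing `extras` is deprecated and will be removed.')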

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1447472

Title:
  versionutils.deprecated is used to mark callables as deprecated

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  
https://github.com/openstack/keystone/blob/master/keystone/token/providers/common.py#L420
  ~ #L423 intends to use versionutils.deprecated to deprecate the
  passing of `extras`, but this is not a valid case where
  versionutils.deprecated can be applied.

  see:
  
https://github.com/openstack/keystone/blob/master/keystone/openstack/common/versionutils.py#L165
  for where `versionutils.deprecated` could be used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1447472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378558] Re: Plugin panel not listed in configured panel group

2015-04-23 Thread Lin Hua Cheng
** Changed in: horizon
 Assignee: Lin Hua Cheng (lin-hua-cheng) = Janet Yu (jwy)

** Changed in: horizon
Milestone: None = liberty-1

** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None = kilo-rc2

** Tags removed: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378558

Title:
  Plugin panel not listed in configured panel group

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  When adding panel Foo to the Admin dashboard's System panel group via
  the openstack_dashboard/local/enabled/ directory, with something like:

  PANEL = 'foo'
  PANEL_DASHBOARD = 'admin'
  PANEL_GROUP = 'admin'
  ADD_PANEL = 'openstack_dashboard.dashboards.admin.foo.panel.Foo'

  Foo appears under the panel group Other instead of System. This is the
  error in the Apache log:

  Could not process panel foo: 'tuple' object has no attribute 'append'
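
  The failure suggests the panel group's panel list is a tuple; a sketch of
  the kind of guard that avoids the crash (illustrative, not the actual
  patch):

  def _add_panel_to_group(panel_group, panel_slug):
      # PanelGroup.panels may be a tuple when it comes straight from a
      # dashboard class definition, so convert before appending.
      panels = list(panel_group.panels)
      if panel_slug not in panels:
          panels.append(panel_slug)
      panel_group.panels = panels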

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447463] [NEW] glance.tests.functional.v2.test_images.TestImages.test_download_random_access failed

2015-04-23 Thread John Haan
Public bug reported:

The error message is below.

Traceback (most recent call last):
  File tools/colorizer.py, line 326, in module
if runner.run(test).wasSuccessful():
  File /usr/lib/python2.7/unittest/runner.py, line 158, in run
result.printErrors()
  File tools/colorizer.py, line 305, in printErrors
self.printErrorList('FAIL', self.failures)
  File tools/colorizer.py, line 315, in printErrorList
self.stream.writeln(%s % err)
  File /usr/lib/python2.7/unittest/runner.py, line 24, in writeln
self.write(arg)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 600-602: 
ordinal not in range(128)

There is get method from glance server.

response = requests.get(path, headers=headers)

The type of text in this response is unicode, which is
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x02\xff\x8b\x02\x00gW\xbcY\x01\x00\x00\x00'

ascii codec can't encode this unicode type.

This issue is also related to other unit tests like test_image_life_cycle.
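
A sketch of one possible fix in the test runner shim (assuming Python 2,
where the default stream codec is ascii; not necessarily the final patch):

def safe_text(err):
    # Encode unicode tracebacks before they reach an ascii-only stream.
    if isinstance(err, unicode):
        return err.encode('utf-8')
    return err

# e.g. in printErrorList: self.stream.writeln("%s" % safe_text(err))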

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447463

Title:
  glance.tests.functional.v2.test_images.TestImages.test_download_random_access
  failed

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The error message is below.

  Traceback (most recent call last):
File tools/colorizer.py, line 326, in module
  if runner.run(test).wasSuccessful():
File /usr/lib/python2.7/unittest/runner.py, line 158, in run
  result.printErrors()
File tools/colorizer.py, line 305, in printErrors
  self.printErrorList('FAIL', self.failures)
File tools/colorizer.py, line 315, in printErrorList
  self.stream.writeln(%s % err)
File /usr/lib/python2.7/unittest/runner.py, line 24, in writeln
  self.write(arg)
  UnicodeEncodeError: 'ascii' codec can't encode characters in position 
600-602: ordinal not in range(128)

  There is get method from glance server.

  response = requests.get(path, headers=headers)

  The type of text in this response is unicode, which is
  '\x1f\x8b\x08\x00\x00\x00\x00\x00\x02\xff\x8b\x02\x00gW\xbcY\x01\x00\x00\x00'

  ascii codec can't encode this unicode type.

  This issue is also related to other unit tests like test_image_life_cycle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447459] Re: stable/kilo fetches master translations

2015-04-23 Thread Łukasz Jernaś
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447459

Title:
  stable/kilo fetches master translations

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack I18n  L10n:
  New

Bug description:
  Stable kilo fetches master (or latest) translations instead of the
  *-kilo resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1447459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446784] Re: VLAN Transparency and MTU advertisement coexistence problem

2015-04-23 Thread Thierry Carrez
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None = kilo-rc2

** Changed in: neutron
Milestone: kilo-rc2 = None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446784

Title:
  VLAN Transparency and MTU advertisement coexistence problem

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  New

Bug description:
  Recently added features VLAN Transparency and MTU advertisement can't
  be stacked together.   When attempting to stack them together the
  following traceback shows in the log.

  
  Test Bed:
   - Kilo based
   - LinuxBridge with VxLAN
   - Multi node - node1: controller/compute, node2: compute, node3: compute.

  I can stack and use each of these features individually just not
  together.

  
  Controller local.conf details 

  [[post-config|/etc/neutron/neutron.conf]]
  [DEFAULT]
  advertise_mtu = True
  network_device_mtu = 9000
  vlan_transparent = True 

  [[post-config|/etc/nova/nova.conf]]
  [DEFAULT]
  network_device_mtu = 9000

  [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
  [ml2]
  segment_mtu = 1000
  path_mtu = 1200

  [vxlan]
  enable_vxlan=true
  local_ip=210.168.1.156

  [linux_bridge]
  tenant_network_type=vxlan
  tunnel_type=vxlan

  2015-04-14 16:24:18.642 INFO neutron.plugins.ml2.db 
[req-868cf7c5-464f-4c88-b87c-538f118896cc admin 
41efb914da9e472fb2bf22d52ea7aa6b] Added segment 
df41aba2-eaf6-451f-85db-5d0f33439709 of type vxlan for network 
113dfef5-7074-4049-ac69-5bb5e233564f
  2015-04-14 16:24:18.662 ERROR oslo_db.sqlalchemy.exc_filters 
[req-868cf7c5-464f-4c88-b87c-538f118896cc admin 
41efb914da9e472fb2bf22d52ea7aa6b] DB exception wrapped.
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 889, 
in _execute_context
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters context = 
constructor(dialect, self, conn, *args)
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
573, in _init_compiled
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters 
param.append(processors[key](compiled_params[key]))
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/processors.py, line 56, in 
boolean_to_int
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters return 
int(value)
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters TypeError: int() 
argument must be a string or a number, not 'object'
  2015-04-14 16:24:18.662 TRACE oslo_db.sqlalchemy.exc_filters 
  2015-04-14 16:24:18.665 ERROR neutron.api.v2.resource 
[req-868cf7c5-464f-4c88-b87c-538f118896cc admin 
41efb914da9e472fb2bf22d52ea7aa6b] create failed
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/base.py, line 461, in create
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 613, in create_network
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource network)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 131, in wrapper
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 609, in 
_create_network_with_retries
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource return 
self._create_network_db(context, network)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 601, in 
_create_network_db
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource result['id'], 
network)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 973, in 
update_network
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource subnets = 
self._get_subnets_by_network(context, id)
  2015-04-14 16:24:18.665 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 

[Yahoo-eng-team] [Bug 1445335] Re: create/delete flavor permissions should be controlled by policy.json

2015-04-23 Thread Thierry Carrez
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New = In Progress

** Changed in: nova/kilo
   Importance: Undecided = High

** Changed in: nova/kilo
Milestone: None = kilo-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445335

Title:
  create/delete flavor permissions should be controlled by policy.json

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  In Progress
Status in OpenStack Security Advisories:
  Invalid

Bug description:
  The create/delete flavor rest api always expects the user to be of
  admin privileges and ignores the rule defined in the nova/policy.json.
  This behavior is observed after these changes 
  https://review.openstack.org/#/c/150352/.

  The expected behavior is that the permissions are controlled as per
  the rule defined in the policy file and should not mandate that only
  an admin should be able to create/delete a flavor

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447489] [NEW] Nova libvirt-xen driver doesn't support neutron

2015-04-23 Thread Andrew Shvartz
Public bug reported:

Per http://www.gossamer-threads.com/lists/xen/devel/374061 the Xen Wiki
instructions are supposed to work only with legacy (nova) nova-networking.
See http://wiki.xen.org/wiki/OpenStack_via_DevStack.
In the meantime this seems like a serious disadvantage, limiting the driver
to working only with Ubuntu Juno and higher releases.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447489

Title:
  Nova libvirt-xen driver doesn't support neutron

Status in OpenStack Compute (Nova):
  New

Bug description:
  Per http://www.gossamer-threads.com/lists/xen/devel/374061 the Xen Wiki
  instructions are supposed to work only with legacy (nova) nova-networking.
  See http://wiki.xen.org/wiki/OpenStack_via_DevStack.
  In the meantime this seems like a serious disadvantage, limiting the driver
  to working only with Ubuntu Juno and higher releases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410854] Re: NoNetworkFoundInMaximumAllowedAttempts with multipe API workers

2015-04-23 Thread Attila Fazekas
*** This bug is a duplicate of bug 1382064 ***
https://bugs.launchpad.net/bugs/1382064

** This bug has been marked a duplicate of bug 1382064
   Failure to allocate tunnel id when creating networks concurrently

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1410854

Title:
  NoNetworkFoundInMaximumAllowedAttempts with multipe API workers

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When neutron is configured as below, a regular devstack job fails to
  create several networks in a regular tempest run.

  iniset /etc/neutron/neutron.conf DEFAULT api_workers 4

  http://logs.openstack.org/82/140482/2/check/check-tempest-dsvm-
  neutron-
  full/95aea86/logs/screen-q-svc.txt.gz?#_2015-01-14_13_56_07_268

  2015-01-14 13:56:07.267 2814 WARNING neutron.plugins.ml2.drivers.helpers 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] Allocate vxlan segment from 
pool failed after 10 failed attempts
  2015-01-14 13:56:07.268 2814 ERROR neutron.api.v2.resource 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] create failed
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 451, in create
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 502, in 
create_network
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource tenant_id)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/managers.py, line 161, in 
create_network_segments
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource segment = 
self.allocate_tenant_segment(session)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/managers.py, line 190, in 
allocate_tenant_segment
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource segment = 
driver.obj.allocate_tenant_segment(session)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/type_tunnel.py, line 150, 
in allocate_tenant_segment
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource alloc = 
self.allocate_partially_specified_segment(session)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/helpers.py, line 144, in 
allocate_partially_specified_segment
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource raise 
exc.NoNetworkFoundInMaximumAllowedAttempts()
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource 
NoNetworkFoundInMaximumAllowedAttempts: Unable to create the network. No 
available network found in maximum allowed attempts.
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource

  The segment with 'vxlan_vni': 1008L is successfully allocated on behalf of
  pid=2813, req-f3866173-7766-46fc-9dea-e5387be7190d.

  pid=2814,req-f6402b6d-de49-4675-a766-b45a6bc99061 tries to allocate
  the same VNI for 10 times without success.
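
  A toy model of that collision (illustrative only, not the neutron allocator):
  with a first-free selection every worker keeps recomputing the value the other
  worker just committed and burns all 10 attempts, whereas a randomised
  candidate (or retrying only on the real DB conflict) lets the workers diverge:

    import random

    MAX_ATTEMPTS = 10

    def allocate(free_vnis, db_allocated, pick):
        # one API worker; the rival worker is modelled by committing the lowest
        # free ID right after we compute our own candidate
        for _ in range(MAX_ATTEMPTS):
            available = sorted(set(free_vnis) - db_allocated)
            if not available:
                break
            candidate = pick(available)
            db_allocated.add(min(available))   # the other worker commits first
            if candidate in db_allocated:      # our UPDATE matched 0 rows, retry
                continue
            db_allocated.add(candidate)
            return candidate
        raise RuntimeError('NoNetworkFoundInMaximumAllowedAttempts')

    pool = list(range(1008, 1038))

    # first-free selection: every retry recomputes what the rival just took
    try:
        allocate(pool, set(), pick=lambda avail: avail[0])
    except RuntimeError as exc:
        print(exc)

    # random candidate: concurrent workers rarely collide
    print(allocate(pool, set(), pick=random.choice))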

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1410854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447490] [NEW] Deletion of instances will be stuck forever if any of deletion hung in 'multipath -r'

2015-04-23 Thread Peter Wang
Public bug reported:

I created about 25 VMs from bootable volumes; after finishing this,
I ran a script to delete all of them in a very short time.

What I saw was that all of the VMs stayed in 'deleting' status and would
never be deleted, even after waiting for hours.

from ps cmd:
stack@ubuntu-server13:/var/log/libvirt$ ps aux | grep multipath
root   8205  0.0  0.0 504988  5560 ?SLl  Apr22   0:01 
/sbin/multipathd
root 115515  0.0  0.0  64968  2144 pts/3S+   Apr22   0:00 sudo 
nova-rootwrap /etc/nova/rootwrap.conf multipath -r
root 115516  0.0  0.0  42240  9488 pts/3S+   Apr22   0:00 
/usr/bin/python /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf multipath 
-r
root 115525  0.0  0.0  41792  2592 pts/3S+   Apr22   0:00 
/sbin/multipath -r
stack151825  0.0  0.0  11744   936 pts/0S+   02:10   0:00 grep 
--color=auto multipath

Then I killed the 'multipath -r' commands.

All VMs then went into ERROR status.

After digging into the nova code: nova always tries to acquire a global file
lock:
    @utils.synchronized('connect_volume')
    def disconnect_volume(self, connection_info, disk_dev):
        """Detach the volume from instance_name."""
        iscsi_properties = connection_info['data']
        ...
        if self.use_multipath and multipath_device:
            return self._disconnect_volume_multipath_iscsi(iscsi_properties,
                                                           multipath_device)

and then rescans iSCSI and multipath via 'multipath -r':

    def _disconnect_volume_multipath_iscsi(self, iscsi_properties,
                                           multipath_device):
        self._rescan_iscsi()
        self._rescan_multipath()   # <-- runs self._run_multipath('-r', check_exit_code=[0, 1, 21])


In my case, 'multipath -r' hung for a very long time and did not exit for
several hours; in addition, this blocked all deletion of VM instances on the
same nova node.

IMO, Nova should not wait on the blocking command forever; at the least, a
timeout is needed for commands such as 'multipath -r' and 'multipath -ll'
(a hedged sketch of such a timeout follows below).

Or is there any other solution for my case?
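
A hedged sketch of that mitigation (illustrative only, not nova's volume driver
code; the function name and timeout value are made up, and the 'timeout'
argument needs Python 3.3+): run the reload with a bounded timeout so the
caller errors out instead of blocking forever:

    import subprocess

    def run_multipath_reload(timeout_s=60):
        try:
            return subprocess.check_output(['multipath', '-r'],
                                           stderr=subprocess.STDOUT,
                                           timeout=timeout_s)
        except subprocess.TimeoutExpired:
            # surface the failure so the volume disconnect can error out cleanly
            # instead of wedging every other delete on this compute node
            raise RuntimeError('multipath -r did not finish in %ss' % timeout_s)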


MY ENVIRONMENT:
Ubuntu Server 14:
multipath-tools
multipath enabled in Nova node

Thanks
Peter

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447490

Title:
  Deletion of instances will be stuck forever if any of deletion hung in
  'multipath -r'

Status in OpenStack Compute (Nova):
  New

Bug description:
  I created about 25 VMs from bootable volumes; after finishing this,
  I ran a script to delete all of them in a very short time.

  What I saw was that all of the VMs stayed in 'deleting' status and would
  never be deleted, even after waiting for hours.

  from ps cmd:
  stack@ubuntu-server13:/var/log/libvirt$ ps aux | grep multipath
  root   8205  0.0  0.0 504988  5560 ?SLl  Apr22   0:01 
/sbin/multipathd
  root 115515  0.0  0.0  64968  2144 pts/3S+   Apr22   0:00 sudo 
nova-rootwrap /etc/nova/rootwrap.conf multipath -r
  root 115516  0.0  0.0  42240  9488 pts/3S+   Apr22   0:00 
/usr/bin/python /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf multipath 
-r
  root 115525  0.0  0.0  41792  2592 pts/3S+   Apr22   0:00 
/sbin/multipath -r
  stack151825  0.0  0.0  11744   936 pts/0S+   02:10   0:00 grep 
--color=auto multipath

  Then I killed the 'multipath -r' commands.

  All VMs then went into ERROR status.

  After digging into the nova code: nova always tries to acquire a global file
  lock:
      @utils.synchronized('connect_volume')
      def disconnect_volume(self, connection_info, disk_dev):
          """Detach the volume from instance_name."""
          iscsi_properties = connection_info['data']
          ...
          if self.use_multipath and multipath_device:
              return self._disconnect_volume_multipath_iscsi(iscsi_properties,
                                                             multipath_device)

  and then rescans iSCSI and multipath via 'multipath -r':

      def _disconnect_volume_multipath_iscsi(self, iscsi_properties,
                                             multipath_device):
          self._rescan_iscsi()
          self._rescan_multipath()   # <-- runs self._run_multipath('-r', check_exit_code=[0, 1, 21])

  
  In my case, 'multipath -r' hung for a very long time and did not exit for
  several hours; in addition, this blocked all deletion of VM instances on the
  same nova node.

  IMO, Nova should not wait on the blocking command forever; at the least, a
  timeout is needed for commands such as 'multipath -r' and 'multipath -ll'.

  Or is there any other solution for my case?

  
  MY ENVIRONMENT:
  Ubuntu Server 14:
  multipath-tools
  multipath enabled in Nova node

  Thanks
  Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1447476] Re: No module named pathlib

2015-04-23 Thread Rob Cresswell
Addressed by https://review.openstack.org/#/c/176584/ (Kilo) and
https://review.openstack.org/#/c/176581/ (Master/ Liberty)

** Changed in: devstack
   Status: New = Invalid

** Changed in: horizon
   Status: New = Fix Released

** Changed in: horizon
Milestone: None = liberty-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447476

Title:
  No module named pathlib

Status in devstack - openstack dev environments:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Trying to install devstack (both L and K)

  Error during stack.sh
  2015-04-23 07:21:55.646 | + cd /opt/stack/horizon
  2015-04-23 07:21:55.647 | + ./run_tests.sh -N --compilemessages
  2015-04-23 07:21:55.927 | WARNING:root:No local_settings file found.
  2015-04-23 07:21:56.523 | Traceback (most recent call last):
  2015-04-23 07:21:56.523 |   File /opt/stack/horizon/manage.py, line 23, in 
module
  2015-04-23 07:21:56.523 | execute_from_command_line(sys.argv)
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 385, in execute_from_command_line
  2015-04-23 07:21:56.523 | utility.execute()
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 354, in execute
  2015-04-23 07:21:56.523 | django.setup()
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/__init__.py, line 21, in setup
  2015-04-23 07:21:56.523 | apps.populate(settings.INSTALLED_APPS)
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/apps/registry.py, line 85, in 
populate
  2015-04-23 07:21:56.523 | app_config = AppConfig.create(entry)
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/apps/config.py, line 87, in 
create
  2015-04-23 07:21:56.523 | module = import_module(entry)
  2015-04-23 07:21:56.523 |   File /usr/lib/python2.7/importlib/__init__.py, 
line 37, in import_module
  2015-04-23 07:21:56.524 | __import__(name)
  2015-04-23 07:21:56.524 |   File 
/usr/local/lib/python2.7/dist-packages/django_pyscss/__init__.py, line 1, in 
module
  2015-04-23 07:21:56.524 | from .compiler import DjangoScssCompiler  # NOQA
  2015-04-23 07:21:56.524 |   File 
/usr/local/lib/python2.7/dist-packages/django_pyscss/compiler.py, line 4, in 
module
  2015-04-23 07:21:56.524 | from pathlib import PurePath
  2015-04-23 07:21:56.524 | ImportError: No module named pathlib
  2015-04-23 07:21:56.548 | + exit_trap
  2015-04-23 07:21:56.548 | + local r=1
  2015-04-23 07:21:56.549 | ++ jobs -p
  2015-04-23 07:21:56.550 | + jobs=
  2015-04-23 07:21:56.550 | + [[ -n '' ]]
  2015-04-23 07:21:56.550 | + kill_spinner
  2015-04-23 07:21:56.550 | + '[' '!' -z '' ']'
  2015-04-23 07:21:56.550 | + [[ 1 -ne 0 ]]
  2015-04-23 07:21:56.550 | + echo 'Error on exit'
  2015-04-23 07:21:56.550 | Error on exit
  2015-04-23 07:21:56.550 | + [[ -z /opt/stack/logs ]]
  2015-04-23 07:21:56.550 | + /opt/devstack/tools/worlddump.py -d 
/opt/stack/logs
  2015-04-23 07:21:56.586 | + exit 1

  Manually pip install pathlib and then retried:
  + cd /opt/stack/horizon
  + ./run_tests.sh -N --compilemessages
  WARNING:root:No local_settings file found.
  Traceback (most recent call last):
File /opt/stack/horizon/manage.py, line 23, in module
  execute_from_command_line(sys.argv)
File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 385, in execute_from_command_line
  utility.execute()
File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 354, in execute
  django.setup()
File /usr/local/lib/python2.7/dist-packages/django/__init__.py, line 21, 
in setup
  apps.populate(settings.INSTALLED_APPS)
File /usr/local/lib/python2.7/dist-packages/django/apps/registry.py, line 
85, in populate
  app_config = AppConfig.create(entry)
File /usr/local/lib/python2.7/dist-packages/django/apps/config.py, line 
87, in create
  module = import_module(entry)
File /usr/lib/python2.7/importlib/__init__.py, line 37, in import_module
  __import__(name)
File /usr/local/lib/python2.7/dist-packages/django_pyscss/__init__.py, 
line 1, in module
  from .compiler import DjangoScssCompiler  # NOQA
File /usr/local/lib/python2.7/dist-packages/django_pyscss/compiler.py, 
line 11, in module
  from scss import Compiler, config
  ImportError: cannot import name Compiler
  + exit_trap
  + local r=1
  ++ jobs -p
  + jobs=
  + [[ -n '' ]]
  + kill_spinner
  + '[' '!' -z '' ']'
  + [[ 1 -ne 0 ]]
  + echo 'Error on exit'
  Error on exit
  + [[ -z /opt/stack/logs ]]
  + /opt/devstack/tools/worlddump.py -d /opt/stack/logs
  World dumping... see /opt/stack/logs/worlddump-2015-04-23-073207.txt for 
details
  + 

[Yahoo-eng-team] [Bug 1447528] [NEW] vnc configuration should be group 'vnc'

2015-04-23 Thread jichenjc
Public bug reported:

The vnc conf is currently in the following format:

vnc_opts = [
cfg.StrOpt('novncproxy_base_url',
   default='http://127.0.0.1:6080/vnc_auto.html',
   help='Location of VNC console proxy, in the form '
'"http://127.0.0.1:6080/vnc_auto.html"'),
cfg.StrOpt('xvpvncproxy_base_url',
   default='http://127.0.0.1:6081/console',
   help='Location of nova xvp VNC console proxy, in the form '
'"http://127.0.0.1:6081/console"'),
cfg.StrOpt('vncserver_listen',
   default='127.0.0.1',
   help='IP address on which instance vncservers should listen'),
cfg.StrOpt('vncserver_proxyclient_address',
   default='127.0.0.1',
   help='The address to which proxy clients '
'(like nova-xvpvncproxy) should connect'),
cfg.BoolOpt('vnc_enabled',
default=True,
help='Enable VNC related features'),
cfg.StrOpt('vnc_keymap',
   default='en-us',
   help='Keymap for VNC'),
]

CONF = cfg.CONF
CONF.register_opts(vnc_opts)

while the other console options belong to the 'rdp' or 'serial' groups; we
could put the vnc options in a 'vnc' group like the others (a sketch follows
below).
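
A minimal sketch, using the standard oslo.config group mechanism (illustrative
only, not the actual nova patch; the trimmed option names such as 'enabled' are
an assumption), of how the same options could be registered under a dedicated
'vnc' group so they are read from a [vnc] section instead of [DEFAULT]:

    from oslo_config import cfg

    vnc_group = cfg.OptGroup(name='vnc', title='VNC console options')

    vnc_opts = [
        cfg.StrOpt('novncproxy_base_url',
                   default='http://127.0.0.1:6080/vnc_auto.html',
                   help='Location of VNC console proxy'),
        cfg.BoolOpt('enabled',
                    default=True,
                    help='Enable VNC related features'),
    ]

    CONF = cfg.CONF
    CONF.register_group(vnc_group)
    CONF.register_opts(vnc_opts, group=vnc_group)

    # options are then read as CONF.vnc.novncproxy_base_url / CONF.vnc.enabled
    # instead of CONF.novncproxy_base_url / CONF.vnc_enabled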

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447528

Title:
  vnc configuration should be group 'vnc'

Status in OpenStack Compute (Nova):
  New

Bug description:
  The vnc conf is currently in the following format:

  vnc_opts = [
  cfg.StrOpt('novncproxy_base_url',
 default='http://127.0.0.1:6080/vnc_auto.html',
 help='Location of VNC console proxy, in the form '
  '"http://127.0.0.1:6080/vnc_auto.html"'),
  cfg.StrOpt('xvpvncproxy_base_url',
 default='http://127.0.0.1:6081/console',
 help='Location of nova xvp VNC console proxy, in the form '
  '"http://127.0.0.1:6081/console"'),
  cfg.StrOpt('vncserver_listen',
 default='127.0.0.1',
 help='IP address on which instance vncservers should listen'),
  cfg.StrOpt('vncserver_proxyclient_address',
 default='127.0.0.1',
 help='The address to which proxy clients '
  '(like nova-xvpvncproxy) should connect'),
  cfg.BoolOpt('vnc_enabled',
  default=True,
  help='Enable VNC related features'),
  cfg.StrOpt('vnc_keymap',
 default='en-us',
 help='Keymap for VNC'),
  ]

  CONF = cfg.CONF
  CONF.register_opts(vnc_opts)

  while the other console options belong to the 'rdp' or 'serial' groups; we
  could put the vnc options in a 'vnc' group like the others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447476] Re: No module named pathlib

2015-04-23 Thread Timur Sufiev
The same applies to running Horizon tests separately from the rest of
OpenStack.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Importance: Undecided = Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447476

Title:
  No module named pathlib

Status in devstack - openstack dev environments:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Trying to install devstack (both L and K)

  Error during stack.sh
  2015-04-23 07:21:55.646 | + cd /opt/stack/horizon
  2015-04-23 07:21:55.647 | + ./run_tests.sh -N --compilemessages
  2015-04-23 07:21:55.927 | WARNING:root:No local_settings file found.
  2015-04-23 07:21:56.523 | Traceback (most recent call last):
  2015-04-23 07:21:56.523 |   File /opt/stack/horizon/manage.py, line 23, in 
module
  2015-04-23 07:21:56.523 | execute_from_command_line(sys.argv)
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 385, in execute_from_command_line
  2015-04-23 07:21:56.523 | utility.execute()
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 354, in execute
  2015-04-23 07:21:56.523 | django.setup()
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/__init__.py, line 21, in setup
  2015-04-23 07:21:56.523 | apps.populate(settings.INSTALLED_APPS)
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/apps/registry.py, line 85, in 
populate
  2015-04-23 07:21:56.523 | app_config = AppConfig.create(entry)
  2015-04-23 07:21:56.523 |   File 
/usr/local/lib/python2.7/dist-packages/django/apps/config.py, line 87, in 
create
  2015-04-23 07:21:56.523 | module = import_module(entry)
  2015-04-23 07:21:56.523 |   File /usr/lib/python2.7/importlib/__init__.py, 
line 37, in import_module
  2015-04-23 07:21:56.524 | __import__(name)
  2015-04-23 07:21:56.524 |   File 
/usr/local/lib/python2.7/dist-packages/django_pyscss/__init__.py, line 1, in 
module
  2015-04-23 07:21:56.524 | from .compiler import DjangoScssCompiler  # NOQA
  2015-04-23 07:21:56.524 |   File 
/usr/local/lib/python2.7/dist-packages/django_pyscss/compiler.py, line 4, in 
module
  2015-04-23 07:21:56.524 | from pathlib import PurePath
  2015-04-23 07:21:56.524 | ImportError: No module named pathlib
  2015-04-23 07:21:56.548 | + exit_trap
  2015-04-23 07:21:56.548 | + local r=1
  2015-04-23 07:21:56.549 | ++ jobs -p
  2015-04-23 07:21:56.550 | + jobs=
  2015-04-23 07:21:56.550 | + [[ -n '' ]]
  2015-04-23 07:21:56.550 | + kill_spinner
  2015-04-23 07:21:56.550 | + '[' '!' -z '' ']'
  2015-04-23 07:21:56.550 | + [[ 1 -ne 0 ]]
  2015-04-23 07:21:56.550 | + echo 'Error on exit'
  2015-04-23 07:21:56.550 | Error on exit
  2015-04-23 07:21:56.550 | + [[ -z /opt/stack/logs ]]
  2015-04-23 07:21:56.550 | + /opt/devstack/tools/worlddump.py -d 
/opt/stack/logs
  2015-04-23 07:21:56.586 | + exit 1

  Manually pip install pathlib and then retried:
  + cd /opt/stack/horizon
  + ./run_tests.sh -N --compilemessages
  WARNING:root:No local_settings file found.
  Traceback (most recent call last):
File /opt/stack/horizon/manage.py, line 23, in module
  execute_from_command_line(sys.argv)
File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 385, in execute_from_command_line
  utility.execute()
File 
/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py, 
line 354, in execute
  django.setup()
File /usr/local/lib/python2.7/dist-packages/django/__init__.py, line 21, 
in setup
  apps.populate(settings.INSTALLED_APPS)
File /usr/local/lib/python2.7/dist-packages/django/apps/registry.py, line 
85, in populate
  app_config = AppConfig.create(entry)
File /usr/local/lib/python2.7/dist-packages/django/apps/config.py, line 
87, in create
  module = import_module(entry)
File /usr/lib/python2.7/importlib/__init__.py, line 37, in import_module
  __import__(name)
File /usr/local/lib/python2.7/dist-packages/django_pyscss/__init__.py, 
line 1, in module
  from .compiler import DjangoScssCompiler  # NOQA
File /usr/local/lib/python2.7/dist-packages/django_pyscss/compiler.py, 
line 11, in module
  from scss import Compiler, config
  ImportError: cannot import name Compiler
  + exit_trap
  + local r=1
  ++ jobs -p
  + jobs=
  + [[ -n '' ]]
  + kill_spinner
  + '[' '!' -z '' ']'
  + [[ 1 -ne 0 ]]
  + echo 'Error on exit'
  Error on exit
  + [[ -z /opt/stack/logs ]]
  + /opt/devstack/tools/worlddump.py -d /opt/stack/logs
  World dumping... see /opt/stack/logs/worlddump-2015-04-23-073207.txt for 
details
  + exit 1

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1304333] Re: Instance left stuck in transitional POWERING state

2015-04-23 Thread James Page
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
   Status: New = Fix Released

** Changed in: nova (Ubuntu Trusty)
   Importance: Undecided = Medium

** Changed in: nova (Ubuntu Trusty)
   Status: New = Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304333

Title:
  Instance left stuck in transitional POWERING state

Status in OpenStack Compute (Nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Triaged

Bug description:
  If a compute manager is stopped or fails during POWERING-ON or
  POWERING-OFF operations, then the instance will be left stuck in this
  transitional task_state.

  --- --- --- --- --- --- ---

  [Impact]

   * We are backporting this to Icehouse since nova currently fails to resolve
 instance state when service is restarted. It is not expected to impact
 normal operational behaviour in any way.

  [Test Case]

   * Deploy cloud incl. nova-compute and rabbitmq and create some
  instances.

   * Perform actions on those instances that cause state to change

   * Restart nova-compute and once restarted check that nova instances are in
 expected state.

  [Regression Potential]

   * None that I can see. This is hopefully a very low impact patch and I have
 tested in my local cloud environment with successful results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444421] Re: Launch instance fails with nova network

2015-04-23 Thread Thierry Carrez
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None = kilo-rc2

** Changed in: horizon
Milestone: kilo-rc2 = None

** Changed in: horizon/kilo
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/121

Title:
  Launch instance fails with nova network

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  Current Launch Instance (not angular launch instance)

  git checkout from kilo rc1:

  I have deployed as system with nova network instead of neutron.

  While trying to launch an instance, I'm getting:
  Internal Server Error: /project/instances/launch
  Traceback (most recent call last):
    File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
164, in get_response
  response = response.render()
    File /usr/lib/python2.7/site-packages/django/template/response.py, line 
158, in render
  self.content = self.rendered_content
    File /usr/lib/python2.7/site-packages/django/template/response.py, line 
135, in rendered_content
  content = template.render(context, self._request)
    File /usr/lib/python2.7/site-packages/django/template/backends/django.py, 
line 74, in render
  return self.template.render(context)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 209, 
in render
  return self._render(context)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 201, 
in _render
  return self.nodelist.render(context)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 903, 
in render
  bit = self.render_node(node, context)
    File /usr/lib/python2.7/site-packages/django/template/debug.py, line 79, 
in render_node
  return node.render(context)
    File /usr/lib/python2.7/site-packages/django/template/defaulttags.py, 
line 576, in render
  return self.nodelist.render(context)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 903, 
in render
  bit = self.render_node(node, context)
    File /usr/lib/python2.7/site-packages/django/template/debug.py, line 79, 
in render_node
  return node.render(context)
    File /usr/lib/python2.7/site-packages/django/template/loader_tags.py, 
line 56, in render
  result = self.nodelist.render(context)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 903, 
in render
  bit = self.render_node(node, context)
    File /usr/lib/python2.7/site-packages/django/template/debug.py, line 79, 
in render_node
  return node.render(context)
    File /usr/lib/python2.7/site-packages/django/template/defaulttags.py, 
line 217, in render
  nodelist.append(node.render(context))
    File /usr/lib/python2.7/site-packages/django/template/defaulttags.py, 
line 322, in render
  match = condition.eval(context)
    File /usr/lib/python2.7/site-packages/django/template/defaulttags.py, 
line 933, in eval
  return self.value.resolve(context, ignore_failures=True)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 647, 
in resolve
  obj = self.var.resolve(context)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 787, 
in resolve
  value = self._resolve_lookup(context)
    File /usr/lib/python2.7/site-packages/django/template/base.py, line 847, 
in _resolve_lookup
  current = current()
    File /home/mrunge/work/horizon/horizon/workflows/base.py, line 439, in 
has_required_fields
  return any(field.required for field in self.action.fields.values())
    File /home/mrunge/work/horizon/horizon/workflows/base.py, line 368, in 
action
  context)
    File 
/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py,
 line 707, in __init__
  super(SetNetworkAction, self).__init__(request, *args, **kwargs)
    File /home/mrunge/work/horizon/horizon/workflows/base.py, line 138, in 
__init__
  self._populate_choices(request, context)
    File /home/mrunge/work/horizon/horizon/workflows/base.py, line 151, in 
_populate_choices
  bound_field.choices = meth(request, context)
    File 
/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py,
 line 721, in populate_network_choices
  return instance_utils.network_field_data(request)
    File 
/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/utils.py,
 line 97, in network_field_data
  if not networks:
  UnboundLocalError: local variable 'networks' referenced before assignment

  Interestingly, this only occurs when using admin credentials; with a regular
  user, this doesn't happen.

  The error message shown in horizon is: Error: Invalid service catalog
  service: network
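
  A minimal, self-contained sketch of the failure mode (not the Horizon patch;
  names and values are illustrative): a variable bound only inside the neutron
  branch is referenced afterwards, and initialising it up front avoids the
  UnboundLocalError:

    def network_field_data(neutron_enabled):
        # Buggy shape (commented out): 'networks' is only bound inside the
        # neutron branch, so with nova-network the final check raises
        # UnboundLocalError.
        #
        #   if neutron_enabled:
        #       networks = fetch_networks()
        #   if not networks:      # UnboundLocalError when neutron is absent
        #       return []
        #
        # Defensive shape: initialise the name before any branch.
        networks = []
        if neutron_enabled:
            networks = ['net-1', 'net-2']   # stand-in for the neutron query
        if not networks:
            return []                       # safe: 'networks' is always bound
        return networks

    print(network_field_data(False))   # [] instead of UnboundLocalError
    print(network_field_data(True))    # ['net-1', 'net-2']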

To manage 

[Yahoo-eng-team] [Bug 1442656] Re: live migration fails, because incorrect arguments order passed to method

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442656

Title:
  live migration fails, because incorrect arguments order passed to
  method

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  steps to reproduce:
  1. boot instance
  2. call nova live-migration instance

  2015-04-10 09:03:42.621 ERROR root [req-73637bbe-9a23-4ce9-92c8-d3bbceaf8341 
admin admin] Original exception being dropped: ['Traceback (most recent call 
last):\n', '  File /opt/stack/nova/nova/compute/manager.py, line 340, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File /opt/stack/nova/nova/compute/manager.py, line 5202, in live_migration\n  
  expected_attrs=expected)\n', '  File 
/opt/stack/nova/nova/objects/instance.py, line 489, in _from_db_object\n
instance[field] = db_inst[field]\n', 'TypeError: string indices must be 
integers\n']
  2015-04-10 09:03:42.622 ERROR oslo_messaging.rpc.dispatcher 
[req-73637bbe-9a23-4ce9-92c8-d3bbceaf8341 admin admin] Exception during message 
handling: 'unicode' object has no attribute 'uuid'
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 6663, in live_migration
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher 
migrate_data=migrate_data)
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 88, in wrapped
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher payload)
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85, in 
__exit__
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 71, in wrapped
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 352, in decorated_function
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/utils.py, line 87, in add_instance_fault_from_exc
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher 
fault_obj.instance_uuid = instance.uuid
  2015-04-10 09:03:42.622 TRACE oslo_messaging.rpc.dispatcher AttributeError: 
'unicode' object has no attribute 'uuid'
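
  Illustrative only (not the nova code): how swapping positional arguments
  produces exactly this kind of error. If a helper expects (context, instance,
  host) but the caller passes (context, host, instance), 'instance' is bound to
  a plain string, so dict-style access raises "string indices must be integers"
  and attribute access like instance.uuid fails similarly:

    def add_instance_fault(context, instance, host):
        # expects 'instance' to be a dict-like object with a 'uuid' entry
        return 'fault for %s on %s' % (instance['uuid'], host)

    ctxt = object()
    instance = {'uuid': '1f24201e-4eac-4dc4-9532-8fb863949a09'}

    try:
        # wrong positional order: 'instance' is bound to the host string
        add_instance_fault(ctxt, 'compute-1', instance)
    except TypeError as exc:
        print(exc)                     # string indices must be integers

    # keyword arguments make the binding explicit and order-proof
    print(add_instance_fault(context=ctxt, instance=instance, host='compute-1'))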

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443375] Re: Version API doesn't return microversion info

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443375

Title:
  Version API doesn't return microversion info

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Now Nova's version API doesn't return microversion info.

  As described in the api-microversions nova-spec, the versions API needs to
  expose the minimum and maximum supported microversions, because clients need
  to discover supported features through the API.
  That is very important for interoperability.
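
  A hedged sketch of the kind of version document this implies (field names
  follow the api-microversions design; the concrete values and URL here are
  illustrative only):

    version_document = {
        "versions": [{
            "id": "v2.1",
            "status": "CURRENT",
            "version": "2.3",       # maximum supported microversion (example)
            "min_version": "2.1",   # minimum supported microversion (example)
            "links": [{"rel": "self", "href": "http://controller:8774/v2.1/"}],
        }]
    }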

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444021] Re: HostState.consume_from_instance fails when instance has numa topology

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444021

Title:
  HostState.consume_from_instance fails when instance has numa topology

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:

  The consume_from_instance method will throw an exception if the instance has
  a NUMA topology defined.
  This happens because 'requests' is not retrieved from the pci_requests object.

  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py, line 142, in 
inner
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/manager.py, line 86, in select_destinations
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher 
filter_properties)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/filter_scheduler.py, line 67, in 
select_destinations
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher 
filter_properties)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/filter_scheduler.py, line 163, in _schedule
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher 
chosen_host.obj.consume_from_instance(instance_properties)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/host_manager.py, line 280, in 
consume_from_instance
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher 
pci_requests=pci_requests, pci_stats=self.pci_stats)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/virt/hardware.py, line 1034, in numa_fit_instance_to_host
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher cells)):
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/pci/stats.py, line 222, in support_requests
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher for r in 
requests])
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/pci/stats.py, line 196, in _apply_request
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher 
matching_pools = self._filter_pools_for_spec(pools, request.spec)
  2015-04-14 14:41:39.243 TRACE oslo_messaging.rpc.dispatcher AttributeError: 
'unicode' object has no attribute 'spec'

  If the instance has pci_requests this may not work either, as apply_requests
  will remove PCI devices from the pool, so numa_fit_instance_to_host may fail
  (because there are no devices left).
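
  An illustrative sketch of the mismatch described above (class and function
  names are simplified stand-ins, not the merged fix): the PCI-stats side
  expects the list held in pci_requests.requests, so the wrapper object has to
  be unwrapped before it is handed down:

    class InstancePCIRequest(object):
        def __init__(self, spec):
            self.spec = spec

    class InstancePCIRequests(object):
        def __init__(self, requests):
            self.requests = requests      # list of InstancePCIRequest objects

    def support_requests(pools, requests):
        # stand-in for PciDeviceStats.support_requests: items must expose .spec
        return all(r.spec for r in requests)

    pci_requests = InstancePCIRequests(
        requests=[InstancePCIRequest(spec=[{'vendor_id': '8086'}])])

    # buggy shape: support_requests(pools, pci_requests) iterates the wrong
    # object and fails with errors like "'unicode' object has no attribute
    # 'spec'"; the fixed shape passes the inner list instead:
    print(support_requests(pools=[], requests=pci_requests.requests))   # True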

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444300] Re: nova-compute service doesn't restart if resize operation fails

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444300

Title:
  nova-compute service doesn't restart if resize operation fails

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  If an instance is resizing and the user tries to delete it, the instance
  gets deleted successfully. After the instance deletion, the greenthread which
  was resizing the instance raises an InstanceNotFound error, which is caught
  in errors_out_migration and raises KeyError: 'migration'.

  Now if the user tries to restart the n-cpu service, it fails with an
  InstanceNotFound error.

  Steps to reproduce:
  1. Create instance
  2. Resize instance
  3. Delete instance while resize is in progress (scp/rsync process is running)
  4. Instance is deleted successfully and instance files are cleaned from 
source compute node
  5. When the scp/rsync process completes it throws InstanceNotFound, and the
  migration remains in 'migrating' status. After catching the InstanceNotFound
  error, the errors_out_migration decorator throws KeyError: 'migration',
  because 'migration' is expected as a keyword argument but is passed
  positionally (see the sketch after the traceback below).
  It throws the error below:

  2015-04-14 23:29:12.466 ERROR nova.compute.manager 
[req-2b4e3718-a1fa-4603-bd9e-6c9481f75e16 demo demo] [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] Setting instance vm_state to ERROR
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] Traceback (most recent call last):
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/opt/stack/nova/nova/compute/manager.py, line 6358, in 
_error_out_instance_on_exception
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] yield
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/opt/stack/nova/nova/compute/manager.py, line 3984, in resize_instance
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] timeout, retry_interval)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 6318, in 
migrate_disk_and_power_off
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] shared_storage)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85, in 
__exit__
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] six.reraise(self.type_, self.value, 
self.tb)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 6313, in 
migrate_disk_and_power_off
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] libvirt_utils.copy_image(from_path, 
img_path, host=dest)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/opt/stack/nova/nova/virt/libvirt/utils.py, line 327, in copy_image
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] execute('scp', src, dest)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/opt/stack/nova/nova/virt/libvirt/utils.py, line 55, in execute
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] return utils.execute(*args, **kwargs)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File /opt/stack/nova/nova/utils.py, 
line 206, in execute
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] return processutils.execute(*cmd, 
**kwargs)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py, line 
238, in execute
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] cmd=sanitized_cmd)
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] ProcessExecutionError: Unexpected error 
while running command.
  2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] 
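
  A minimal sketch of the decorator pitfall mentioned in step 5 (illustrative
  only, not nova's errors_out_migration): a wrapper that only looks in kwargs
  breaks with KeyError as soon as the caller passes 'migration' positionally:

    import functools

    def errors_out_migration(f):
        """Illustrative decorator assuming 'migration' arrives as a kwarg."""
        @functools.wraps(f)
        def wrapper(self, context, *args, **kwargs):
            try:
                return f(self, context, *args, **kwargs)
            except Exception:
                kwargs['migration'].status = 'error'   # KeyError if positional
                raise
        return wrapper

    class Migration(object):
        status = 'migrating'

    class Manager(object):
        @errors_out_migration
        def resize_instance(self, context, instance, migration):
            raise RuntimeError('resize failed')

    m = Manager()
    try:
        # 'migration' passed positionally -> decorator hits KeyError: 'migration'
        m.resize_instance(None, 'instance-uuid', Migration())
    except KeyError as exc:
        print('decorator failed with KeyError:', exc)

    try:
        # passing it as a keyword lets the decorator do its job
        m.resize_instance(None, 'instance-uuid', migration=Migration())
    except RuntimeError as exc:
        print('original error propagated:', exc)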

[Yahoo-eng-team] [Bug 1442602] Re: live migration fails during destination host check

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442602

Title:
  live migration fails during destination host check

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  steps to reproduce:
  1. boot instance
  2. call nova live-migration instance

  during live-migration the RPC call check_can_live_migrate_destination()
  fails with:
  2015-04-10 06:45:37.706 ERROR oslo_messaging.rpc.dispatcher 
[req-9a4b9986-2cfc-4c57-8622-ef6f84e3dfd2 admin admin] Exception during message 
handling: check_can_live_migrate_destination() takes exactly 6 arguments (5 
given)
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher TypeError: 
check_can_live_migrate_destination() takes exactly 6 arguments (5 given)
  2015-04-10 06:45:37.706 TRACE oslo_messaging.rpc.dispatcher 
  2015-04-10 06:45:37.708 ERROR oslo_messaging._drivers.common 
[req-9a4b9986-2cfc-4c57-8622-ef6f84e3dfd2 admin admin] Returning exception 
check_can_live_migrate_destination() takes exactly 6 arguments (5 given) to 
caller
  2015-04-10 06:45:37.708 ERROR oslo_messaging._drivers.common 
[req-9a4b9986-2cfc-4c57-8622-ef6f84e3dfd2 admin admin] ['Traceback (most recent 
call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply\nexecutor_callback))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch\nexecutor_callback)\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', 'TypeError: 
check_can_live_migrate_destination() takes exactly 6 arguments (5 given)\n']

  
  in nova/compute/manager.py the _ComputeV4Proxy class has this method:

      def check_can_live_migrate_destination(self, ctxt, instance, destination,
                                             block_migration, disk_over_commit):
          return self.manager.check_can_live_migrate_destination(
              ctxt, instance, destination, block_migration, disk_over_commit)

  here is the signature of the ComputeManager class method:

      def check_can_live_migrate_destination(self, ctxt, instance,
                                             block_migration, disk_over_commit):

  There is no 'destination' param in it. It should either be removed from the
  rpcapi or added to the implementation. _ComputeV4Proxy was added by patch
  ebfa09fa197a1d88d1b3ab1f308232c3df7dc009.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445335] Re: create/delete flavor permissions should be controlled by policy.json

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445335

Title:
  create/delete flavor permissions should be controlled by policy.json

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Invalid

Bug description:
  The create/delete flavor REST API always expects the user to have admin
  privileges and ignores the rule defined in nova's policy.json.
  This behavior is observed after these changes:
  https://review.openstack.org/#/c/150352/.

  The expected behavior is that the permissions are controlled as per
  the rule defined in the policy file and should not mandate that only
  an admin should be able to create/delete a flavor
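
  A small, self-contained sketch of the difference (the rule name and helpers
  are illustrative; this is not nova's actual code path): the hard-coded admin
  check never consults the configurable rule, while a policy-driven check does:

    POLICY_RULES = {
        # an operator could relax this in policy.json, e.g. "role:flavor_admin"
        'os_compute_api:os-flavor-manage':
            lambda ctx: ctx['is_admin'] or 'flavor_admin' in ctx['roles'],
    }

    def enforce(context, rule):
        if not POLICY_RULES[rule](context):
            raise RuntimeError('policy does not allow %s' % rule)

    def create_flavor_hardcoded(context):
        if not context['is_admin']:          # ignores the configurable rule
            raise RuntimeError('admin required')
        return 'flavor created'

    def create_flavor_with_policy(context):
        enforce(context, 'os_compute_api:os-flavor-manage')
        return 'flavor created'

    delegate = {'is_admin': False, 'roles': ['flavor_admin']}
    print(create_flavor_with_policy(delegate))   # allowed by the relaxed rule
    try:
        create_flavor_hardcoded(delegate)        # rejected despite the rule
    except RuntimeError as exc:
        print(exc)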

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444439] Re: Resource tracker: unable to start nova compute

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/139

Title:
  Resource tracker: unable to start nova compute

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  After a failure of the resize and a deletion of the instance, I am unable to
  restart nova-compute due to the exception below. The instance was deleted via
  the nova API.

  The DB is as follows:
  mysql select * from migrations;
  
+-+-+++--+--+---++--+--+--+--+--+-+
  | created_at  | updated_at  | deleted_at | id | 
source_compute   | dest_compute | dest_host | status | 
instance_uuid| old_instance_type_id | 
new_instance_type_id | source_node  | dest_node| deleted |
  
+-+-+++--+--+---++--+--+--+--+--+-+
  | 2015-04-15 09:44:02 | 2015-04-15 09:44:08 | NULL   |  1 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | post-migrating | 
42264e24-1385-41f1-8dfc-120a1891ab05 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  | 2015-04-15 09:48:13 | 2015-04-15 10:19:48 | NULL   |  2 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | reverted   | 
fcab4bde-d93e-4d79-ae35-9d1306da10a4 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  | 2015-04-15 10:23:56 | 2015-04-15 10:24:03 | NULL   |  3 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | post-migrating | 
d074bbc0-b912-4c85-a02b-aabf56d45f0b |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  | 2015-04-15 10:27:45 | 2015-04-15 10:28:16 | NULL   |  4 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | reverted   | 
21e59c96-fa2f-45e3-9070-e982a2dafea6 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  | 2015-04-15 10:28:43 | 2015-04-15 10:29:16 | NULL   |  5 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | confirming | 
21e59c96-fa2f-45e3-9070-e982a2dafea6 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  | 2015-04-15 10:35:15 | 2015-04-15 10:53:16 | NULL   |  6 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | confirmed  | 
4abd75b5-bb91-4ce7-a928-2a96941ea9cb |   10 |   
14 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  | 2015-04-15 10:35:39 | 2015-04-15 10:53:17 | NULL   |  7 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | confirmed  | 
5e01bddb-3978-4f6f-a4d3-6d24ed31afa4 |   14 |   
10 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  | 2015-04-15 10:55:01 | 2015-04-15 10:55:02 | NULL   |  8 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | migrating  | 
20017567-5c83-4918-b269-525169009026 |   10 |   
15 | domain-c167(DVS) | domain-c167(DVS) |   0 |
  
+-+-+++--+--+---++--+--+--+--+--+-+
  8 rows in set (0.00 sec)

  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 145, in wait
  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup x.wait()
  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 47, in wait
  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 175, in 
wait
  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 121, in wait
  2015-04-15 04:47:04.821 TRACE 

[Yahoo-eng-team] [Bug 1438238] Re: Several concurent scheduling requests for CPU pinning may fail due to racy host_state handling

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438238

Title:
  Several concurent scheduling requests for CPU pinning may fail due to
  racy host_state handling

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The issue happens when multiple scheduling attempts that request CPU pinning 
are done in parallel.
   

  015-03-25T14:18:00.222 controller-0 nova-scheduler err Exception
  during message handling: Cannot pin/unpin cpus [4] from the following
  pinned set [3, 4, 5, 6, 7, 8, 9]

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  Traceback (most recent call last):

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File /usr/lib64/python2.7/site-
  packages/oslo/messaging/rpc/dispatcher.py, line 134, in
  _dispatch_and_reply

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  incoming.message))

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File /usr/lib64/python2.7/site-
  packages/oslo/messaging/rpc/dispatcher.py, line 177, in _dispatch

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  return self._do_dispatch(endpoint, method, ctxt, args)

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File /usr/lib64/python2.7/site-
  packages/oslo/messaging/rpc/dispatcher.py, line 123, in _do_dispatch

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  result = getattr(endpoint, method)(ctxt, **new_args)

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File /usr/lib64/python2.7/site-
  packages/oslo/messaging/rpc/server.py, line 139, in inner

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  return func(*args, **kwargs)

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File ./usr/lib64/python2.7/site-packages/nova/scheduler/manager.py,
  line 86, in select_destinations

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File ./usr/lib64/python2.7/site-
  packages/nova/scheduler/filter_scheduler.py, line 80, in
  select_destinations

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File ./usr/lib64/python2.7/site-
  packages/nova/scheduler/filter_scheduler.py, line 241, in _schedule

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File ./usr/lib64/python2.7/site-
  packages/nova/scheduler/host_manager.py, line 266, in
  consume_from_instance

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File ./usr/lib64/python2.7/site-packages/nova/virt/hardware.py, line
  1472, in get_host_numa_usage_from_instance

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File ./usr/lib64/python2.7/site-packages/nova/virt/hardware.py, line
  1344, in numa_usage_from_instances

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  File ./usr/lib64/python2.7/site-packages/nova/objects/numa.py, line
  91, in pin_cpus

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher
  CPUPinningInvalid: Cannot pin/unpin cpus [4] from the following pinned
  set [3, 4, 5, 6, 7, 8, 9]

  2015-03-25 14:18:00.221 34127 TRACE oslo.messaging.rpc.dispatcher

  What is likely happening is:

  * nova-scheduler is handling several RPC calls to select_destinations
  at the same time, in multiple greenthreads

  * greenthread 1 runs the NUMATopologyFilter and selects a cpu on a
  particular compute node, updating host_state.instance_numa_topology

  * greenthread 1 then blocks for some reason

  * greenthread 2 runs the NUMATopologyFilter and selects the same cpu
  on the same compute node, updating host_state.instance_numa_topology.
  This also seems like an issue if a different cpu was selected, as it
  would be overwriting the instance_numa_topology selected by
  greenthread 1.

  * greenthread 2 then blocks for some reason

  * greenthread 1 gets scheduled and calls consume_from_instance, which
  consumes the numa resources based on what is in
  host_state.instance_numa_topology

  *  greenthread 1 completes the scheduling operation

  * greenthread 2 gets scheduled and calls consume_from_instance, which
  consumes the numa resources based on what is in
  host_state.instance_numa_topology - since the resources were already
  consumed by greenthread 1, we get the exception above (a toy model of this
  check-then-consume window is sketched below)
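
  A toy model of that check-then-consume window (illustrative only; the real
  scheduler uses greenthreads and full NUMA objects, and a plain lock stands in
  here): doing the check and the consume as one atomic claim removes the race:

    import threading

    class HostState(object):
        def __init__(self, free_cpus):
            self.free_cpus = set(free_cpus)
            self._lock = threading.Lock()

        def claim_cpu(self, cpu):
            with self._lock:                 # check + consume in one step
                if cpu not in self.free_cpus:
                    raise RuntimeError('Cannot pin cpu %s: already pinned' % cpu)
                self.free_cpus.remove(cpu)

    host = HostState(free_cpus=[4])
    host.claim_cpu(4)        # first request wins
    try:
        host.claim_cpu(4)    # second request fails cleanly instead of racing
    except RuntimeError as exc:
        print(exc)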

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444630] Re: nova-compute should stop handling virt lifecycle events when it's shutting down

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444630

Title:
  nova-compute should stop handling virt lifecycle events when it's
  shutting down

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This is a follow on to bug 1293480 and related to bug 1408176 and bug
  1443186.

  There can be a race when rebooting a compute host where libvirt is
  shutting down guest VMs and sending STOPPED lifecycle events up to
  nova compute which then tries to stop them via the stop API, which
  sometimes works and sometimes doesn't - the compute service can go
  down with a vm_state of ACTIVE and task_state of powering-off, which
  isn't resolved on host reboot.

  Sometimes the stop API completes and the instance is stopped with
  power_state=4 (shutdown) in the nova database.  When the host comes
  back up and libvirt restarts, it starts up the guest VMs which sends
  the STARTED lifecycle event and nova handles that but because the
  vm_state in the nova database is STOPPED and the power_state is 1
  (running) from the hypervisor, nova thinks it started up unexpectedly
  and stops it:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  So nova shuts the running guest down.

  Actually the block in:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  conflicts with the statement in power_state.py:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/power_state.py?id=2015.1.0rc1#n19

  The hypervisor is always considered the authority on the status
  of a particular VM, and the power_state in the DB should be viewed as a
  snapshot of the VMs's state in the (recent) past.

  Anyway, that's a different issue, but the point is that when nova-compute
  is shutting down it should stop accepting lifecycle events from the
  hypervisor (virt driver code), since it can't really act on them reliably
  anyway - we can leave any sync-up that needs to happen in init_host() in
  the compute manager.
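
  A minimal sketch of the kind of guard being asked for, assuming a flag that
  is flipped while the service shuts down (names are illustrative, not the
  actual Nova change):

  class ComputeManager(object):
      def __init__(self):
          self._accept_events = True

      def cleanup_host(self):
          # called while nova-compute is shutting down
          self._accept_events = False

      def handle_lifecycle_event(self, event):
          if not self._accept_events:
              # ignore STOPPED/STARTED events from the virt driver during
              # shutdown; any sync-up happens later in init_host()
              return
          self._sync_instance_power_state(event)

      def _sync_instance_power_state(self, event):
          pass  # placeholder for the real power-state sync logic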

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441169] Re: can't schedule vm with numa topology and pci device

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441169

Title:
  can't schedule vm with numa topology and pci device

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  NUMATopologyFilter will always return 0 hosts for a VM that has a numa
  topology defined and has requested a pci device.

  
  This happens because the PCI numa_node information is converted to a string
  in PciDevicePool, while PciDeviceStats expects numa_node to be an int.
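
  A hedged illustration of the type mismatch (device IDs below are
  placeholders): a pool whose numa_node was serialized as a string never
  matches a request that carries an int.

  pool = {'vendor_id': '8086', 'product_id': '10fb', 'numa_node': '1'}    # str
  request = {'vendor_id': '8086', 'product_id': '10fb', 'numa_node': 1}   # int

  matches = all(pool.get(k) == v for k, v in request.items())
  print(matches)  # False, so NUMATopologyFilter ends up returning 0 hosts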

  
  2015-04-07 14:08:51.399 DEBUG nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Starting with 1 host(s) 
from (pid=47627) get_filtered_objects /shared/stack/nova/nova/filters.py:70
  2015-04-07 14:08:51.399 DEBUG nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Filter RamFilter returned 
1 host(s) from (pid=47627) get_filtered_objects 
/shared/stack/nova/nova/filters.py:84
  2015-04-07 14:08:51.399 DEBUG nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Filter ComputeFilter 
returned 1 host(s) from (pid=47627) get_filtered_objects 
/shared/stack/nova/nova/filters.py:84
  2015-04-07 14:08:51.400 DEBUG nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Filter 
AvailabilityZoneFilter returned 1 host(s) from (pid=47627) get_filtered_objects 
/shared/stack/nova/nova/filters.py:84
  2015-04-07 14:08:51.400 DEBUG nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Filter 
ComputeCapabilitiesFilter returned 1 host(s) from (pid=47627) 
get_filtered_objects /shared/stack/nova/nova/filters.py:84
  2015-04-07 14:08:51.400 DEBUG nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Filter 
ImagePropertiesFilter returned 1 host(s) from (pid=47627) get_filtered_objects 
/shared/stack/nova/nova/filters.py:84
  2015-04-07 14:08:51.400 DEBUG nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Filter 
PciPassthroughFilter returned 1 host(s) from (pid=47627) get_filtered_objects 
/shared/stack/nova/nova/filters.py:84
  2015-04-07 14:08:53.348 INFO nova.filters 
[req-d417e042-2d61-4fc5-a38b-8898f4f512d0 admin demo] Filter NUMATopologyFilter 
returned 0 hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444728] Re: KeyError: 'uuid' trace in n-cpu logs when logging with instance=instance kwarg

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444728

Title:
  KeyError: 'uuid' trace in n-cpu logs when logging with
  instance=instance kwarg

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in Logging configuration library for OpenStack:
  Invalid

Bug description:
  This is with oslo.log 1.0.0:

  http://logs.openstack.org/83/172083/7/gate/gate-tempest-dsvm-neutron-
  full/bdb2cf0/logs/screen-n-cpu.txt.gz#_2015-04-15_13_49_06_485

  Traceback (most recent call last):
File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
182, in format
  % instance)
  KeyError: 'uuid'

  The change was made here: https://review.openstack.org/#/c/160007/

  Looks like that should be formatting with a dict using the uuid key.
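
  A hedged illustration of the failure mode (values made up): %(uuid)s
  formatting raises KeyError when the mapping passed as the instance kwarg
  has no 'uuid' key.

  instance = {'id': 42}  # no 'uuid' key
  try:
      prefix = '[instance: %(uuid)s] ' % instance  # what the formatter does
  except KeyError as err:
      print('KeyError: %s' % err)  # KeyError: 'uuid'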

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445040] Re: InstancePCIRequests.obj_from_db fails to get requests from db

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445040

Title:
  InstancePCIRequests.obj_from_db fails to get requests from db

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  InstancePCIRequests.obj_from_db assumes db_requests is a dict of whole-row
  values from the instance_extra table

  
https://github.com/openstack/nova/blob/d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0/nova/objects/instance_pci_requests.py#L83

  
  but when called from Instance._from_db_object 

  
https://github.com/openstack/nova/blob/d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0/nova/objects/instance.py#L507

  It gets only the value of the pci_requests column.

  As a result, when loading an instance from the DB, pci_requests won't be
  populated with the values stored in the DB.
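
  A hedged illustration of the shape mismatch (field values made up):
  obj_from_db indexes its argument as a whole instance_extra row, but the
  caller hands it only the column value.

  import json

  db_row = {'instance_uuid': 'abc', 'pci_requests': json.dumps([{'count': 1}])}

  # what obj_from_db expects to receive:
  requests = json.loads(db_row['pci_requests'])   # [{'count': 1}]

  # what Instance._from_db_object actually passes - just the column value:
  column_value = db_row['pci_requests']
  # column_value['pci_requests'] raises TypeError, so the loaded instance
  # ends up with empty pci_requests instead of the values stored in the DB.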

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440968] Re: AttributeError: 'module' object has no attribute 'DatastorePath'

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440968

Title:
  AttributeError: 'module' object has no attribute 'DatastorePath'

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The following traceback is seen when resizing ephemeral disks as part
  of tempest in VMware NSX CI (Minesweeper) :-

  2015-04-06 18:15:02.268 ERROR nova.compute.manager 
[req-038752a5-a600-4ba4-a689-88069496b2d9 MigrationsAdminTest-764545366 
MigrationsAdminTest-39143519] [instance: 7ce85d70-7f8a-4387-b5f6-6c4119322fd5] 
Setting instance vm_state to ERROR
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] Traceback (most recent call last):
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]   File 
/opt/stack/nova/nova/compute/manager.py, line 4091, in finish_resize
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] disk_info, image)
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]   File 
/opt/stack/nova/nova/compute/manager.py, line 4057, in _finish_resize
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] old_instance_type)
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85, in 
__exit__
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] six.reraise(self.type_, self.value, 
self.tb)
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]   File 
/opt/stack/nova/nova/compute/manager.py, line 4052, in _finish_resize
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] block_device_info, power_on)
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 299, in finish_migration
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] block_device_info, power_on)
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]   File 
/opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 1264, in finish_migration
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] 
self._resize_create_ephemerals(vm_ref, instance, block_device_info)
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]   File 
/opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 1146, in 
_resize_create_ephemerals
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] folder = 
ds_util.DatastorePath.parse(vmdk.path).dirname
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5] AttributeError: 'module' object has no 
attribute 'DatastorePath'
  2015-04-06 18:15:02.268 30325 TRACE nova.compute.manager [instance: 
7ce85d70-7f8a-4387-b5f6-6c4119322fd5]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 984996] Re: Instance directory does not exist: Unable to pre-create chardev file console.log

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/984996

Title:
  Instance directory does not exist: Unable to pre-create chardev file
  console.log

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When asking for reboot of an instance, I see the following backtrace
  in nova-compute.log (Nova 2012.1).

  
  2012-04-18 12:29:33 TRACE nova.rpc.amqp Traceback (most recent call last):
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py, line 252, in _process_data
  2012-04-18 12:29:33 TRACE nova.rpc.amqp rval = node_func(context=ctxt, 
**node_args)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped
  2012-04-18 12:29:33 TRACE nova.rpc.amqp return f(*args, **kw)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 153, in 
decorated_function
  2012-04-18 12:29:33 TRACE nova.rpc.amqp function(self, context, 
instance_uuid, *args, **kwargs)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 177, in 
decorated_function
  2012-04-18 12:29:33 TRACE nova.rpc.amqp sys.exc_info())
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
  2012-04-18 12:29:33 TRACE nova.rpc.amqp self.gen.next()
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 171, in 
decorated_function
  2012-04-18 12:29:33 TRACE nova.rpc.amqp return function(self, context, 
instance_uuid, *args, **kwargs)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 898, in 
reboot_instance
  2012-04-18 12:29:33 TRACE nova.rpc.amqp reboot_type)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped
  2012-04-18 12:29:33 TRACE nova.rpc.amqp return f(*args, **kw)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 707, 
in reboot
  2012-04-18 12:29:33 TRACE nova.rpc.amqp return 
self._hard_reboot(instance, network_info)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 762, 
in _hard_reboot
  2012-04-18 12:29:33 TRACE nova.rpc.amqp self._create_new_domain(xml)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 1557, 
in _create_new_domain
  2012-04-18 12:29:33 TRACE nova.rpc.amqp 
domain.createWithFlags(launch_flags)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/libvirt.py, line 547, in createWithFlags
  2012-04-18 12:29:33 TRACE nova.rpc.amqp if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
  2012-04-18 12:29:33 TRACE nova.rpc.amqp libvirtError: Unable to pre-create 
chardev file '/var/lib/nova/instances/instance-047b/console.log': No such 
file or directory
  2012-04-18 12:29:33 TRACE nova.rpc.amqp
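
  A hedged sketch of the kind of pre-check that avoids the failure above
  (paths and helper names are assumptions, not the actual Nova fix): make
  sure the instance directory exists before libvirt tries to pre-create
  console.log in it.

  import os

  def ensure_instance_dir(instances_path, instance_name):
      path = os.path.join(instances_path, instance_name)
      if not os.path.isdir(path):
          os.makedirs(path)  # recreate it so console.log can be pre-created
      return path

  # e.g. ensure_instance_dir('/var/lib/nova/instances', 'instance-047b')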

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/984996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429093] Re: nova allows to boot images with virtual size > root_gb specified in flavor

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429093

Title:
  nova allows to boot images with virtual size > root_gb specified in
  flavor

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  It's currently possible to boot an instance from a QCOW2 image which
  has a virtual size larger than the root_gb size specified in the given
  flavor.

  Steps to reproduce:

  1. Download a QCOW2 image (e.g. Cirros -
  https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img)

  2. Resize the image to a reasonable size:

  qemu-img resize cirros-0.3.0-i386-disk.img +9G

  3. Upload the image to Glance:

  glance image-create --file cirros-0.3.0-i386-disk.img --name cirros-
  10GB --is-public True --progress --container-format bare --disk-format
  qcow2

  4. Boot the first VM using a 'correct' flavor (root_gb > virtual size
  of the Cirros image), e.g. m1.small (root_gb = 20)

  nova boot --image cirros-10GB --flavor m1.small demo-ok

  5. Wait until the VM boots.

  6. Boot the second VM using an 'incorrect' flavor (root_gb < virtual
  size of the Cirros image), e.g. m1.tiny (root_gb = 1):

  nova boot --image cirros-10GB --flavor m1.tiny demo-should-fail

  7. Wait until the VM boots.

  Expected result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ERROR state (failed with FlavorDiskTooSmall)

  Actual result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ACTIVE state
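
  A hedged sketch of the check implied by the expected result (not the actual
  Nova implementation): compare the image's virtual size, as reported by
  qemu-img, against the flavor's root_gb.

  import json
  import subprocess

  def virtual_size_bytes(image_path):
      out = subprocess.check_output(
          ['qemu-img', 'info', '--output=json', image_path])
      return json.loads(out)['virtual-size']

  def check_flavor_fits(image_path, flavor_root_gb):
      if virtual_size_bytes(image_path) > flavor_root_gb * 1024 ** 3:
          # this is what demo-should-fail should hit with m1.tiny
          raise Exception('FlavorDiskTooSmall')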

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433609] Re: Not adding a image block device mapping causes some valid boot requests to fail

2015-04-23 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433609

Title:
  Not adding a image block device mapping causes some valid boot
  requests to fail

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The following commit removed the code in the python nova client that
  would add an image block device mapping entry (source_type: image,
  destination_type: local) in preparation for fixing
  https://bugs.launchpad.net/nova/+bug/1377958.

  However, this makes some valid instance boot requests fail, because they
  no longer pass the block device mapping validation. An example would be:

  nova boot test-vm --flavor m1.medium --image centos-vm-32 --nic net-
  id=c3f40e33-d535-4217-916b-1450b8cd3987 --block-device
  id=26b7b917-2794-452a-
  95e5-2efb2ca6e32d,bus=sata,source=volume,bootindex=1

  This was previously a valid boot request, since the client would add a
  block device with boot_index=0, so validation would not fail.
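
  A hedged sketch of the entry the client used to prepend, based on the
  description above (delete_on_termination and the image id value are
  assumptions):

  image_id = '11111111-2222-3333-4444-555555555555'  # placeholder image UUID

  image_bdm = {
      'uuid': image_id,
      'source_type': 'image',
      'destination_type': 'local',
      'boot_index': 0,
      'delete_on_termination': True,  # assumption, not stated in the report
  }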

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1433609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434103] Re: SQL schema downgrades are no longer supported

2015-04-23 Thread yuntongjin
** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
 Assignee: (unassigned) = yuntongjin (yuntongjin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434103

Title:
  SQL schema downgrades are no longer supported

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Magnum - Containers for OpenStack:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Data Processing (Sahara):
  Fix Released
Status in Openstack Database (Trove):
  New

Bug description:
  Approved cross-project spec: https://review.openstack.org/152337

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1434103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415566] Re: Improve nova.conf configuration file with full content from upstream

2015-04-23 Thread David
Also affected. If a default file exists, can you please point me in that
direction?

The keystone.conf and glance.conf files have been very useful so far.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415566

Title:
  Improve nova.conf configuration file with full content from upstream

Status in OpenStack Compute (Nova):
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  Working through the documentation (in this case, specifically
  http://docs.openstack.org/juno/install-guide/install/apt/openstack-
  install-guide-apt-juno.pdf), most all of the config files reflect the
  things that need tweaking, with sections carefully defined.  For
  nova.conf, there is mention of [database], [glance], and
  [keystone_authtoken] sections, none of which exist.  After asking the
  OpenStack list, they asked me to open a bug against the file, so that
  it could be a bit better fleshed-out, and do a better job as a
  template.

  Thanks!

  -Ken

  Oh -- this is Ubuntu 14.04, against the OpenStack repo called out in
  the documentation.
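
  For reference, a hedged sketch of the three sections the install guide
  refers to; option names follow the Juno guide as I recall them, and the
  values are placeholders, not packaged defaults:

  [database]
  connection = mysql://nova:NOVA_DBPASS@controller/nova

  [glance]
  host = controller

  [keystone_authtoken]
  auth_uri = http://controller:5000/v2.0
  identity_uri = http://controller:35357
  admin_tenant_name = service
  admin_user = nova
  admin_password = NOVA_PASS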

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447884] [NEW] Boot from volume, block device allocate timeout cause VM error, but volume would be available later

2015-04-23 Thread Lan Qi song
Public bug reported:

When we try to boot multiple instances from volume (with a large image
source) at the same time, we usually get a block device allocation error,
as shown in the logs from nova-compute.log:

2015-03-30 23:22:46.920 6445 WARNING nova.compute.manager [-] Volume id: 
551ea616-e1c4-4ef2-9bf3-b0ca6d4474dc finished being created but was not set as 
'available'
2015-03-30 23:22:47.131 6445 ERROR nova.compute.manager [-] [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] Instance failed block device setup
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] Traceback (most recent call last):
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1829, in 
_prep_block_device
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] do_check_attach=do_check_attach) +
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 407, in 
attach_block_devices
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] map(_log_and_attach, 
block_device_mapping)
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 405, in 
_log_and_attach
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] bdm.attach(*attach_args, 
**attach_kwargs)
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 339, in 
attach
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] do_check_attach=do_check_attach)
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 46, in 
wrapped
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] ret_val = method(obj, context, *args, 
**kwargs)
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 229, in 
attach
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] volume_api.check_attach(context, 
volume, instance=instance)
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/volume/cinder.py, line 305, in 
check_attach
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] raise 
exception.InvalidVolume(reason=msg)
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] InvalidVolume: Invalid volume: status 
must be 'available'
2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]

This error leaves the VM in error status:

+--------------------------------------+--------+--------+----------------------+-------------+----------+
| ID                                   | Name   | Status | Task State           | Power State | Networks |
+--------------------------------------+--------+--------+----------------------+-------------+----------+
| 1fa2d7aa-8bd9-4a22-8538-0a07d9dae8aa | inst02 | ERROR  | block_device_mapping | NOSTATE     |          |
+--------------------------------------+--------+--------+----------------------+-------------+----------+

But the volume was in available status:

+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| a9ab2dc2-b117-44ef-8678-f71067a9e770 | available | None | 2    | None        | true     |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+


And when we terminate this VM, the volume will still exist, since there is no
volume attachment info stored for this VM.

This can be easily reproduced:
1. Add the following options to nova.conf on the compute node (make sure the
error

[Yahoo-eng-team] [Bug 1447883] [NEW] Restrict netmask of CIDR to avoid DHCP resync is not enough

2015-04-23 Thread Yushiro FURUKAWA
Public bug reported:

"Restrict netmask of CIDR to avoid DHCP resync" is not enough.
https://bugs.launchpad.net/neutron/+bug/1443798

I'd like to prevent the following cases:

[Condition]
  - Plugin: ML2
  - subnet with enable_dhcp is True

[Operations]
A. Specify [] (an empty list) for allocation_pools when creating/updating a subnet
---
$ curl -X POST -d '{"subnet": {"name": "test_subnet", "cidr":
"192.168.200.0/24", "ip_version": 4, "network_id":
"649c5531-338e-42b5-a2d1-4d49140deb02", "allocation_pools": []}}' -H
"x-auth-token: $TOKEN" -H "content-type: application/json"
http://127.0.0.1:9696/v2.0/subnets

Then, when the dhcp-agent creates its own DHCP port, the resync bug is
reproduced.

B. Create ports and exhaust allocation_pools
---
1. Create a subnet with 192.168.1.0/24. The DHCP port has already been created.
   gateway_ip: 192.168.1.1
   DHCP port: 192.168.1.2
   allocation_pools: {"start": "192.168.1.2", "end": "192.168.1.254"}
   The number of available IP addresses is 252.

2. Create non-DHCP ports and exhaust the IP addresses in allocation_pools.
   In this case, the user creates a port 252 times.
   The number of available IP addresses is 0.

3. The user deletes the DHCP port (192.168.1.2).
   The number of available IP addresses is 1.

4. The user creates a non-DHCP port.
   The number of available IP addresses is 0.
   Then the dhcp-agent tries to create its DHCP port, and the resync bug is reproduced.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447883

Title:
  Restrict netmask of CIDR to avoid DHCP resync is not enough

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  "Restrict netmask of CIDR to avoid DHCP resync" is not enough.
  https://bugs.launchpad.net/neutron/+bug/1443798

  I'd like to prevent the following cases:

  [Condition]
- Plugin: ML2
- subnet with enable_dhcp is True

  [Operations]
  A. Specify [] (an empty list) for allocation_pools when creating/updating a subnet
  ---
  $ curl -X POST -d '{"subnet": {"name": "test_subnet", "cidr":
  "192.168.200.0/24", "ip_version": 4, "network_id":
  "649c5531-338e-42b5-a2d1-4d49140deb02", "allocation_pools": []}}' -H
  "x-auth-token: $TOKEN" -H "content-type: application/json"
  http://127.0.0.1:9696/v2.0/subnets

  Then, when the dhcp-agent creates its own DHCP port, the resync bug is
  reproduced.

  B. Create ports and exhaust allocation_pools
  ---
  1. Create a subnet with 192.168.1.0/24. The DHCP port has already been created.
     gateway_ip: 192.168.1.1
     DHCP port: 192.168.1.2
     allocation_pools: {"start": "192.168.1.2", "end": "192.168.1.254"}
     The number of available IP addresses is 252.

  2. Create non-DHCP ports and exhaust the IP addresses in allocation_pools.
     In this case, the user creates a port 252 times.
     The number of available IP addresses is 0.

  3. The user deletes the DHCP port (192.168.1.2).
     The number of available IP addresses is 1.

  4. The user creates a non-DHCP port.
     The number of available IP addresses is 0.
     Then the dhcp-agent tries to create its DHCP port, and the resync bug is reproduced.
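
  A hedged sketch of the precondition behind both cases above (not the
  Neutron fix): once the allocation pools have no free address left, the
  dhcp-agent cannot allocate its own port and keeps resyncing.

  import netaddr

  def free_ips_in_pools(allocation_pools, used_ips):
      pool_set = netaddr.IPSet()
      for pool in allocation_pools:
          pool_set |= netaddr.IPSet(
              netaddr.IPRange(pool['start'], pool['end']).cidrs())
      return (pool_set - netaddr.IPSet(used_ips)).size

  # Case A: empty allocation_pools -> no address for the DHCP port at all.
  print(free_ips_in_pools([], []))                       # 0
  # Case B, after step 4: every pool address is taken by non-DHCP ports.
  pools = [{'start': '192.168.1.2', 'end': '192.168.1.254'}]
  used = ['192.168.1.%d' % i for i in range(2, 255)]
  print(free_ips_in_pools(pools, used))                  # 0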

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp