[Yahoo-eng-team] [Bug 1399098] [NEW] qemu-img convert should be skipped when migrating.

2014-12-04 Thread Hiroyuki Eguchi
Public bug reported:

Currently, the qemu image is always converted when resizing or migrating.

I guess this step exists to prevent the original disk image from being
corrupted when expanding the disk size.
So qemu-img convert should be skipped when migrating, and when resizing to a
flavor whose disk size is the same as the old one's.

qemu-img convert is an I/O-heavy process, so I'd like to skip it if
possible.
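
A hedged sketch of the intended check (all helper names here are hypothetical,
not the actual Nova code path):

    def migrate_disk(disk_path, old_size_gb, new_size_gb):
        # Only pay the qemu-img convert cost when the disk actually has to
        # grow; otherwise a plain copy of the image file is enough.
        if new_size_gb == old_size_gb:
            copy_disk(disk_path)                 # hypothetical plain-copy helper
        else:
            qemu_img_convert(disk_path)          # existing I/O-heavy conversion
            resize_disk(disk_path, new_size_gb)  # hypothetical resize helper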

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399098

Title:
  qemu-img convert should be skipped when migrating.

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, the qemu image is always converted when resizing or migrating.

  I guess this step exists to prevent the original disk image from being
  corrupted when expanding the disk size.
  So qemu-img convert should be skipped when migrating, and when resizing to a
  flavor whose disk size is the same as the old one's.

  qemu-img convert is an I/O-heavy process, so I'd like to skip it if
  possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399114] [NEW] when deleting the lb vip, the tap device is not deleted

2014-12-04 Thread yangzhenyu
Public bug reported:

Hi all,
  When I delete an lb vip that is in ERROR status, the tap device in the lbaas
namespace is not deleted. So when I add a new vip using the same IP address,
it cannot be accessed, because of the IP conflict.

   My Neutron version is Icehouse.

** Affects: neutron
 Importance: Undecided
 Assignee: yangzhenyu (cdyangzhenyu)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yangzhenyu (cdyangzhenyu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399114

Title:
  when deleting the lb vip, the tap device is not deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi all,
    When I delete an lb vip that is in ERROR status, the tap device in the
  lbaas namespace is not deleted. So when I add a new vip using the same IP
  address, it cannot be accessed, because of the IP conflict.

   My Neutron version is Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399127] [NEW] Hyper-V: copy_vm_console_logs does not behave as expected

2014-12-04 Thread Claudiu Belu
Public bug reported:

The method nova.virt.hyperv.vmops.VMOps.copy_vm_console_logs does not
behave as expected. For example, it should copy the local files
'local.file', 'local.file.1' to the remote locations 'remote.file',
'remote.file.1' respectively. Instead, it copies 'local.file' to
'local.file.1' and 'remote.file' to 'remote.file.1'.

This issue was discovered while creating unit tests:
https://review.openstack.org/#/c/138934/
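
For clarity, a minimal sketch of the expected pairing (file names taken from
the example above; shutil.copy stands in for the Hyper-V pathutils copy
helper):

    from shutil import copy  # stands in for the pathutils copy helper

    local_paths = ['local.file', 'local.file.1']
    remote_paths = ['remote.file', 'remote.file.1']

    # Each local log should be copied to its corresponding remote path,
    # rather than local -> local.1 and remote -> remote.1.
    for local_path, remote_path in zip(local_paths, remote_paths):
        copy(local_path, remote_path)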

Trace:

2014-12-04 08:25:51.623 | Traceback (most recent call last):
2014-12-04 08:25:51.624 |   File 
"nova/tests/unit/virt/hyperv/test_vmops.py", line 868, in 
test_copy_vm_console_logs
2014-12-04 08:25:51.624 | mock.sentinel.FAKE_PATH, 
mock.sentinel.FAKE_REMOTE_PATH)
2014-12-04 08:25:51.624 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 846, in assert_called_once_with
2014-12-04 08:25:51.624 | return self.assert_called_with(*args, 
**kwargs)
2014-12-04 08:25:51.625 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 835, in assert_called_with
2014-12-04 08:25:51.625 | raise AssertionError(msg)
2014-12-04 08:25:51.625 | AssertionError: Expected call: 
copy(sentinel.FAKE_PATH, sentinel.FAKE_REMOTE_PATH)
2014-12-04 08:25:51.626 | Actual call: copy(sentinel.FAKE_PATH, 
sentinel.FAKE_PATH_ARCHIVED)

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v juno-backport-potential

** Tags added: juno-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399127

Title:
  Hyper-V: copy_vm_console_logs does not behave as expected

Status in OpenStack Compute (Nova):
  New

Bug description:
  The method nova.virt.hyperv.vmops.VMOps.copy_vm_console_logs does not
  behave as expected. For example, it should copy the local files
  'local.file', 'local.file.1' to the remote locations 'remote.file',
  'remote.file.1' respectively. Instead, it copies 'local.file' to
  'local.file.1' and 'remote.file' to 'remote.file.1'.

  This issue was discovered while creating unit tests:
  https://review.openstack.org/#/c/138934/

  Trace:

  2014-12-04 08:25:51.623 | Traceback (most recent call last):
  2014-12-04 08:25:51.624 |   File 
"nova/tests/unit/virt/hyperv/test_vmops.py", line 868, in 
test_copy_vm_console_logs
  2014-12-04 08:25:51.624 | mock.sentinel.FAKE_PATH, 
mock.sentinel.FAKE_REMOTE_PATH)
  2014-12-04 08:25:51.624 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 846, in assert_called_once_with
  2014-12-04 08:25:51.624 | return self.assert_called_with(*args, 
**kwargs)
  2014-12-04 08:25:51.625 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 835, in assert_called_with
  2014-12-04 08:25:51.625 | raise AssertionError(msg)
  2014-12-04 08:25:51.625 | AssertionError: Expected call: 
copy(sentinel.FAKE_PATH, sentinel.FAKE_REMOTE_PATH)
  2014-12-04 08:25:51.626 | Actual call: copy(sentinel.FAKE_PATH, 
sentinel.FAKE_PATH_ARCHIVED)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399126] [NEW] identity panel link in footer

2014-12-04 Thread Jiri Tomasek
Public bug reported:

There is a link in the header that points to the Identity panel. This breaks
the plugin mechanism: if the Identity panel is disabled, Horizon throws an
error:
NoReverseMatch at /infrastructure/nodes/
u'identity' is not a registered namespace inside 'horizon'
In template /home/stack/horizon/openstack_dashboard/templates/_header.html, 
error at line 20

This is caused by the URL pointing to the nonexistent Identity panel.

To reproduce:
Add an openstack_dashboard/local/enabled/_30_identity.py file to Horizon with
the following content:
DASHBOARD = 'identity'
DISABLED = True
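
One possible way to make such a link tolerant of a disabled dashboard (an
illustrative sketch, not the actual Horizon fix; the URL name is an example):

    from django.core.urlresolvers import NoReverseMatch, reverse

    def identity_link_url():
        # Return None instead of raising when the 'identity' dashboard is
        # disabled and its URL namespace is therefore not registered.
        try:
            return reverse('horizon:identity:projects:index')
        except NoReverseMatch:
            return None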

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: identity

** Description changed:

- There is a link in the footer that points to the identity panel. This breaks 
the plugin mechanism as if Identity panel is disabled, Horizon throws an error:
+ There is a link in the header that points to the identity panel. This breaks 
the plugin mechanism as if Identity panel is disabled, Horizon throws an error:
  NoReverseMatch at /infrastructure/nodes/
  u'identity' is not a registered namespace inside 'horizon'
  In template /home/stack/horizon/openstack_dashboard/templates/_header.html, 
error at line 20
  
  This is caused by the url pointing to nonexistent Identity panel.
  
  To reproduce:
  Add openstack_dashboard/local/enabled/_30_identity.py file to Horizon. File 
content:
  DASHBOARD = 'identity'
  DISABLED = True

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399126

Title:
  identity panel link in footer

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is a link in the header that points to the Identity panel. This breaks
  the plugin mechanism: if the Identity panel is disabled, Horizon throws an
  error:
  NoReverseMatch at /infrastructure/nodes/
  u'identity' is not a registered namespace inside 'horizon'
  In template /home/stack/horizon/openstack_dashboard/templates/_header.html, 
error at line 20

  This is caused by the URL pointing to the nonexistent Identity panel.

  To reproduce:
  Add an openstack_dashboard/local/enabled/_30_identity.py file to Horizon
  with the following content:
  DASHBOARD = 'identity'
  DISABLED = True

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399130] [NEW] Instances stuck at "build" stage

2014-12-04 Thread L00nix
Public bug reported:

Hello everybody

We have a problem with our OpenStack environment. Sometimes when we try
to start an instance, it gets stuck at the "build" stage.

The problem does not appear every time, only sometimes, and it does not
matter which image or which settings we choose.

In the log files, the only thing I see is this in scheduler.log:

2014-12-04 10:04:05.970 32066 DEBUG nova.openstack.common.periodic_task [-] 
Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks 
/usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-12-04 10:04:05.970 32066 DEBUG nova.openstack.common.loopingcall [-] 
Dynamic looping call sleeping for 60.00 seconds _inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py:132

(repeating)

All other log files seem to be OK. I have searched for this problem but
haven't found a solution yet.

About our environment:

- we have 200 vCPUs
- we have 1.3 TB of RAM
- we have 15 TB of storage
- we deployed it with Mirantis

There are already a lot of instances running, but the environment is not
overloaded, so starting new instances should be possible without any
trouble. Like I said, sometimes it works and sometimes it doesn't.

To be honest, I don't know if this is really a bug or just a
misconfiguration of the environment. But as I said, I searched for this
problem and haven't found any solution yet.

Hope we can get this problem out of the world. :)

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Hello everybody
  
  We have a problem with our openstack envoirenment. Sometimes when we try
- to start an instance, it stucks at the "spawning" stage.
+ to start an instance, it stucks at the "build" stage.
  
  The problem does not appear anytime, its only sometimes, it also does
  not matter which image we choose or which settings.
  
  In the logfiles the only thing i see is this in the "scheduler.log"
  
  2014-12-04 10:04:05.970 32066 DEBUG nova.openstack.common.periodic_task [-] 
Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks 
/usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
  2014-12-04 10:04:05.970 32066 DEBUG nova.openstack.common.loopingcall [-] 
Dynamic looping call sleeping for 60.00 seconds _inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py:132
  
  (repeating)
  
  All other logfiles seems to be ok. I have searched for this problem but
  didn't find yet a solution for this.
  
  About our envoirenment:
  
  -we have 200 VCPU's
  -we have 1.3 TB of RAM
  -we have 15TB of Space
  -we deployed it with mirantis
  
  there are already a lot of instances which are currently running, but
  the envoirenment is not overloaded, so starting new instances should be
  possible without any trouble. Like i said, sometimes it works and
  sometimes not.
  
  I have to be honest, i dont know if this is really a bug or just a
  misconfiguration of the envoirenment. But like i said i searched for
  this problem and I didnt find any solution yet.
  
  Hope we can get this problem out of the world. :)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399130

Title:
  Instances stuck at "build" stage

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hello everybody

  We have a problem with our OpenStack environment. Sometimes when we
  try to start an instance, it gets stuck at the "build" stage.

  The problem does not appear every time, only sometimes, and it does not
  matter which image or which settings we choose.

  In the log files, the only thing I see is this in scheduler.log:

  2014-12-04 10:04:05.970 32066 DEBUG nova.openstack.common.periodic_task [-] 
Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks 
/usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
  2014-12-04 10:04:05.970 32066 DEBUG nova.openstack.common.loopingcall [-] 
Dynamic looping call sleeping for 60.00 seconds _inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py:132

  (repeating)

  All other log files seem to be OK. I have searched for this problem
  but haven't found a solution yet.

  About our environment:

  - we have 200 vCPUs
  - we have 1.3 TB of RAM
  - we have 15 TB of storage
  - we deployed it with Mirantis

  There are already a lot of instances running, but the environment is not
  overloaded, so starting new instances should be possible without any
  trouble. Like I said, sometimes it works and sometimes it doesn't.

  To be honest, I don't know if this is really a bug or just a
  misconfiguration of the environment. But as I said, I searched for this
  problem and haven't found any solution yet.

  Hope we can get this

[Yahoo-eng-team] [Bug 1399149] [NEW] Horizon does not link to an object when assigning a floating IP to a load balancer

2014-12-04 Thread Maish
Public bug reported:

When assigning a floating IP to a load balancer, Horizon does not point
to any object.

Steps to reproduce:

Create Pool
Add VIP to created pool.

Allocate Floating IP to project
Associate IP to newly created VIP

The Instance field is empty.
When clicking on the link, the Instances page opens and an error is presented.

Version: Icehouse

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "ScreenShot of bug"
   
https://bugs.launchpad.net/bugs/1399149/+attachment/4274022/+files/2014-12-04_13-08-30.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399149

Title:
  Horizon does not link to an object when assigning a floating IP to a
  load balancer

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When assigning a floating IP to a load balancer, Horizon does not point
  to any object.

  Steps to reproduce:

  Create Pool
  Add VIP to created pool.

  Allocate Floating IP to project
  Associate IP to newly created VIP

  The Instance field is empty.
  When clicking on the link, the Instances page opens and an error is presented.

  Version: Icehouse

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322771] Re: keystone install from source docs missing required steps

2014-12-04 Thread Doug Hellmann
** Changed in: pbr
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1322771

Title:
  keystone install from source docs missing required steps

Status in OpenStack Identity (Keystone):
  Triaged
Status in Python Build Reasonableness:
  Invalid

Bug description:
  The following is completely reproducible using Vagrant and the
  hashicorp/precise64 image.

  Following the install instructions here:
  http://docs.openstack.org/developer/keystone/installing.html
  #installing-from-source is impossible, even after the patch in
  https://bugs.launchpad.net/keystone/+bug/1322735

  There are several issues:

  "python setup.py install" fails with these missing dependencies:
  * missing Python.h
  * missing libxml/xmlversion.h

  Once those dependencies are taken care of by installing python-dev,
  libxml2-dev and libxslt1-dev, you can complete "setup.py install".

  Running "keystone" at that point gets an error because pbr.version is
  not present.

  pbr.version is listed in requirements.txt, but is not installed by
  "setup.py install" for some reason. Installing it resolves this issue.

  Running keystone-manage and keystone-all gives an error because
  repoze.lru is not installed. Repoze.lru is a dependency of Routes,
  which is installed by setup.py. For some reason, when setup.py
  installs Routes, it does not install the dependency. If, AFTER running
  setup.py, you attempt to "pip install -r requirements.txt", it will not
  install repoze.lru because Routes is already installed, and pip
  doesn't detect that the dependency is not installed.

  The following steps work reproducibly to install keystone from source
  on a completely clean hashicorp/precise64:

  # Vagrant setup in an empty directory
  vagrant init hashicorp/precise64
  vagrant up
  vagrant ssh
  # Generic Ubuntu Precise instructions begin here.
  sudo su
  apt-get update
  apt-get install -y git python-setuptools python-dev libxml2-dev libxslt1-dev
  easy_install pip
  cd /root
  git clone https://github.com/openstack/keystone.git
  cd keystone
  pip install -r requirements.txt
  python setup.py install

  At that point all three executables (keystone, keystone-manage, and
  keystone-all) will execute and display their non-configured / help
  message without dependency errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1322771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330771] Re: pbr as run time requirement conflicts with distro packaging

2014-12-04 Thread Doug Hellmann
I think the behaviors described in point 1 are actually pkg_resources,
which isn't a part of pbr. If that's incorrect, please provide more
detail and reopen the bug against pbr.

** Changed in: pbr
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1330771

Title:
  pbr as run time requirement conflicts with distro packaging

Status in OpenStack Identity (Keystone):
  Invalid
Status in Python Build Reasonableness:
  Won't Fix

Bug description:
  Using PBR for development makes sense, but it should not be a run time
  requirement for keystone-all or the other tools.  All it is doing is
  reporting the version of the python library, and that does not require
  any of the rest of PBR.  However, PBR pulls in tools that are
  rightfully build time requirements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1330771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399204] [NEW] AggregateTypeAffinityFilter cannot filter on multiple instance_type values

2014-12-04 Thread sean mooney
Public bug reported:

AggregateTypeAffinityFilter limits instance_type by aggregate

At present it is not possible to specify multiple instance_types for an
Aggregate with this filter.

This prevents operators from creating a single host aggregate for a group of
related flavors.
For example, a host aggregate for all flavors that support hugepages, or a
host aggregate for all flavors that support SSDs.

Without this functionality the operator would have to create one host
aggregate per flavor/instance_type and add the same host to multiple
aggregates.
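
A hedged sketch of the kind of matching that would allow this, assuming the
aggregate's instance_type metadata value may hold a comma-separated list of
flavor names (this is not the current filter code):

    def host_passes(aggregate_values, flavor_name):
        # aggregate_values: the 'instance_type' metadata values collected from
        # the host's aggregates; an empty set means no restriction.
        if not aggregate_values:
            return True
        for value in aggregate_values:
            allowed = [name.strip() for name in value.split(',')]
            if flavor_name in allowed:
                return True
        return False

    # e.g. host_passes({'m1.small,m1.large'}, 'm1.large') -> True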

** Affects: nova
 Importance: Undecided
 Assignee: sean mooney (sean-k-mooney)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399204

Title:
  AggregateTypeAffinityFilter cannot filter on multiple instance_type
  values

Status in OpenStack Compute (Nova):
  New

Bug description:
  AggregateTypeAffinityFilter limits instance_type by aggregate

  At present it is not possible to specify multiple instance_types for
  an Aggregate with this filter.

  This prevents operators from creating a single host aggregate for a group of
  related flavors.
  For example, a host aggregate for all flavors that support hugepages, or a
  host aggregate for all flavors that support SSDs.

  Without this functionality the operator would have to create one host
  aggregate per flavor/instance_type and add the same host to multiple
  aggregates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399219] [NEW] Collision possibility in random string creation for resources names

2014-12-04 Thread Daniel Korn
Public bug reported:

Currently, resource names for the integration tests are generated by
concatenating a random integer from a certain interval to a generic string,
for example: IMAGE_NAME = 'horizonimage' + str(random.randint(0, 1000))

In view of the fact we need unique resource names for the tests, we need
to reduce the possibility of non-trivial failure rates due to
collisions.

One approach, raised in a discussion here:
https://review.openstack.org/#/c/121506/10/openstack_dashboard/test/integration_tests/tests/test_image_create_delete.py
is to implement a method that returns random strings and is used by all tests
(see the sketch below).

Other options:
* IMAGE_NAME = 'horizonimage' + str(uuid.uuid4())
* expanding the interval
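
A possible shared helper along those lines (sketch only; the helper name is
illustrative):

    import uuid

    def generate_resource_name(prefix='horizon'):
        # A uuid4 suffix makes accidental collisions between test runs
        # negligible compared to randint(0, 1000).
        return '%s-%s' % (prefix, uuid.uuid4().hex[:8])

    IMAGE_NAME = generate_resource_name('horizonimage')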

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: collisions integration-tests random-resource-names

** Tags added: collisions integration-tests

** Tags added: random-resource-names

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399219

Title:
  Collision possibility in random string creation for resources names

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently, resource names for the integration tests are generated by
  concatenating a random integer from a certain interval to a generic string,
  for example: IMAGE_NAME = 'horizonimage' + str(random.randint(0, 1000))

  In view of the fact we need unique resource names for the tests, we
  need to reduce the possibility of non-trivial failure rates due to
  collisions.

  One approach, raised in a discussion here:
https://review.openstack.org/#/c/121506/10/openstack_dashboard/test/integration_tests/tests/test_image_create_delete.py
  is to implement a method that returns random strings and is used by all
  tests.

  Other options:
  * IMAGE_NAME = 'horizonimage' + str(uuid.uuid4())
  * expanding the interval

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399244] [NEW] rbd resize revert fails

2014-12-04 Thread Jon Bernard
Public bug reported:

In Ceph CI, the revert-resize server test is failing. It appears that
revert_resize() does not take shared storage into account and deletes
the original volume, which causes the start of the original instance to
fail.

** Affects: nova
 Importance: Undecided
 Assignee: Jon Bernard (jbernard)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jon Bernard (jbernard)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399244

Title:
  rbd resize revert fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Ceph CI, the revert-resize server test is failing. It appears that
  revert_resize() does not take shared storage into account and deletes
  the original volume, which causes the start of the original instance to
  fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399249] [NEW] Neutron openvswitch-agent doesn't recover ports from binding_failed status

2014-12-04 Thread Yair Fried
Public bug reported:

Ports created while neutron-openvswitch-agent is down end up with status DOWN
and "binding:vif_type=binding_failed", which is as it should be. When the
agent is restarted it should be able to recreate the ports according to
the DB, but instead it logs a WARNING and leaves the port with status
DOWN. The only solution is to delete the port and create it again.

From the agent log:
2014-12-04 16:53:00.559 16319 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2dcc9141-7439-450a-bb2a-fe31ab577f47 None] Device 
3dc73917-93b1-4f6d-a2e1-90c74cea6de7 not defined on plugin


Recreation steps:

shut down ovs-agent and wait for neutron to notice:

[root@RHEL7Server ~]# systemctl stop neutron-openvswitch-agent.service 
[root@RHEL7Server ~(keystone_admin)]# neutron agent-list | grep open
| 2d97bbd1-b937-4b19-8205-4167bbcb659d | Open vSwitch agent | node_29 | xxx | True | neutron-openvswitch-agent |

create router and attach it to network
[root@RHEL7Server ~(keystone_admin)]# neutron router-create myrouter --ha False
Created a new router:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| distributed   | False|
| external_gateway_info |  |
| ha| False|
| id| 8210f453-2a17-400e-ae32-74aa1503d0a5 |
| name  | myrouter |
| routes|  |
| status| ACTIVE   |
| tenant_id | 183611eb84204b839e43d97c081973c0 |
+---+--+
[root@RHEL7Server ~(keystone_admin)]# neutron router-interface-add myrouter private
Added interface 3dc73917-93b1-4f6d-a2e1-90c74cea6de7 to router myrouter.
[root@RHEL7Server ~(keystone_admin)]# neutron l3-agent-list-hosting-router myrouter
+--+-++---+
| id   | host| admin_state_up | alive |
+--+-++---+
| 0110d49c-59dd-496c-a2a3-549a2ad4ba4d | node_29 | True   | :-)   |
+--+-++---+

Port will show status DOWN, and "binding_failed"
[root@RHEL7Server ~(keystone_admin)]# neutron port-show 3dc73917-93b1-4f6d-a2e1-90c74cea6de7
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs |                                                                                  |
| binding:host_id       | node_29                                                                          |
| binding:profile       | {}                                                                               |
| binding:vif_details   | {}                                                                               |
| binding:vif_type      | binding_failed                                                                   |
| binding:vnic_type     | normal                                                                           |
| device_id             | 8210f453-2a17-400e-ae32-74aa1503d0a5                                             |
| device_owner          | network:router_interface                                                         |
| extra_dhcp_opts       |                                                                                  |
| fixed_ips             | {"subnet_id": "d8881a14-bd8b-4595-b497-8da6587a46c1", "ip_address": "10.0.0.1"} |
| id                    | 3dc73917-93b1-4f6d-a2e1-90c74cea6de7                                             |
| mac_address           | fa:16:3e:db:0f:9b                                                                |
| name                  |                                                                                  |
| network_id            | 6091abc0-4fdf-402d-aaf0-3a955fabd6b7                                             |
| security_groups       |                                                                                  |
| status                | DOWN                                                                             |
| tenant_id             | 183611eb84204b839e43d97c081973c0                                                 |
+-----------------------+----------------------------------------------------------------------------------+

[Yahoo-eng-team] [Bug 1399254] [NEW] Missing the option to launch an instance with an existing port

2014-12-04 Thread Itzik Brown
Public bug reported:

When launching an instance using the Dashboard, it is only possible to add
virtual interfaces by choosing networks.
The option to launch an instance with an existing port is missing.
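
For comparison, the Compute API already accepts an existing port, e.g. via
python-novaclient (credentials and IDs below are placeholders):

    from novaclient import client

    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')
    nova.servers.create(name='vm-with-existing-port',
                        image='IMAGE_UUID',
                        flavor='FLAVOR_ID',
                        nics=[{'port-id': 'PORT_UUID'}])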

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399254

Title:
  Missing the option to launch an instance with an existing port

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When launching an instance using the Dashboard, it is only possible to add
  virtual interfaces by choosing networks.
  The option to launch an instance with an existing port is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399252] [NEW] Missing port-create in Dashboard

2014-12-04 Thread Itzik Brown
Public bug reported:

Right now there is no option to create a port using the Dashboard.
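
For reference, the operation is available through the Neutron API, e.g. with
python-neutronclient (credentials and the network ID are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone.example.com:5000/v2.0')
    neutron.create_port({'port': {'network_id': 'NETWORK_UUID',
                                  'name': 'manually-created-port'}})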

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399252

Title:
  Missing port-create in Dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Right now there is no option to create a port using the Dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396318] Re: Cannot read text in region selection box

2014-12-04 Thread Chuck Short
** Also affects: horizon (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: horizon (Ubuntu)
   Status: Invalid => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1396318

Title:
  [SRU] Cannot read text in region selection box

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in horizon package in Ubuntu:
  Won't Fix
Status in horizon source package in Trusty:
  In Progress

Bug description:
  [Impact]
  When multiple regions are configured within Horizon, the region selection 
dropdown box in the top right corner of the horizon dashboard is not visible 
due to white text on white background.

  The fix is a minimal update to the ubuntu theme css that fixes the
  colors of the region selection dropdown box.

  [Test Case]
  Steps to reproduce:

  1. Install horizon
  2. Configure to use multiple regions within keystone (or multiple keystones, 
1 per region)
  3. log in
  4. Observe the region selection box in the top right is white on white text.

  Note that additional setup is required to recreate this when deploying
  from the openstack charms. See
  https://bugs.launchpad.net/charms/+source/openstack-
  dashboard/+bug/1398186.

  [Regression Potential]
  Any potential regressions would affect the ubuntu theme of the horizon 
dashboard (background/text colors, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1396318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396318] Re: Cannot read text in region selection box

2014-12-04 Thread Corey Bryant
** Description changed:

- When multiple regions are configured within Horizon, the region
- selection dropdown box in the top right corner of the horizon dashboard
- is not visible due to white text on white background.
+ [Impact]
+ When multiple regions are configured within Horizon, the region selection 
dropdown box in the top right corner of the horizon dashboard is not visible 
due to white text on white background.
  
+ The fix is a minimal update to the ubuntu theme css.
+ 
+ [Test Case]
  Steps to reproduce:
  
  1. Install horizon
  2. Configure to use multiple regions within keystone (or multiple keystones, 
1 per region)
  3. log in
  4. Observe the region selection box in the top right is white on white text.
+ 
+ [Regression Potential]
+ Any potential regressions would affect the ubuntu theme of the horizon 
dashboard (background/text colors, etc).

** Changed in: horizon (Ubuntu)
   Status: In Progress => Fix Committed

** Changed in: horizon (Ubuntu)
   Status: Fix Committed => Invalid

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1396318

Title:
  [SRU] Cannot read text in region selection box

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in horizon package in Ubuntu:
  Won't Fix
Status in horizon source package in Trusty:
  In Progress

Bug description:
  [Impact]
  When multiple regions are configured within Horizon, the region selection 
dropdown box in the top right corner of the horizon dashboard is not visible 
due to white text on white background.

  The fix is a minimal update to the ubuntu theme css that fixes the
  colors of the region selection dropdown box.

  [Test Case]
  Steps to reproduce:

  1. Install horizon
  2. Configure to use multiple regions within keystone (or multiple keystones, 
1 per region)
  3. log in
  4. Observe the region selection box in the top right is white on white text.

  Note that additional setup is required to recreate this when deploying
  from the openstack charms. See
  https://bugs.launchpad.net/charms/+source/openstack-
  dashboard/+bug/1398186.

  [Regression Potential]
  Any potential regressions would affect the ubuntu theme of the horizon 
dashboard (background/text colors, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1396318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394034] Re: No ports available for users without admin role

2014-12-04 Thread Timur Sufiev
Sorry, then we really can't help, because all kinds of weird things
could happen when Dashboard 2014.2 is linked with Neutron 2013.2. Please
try to reproduce it on pure 2014.2 or 2014.1 OpenStack and reopen the
bug if it persists there.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394034

Title:
  No ports available for users without admin role

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When a regular user (without the admin role) tries to associate a Floating IP
  with an instance, the list of ports is empty.
  When the user has the admin role, the problem doesn't appear. I believe the
  regular user does not have access to something that is needed to ensure
  "reachability" of the network from the Floating IP's router.

  This problem does not have anything to do with similar reports related
  to DVR (distributed virtual routers).

  This bug is caused by the fix to bug #1252403. Going one commit before
  that bug's fix was committed solves the problem.

  Going back in RDO packages to openstack-
  dashboard-2014.2-0.2.el7.centos.noarch.rpm (from Sep 16) also solves
  the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399268] [NEW] All integration tests fails on both log_out action in teardown and navigation between pages

2014-12-04 Thread Daniel Korn
Public bug reported:

All tests in gate-horizon-dsvm-integration are currently failing.

From a first look at the logs, it seems they fail on two actions:

1. Trying to perform log out. We already had this problem
(https://bugs.launchpad.net/horizon/+bug/1391890) and it was fixed in
https://review.openstack.org/#/c/135273/. I'm not sure what is causing this
failure again.

2. Trying to navigate to a different page. The first guess was that a
patch that changed the sidebar layout caused it
(https://review.openstack.org/#/c/126289/), but it passed the gate.


( 1 )---
==
ERROR: 
openstack_dashboard.test.integration_tests.tests.test_keypair.TestKeypair.test_keypair
--
_StringException: traceback-1: {{{
Traceback (most recent call last):
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/helpers.py", 
line 65, in tearDown
self.home_pg.log_out()
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/basepage.py",
 line 59, in log_out
self.topbar.user_dropdown_menu.click_on_logout()
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/menus.py",
 line 183, in click_on_logout
self.logout_link.click()
File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py",
 line 65, in click
self._execute(Command.CLICK_ELEMENT)
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/webdriver.py",
 line 105, in _execute
params)
File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py",
 line 385, in _execute
return self._parent.execute(command, params)
File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py",
 line 173, in execute
self.error_handler.check_response(response)
File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py",
 line 166, in check_response
raise exception_class(message, screen, stacktrace)
ElementNotVisibleException: Message: Element is not currently visible 
and so may not be interacted with
Stacktrace:
at fxdriver.preconditions.visible 
(file:///tmp/tmpkH19A8/extensions/fxdri...@googlecode.com/components/command-processor.js:8959:12)
at DelayedCommand.prototype.checkPreconditions_ 
(file:///tmp/tmpkH19A8/extensions/fxdri...@googlecode.com/components/command-processor.js:11618:15)
at DelayedCommand.prototype.executeInternal_/h 
(file:///tmp/tmpkH19A8/extensions/fxdri...@googlecode.com/components/command-processor.js:11635:11)
at fxdriver.Timer.prototype.setTimeout/<.notify 
(file:///tmp/tmpkH19A8/extensions/fxdri...@googlecode.com/components/command-processor.js:548:5)

( 2 )---

Traceback (most recent call last):
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_keypair.py",
 line 26, in test_keypair
keypair_page = 
self.home_pg.go_to_accessandsecurity_keypairspage()
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/navigation.py",
 line 268, in __call__
return Navigation._go_to_page(args[0], self.path, 
self.page_class)
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/navigation.py",
 line 204, in _go_to_page
self._go_to_side_menu_page(path[:self.SIDE_MENU_MAX_LEVEL])
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/navigation.py",
 line 224, in _go_to_side_menu_page
self.navaccordion.click_on_menu_items(*menu_items)
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/menus.py",
 line 116, in click_on_menu_items
self.get_first_level_selected_item)
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/menus.py",
 line 97, in _click_menu_item
self._click_item(text, loc_craft_func)
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/menus.py",
 line 107, in _click_item
item = self._get_element(*item_locator)
File 
"/opt/stack/new/horizon/openstack_dashboard/test/integrati

[Yahoo-eng-team] [Bug 1399280] [NEW] FWaaS extension doesn't register its quota resources

2014-12-04 Thread Ralf Haferkamp
Public bug reported:

This issue is basically the same as
https://bugs.launchpad.net/neutron/+bug/1305957, just for the FWaaS
extension. In short, FWaaS fails to pass the register_quota=True
argument to build_resource_info().
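
A hedged sketch of the change implied above, modeled on the analogous fix for
bug 1305957 (module paths and the surrounding FWaaS extension code may differ
slightly; RESOURCE_ATTRIBUTE_MAP is the extension's own attribute map, and
only the register_quota=True argument is the point):

    from neutron.api.v2 import resource_helper
    from neutron.plugins.common import constants

    # Inside the firewall extension's get_resources():
    plural_mappings = resource_helper.build_plural_mappings(
        {}, RESOURCE_ATTRIBUTE_MAP)
    resources = resource_helper.build_resource_info(plural_mappings,
                                                    RESOURCE_ATTRIBUTE_MAP,
                                                    constants.FIREWALL,
                                                    register_quota=True)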

** Affects: neutron
 Importance: Undecided
 Assignee: Ralf Haferkamp (rhafer)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Ralf Haferkamp (rhafer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399280

Title:
  FWaaS extension doesn't register its quota resources

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  This issue is basically the same as
  https://bugs.launchpad.net/neutron/+bug/1305957, just for the FWaaS
  extension. In short, FWaaS fails to pass the register_quota=True
  argument to build_resource_info().

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399280/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330065] Re: VMWare - Driver does not ignore Datastore in maintenance mode

2014-12-04 Thread Davanum Srinivas (DIMS)
** No longer affects: oslo.vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330065

Title:
  VMWare - Driver does not ignore Datastore in maintenance mode

Status in OpenStack Compute (Nova):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  A datastore can be in maintenance mode. The driver does not ignore it,
  either in stats updates or while spawning instances.

  During stats updates, wrong stats are returned if a datastore is in
  maintenance mode.

  Also, during spawning, if a datastore in maintenance mode gets chosen
  because it has the largest free disk space, the spawn fails.

  The driver should ignore datastores in maintenance mode.
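
  An illustrative filter of the kind needed (the maintenanceMode attribute is
  from the vSphere DatastoreSummary object; this is not the actual driver
  code):

      def usable_datastores(datastores):
          # Skip datastores whose summary reports maintenance mode when
          # computing capacity stats or choosing a datastore for spawning.
          return [ds for ds in datastores
                  if getattr(ds.summary, 'maintenanceMode', 'normal') == 'normal']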

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309753] Re: VMware: datastore_regex not used while sending disk stats

2014-12-04 Thread Davanum Srinivas (DIMS)
** No longer affects: oslo.vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309753

Title:
  VMware: datastore_regex not used while sending disk stats

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  
  VMware VCDriver uses datastore_regex to match datastores (disk abstraction)
  associated with a compute host which can be used for provisioning instances.
  But it does not use datastore_regex while reporting disk stats. As a result,
  when this option is enabled, the resource tracker may see different disk
  usage than what is computed while spawning the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399308] [NEW] Identity panels Status column empty value is not translatable

2014-12-04 Thread Cindy Lu
Public bug reported:

enabled = tables.Column('enabled', verbose_name=_('Enabled'),
                        status=True,
                        status_choices=STATUS_CHOICES,
                        empty_value="False")

"False" is not translatable

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/groups/tables.py#L198
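
One possible fix (a sketch; it mirrors how the other labels in that table are
marked for translation with ugettext_lazy):

    from django.utils.translation import ugettext_lazy as _

    enabled = tables.Column('enabled', verbose_name=_('Enabled'),
                            status=True,
                            status_choices=STATUS_CHOICES,
                            empty_value=_('False'))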

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399308

Title:
  Identity panels Status column empty value is not translatable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  enabled = tables.Column('enabled', verbose_name=_('Enabled'),
                          status=True,
                          status_choices=STATUS_CHOICES,
                          empty_value="False")

  "False" is not translatable

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/groups/tables.py#L198

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2014-12-04 Thread Alan Pevec
** Also affects: cinder/juno
   Importance: Undecided
   Status: New

** Changed in: cinder/juno
   Status: New => Fix Committed

** Changed in: cinder/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Committed
Status in Cinder icehouse series:
  In Progress
Status in Cinder juno series:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance icehouse series:
  New
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron icehouse series:
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Currently reproduced  on Juno milestone 2. but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  Following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10
  in nova/wsgi.py in order to reproduce this problem.
  After you run the program below, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # During this sleep time, check whether the client socket connection
          # is released or not on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
  keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
  is sent and read successfully by the client, you simply have to set keepalive
  to False when you create the wsgi server.
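
  A minimal standalone illustration of that knob with plain eventlet (the
  OpenStack services wrap this, but the parameter is the same):

      import eventlet
      from eventlet import wsgi

      def app(environ, start_response):
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return ['ok\n']

      # keepalive=False makes the server close the client socket after each
      # response, returning the green thread to the pool immediately.
      wsgi.server(eventlet.listen(('0.0.0.0', 8080)), app, keepalive=False)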

  Additional information: By default eventlet passes "Connection: keepalive" if
  keepalive is set to True when a response is sent to the client. But it doesn't
  have the capability to set the timeout and max parameters.
  For example.
  Keep-Alive: timeout=10, max=5

  Note: After we disable keepalive in all the OpenStack API services using the
  wsgi library, it might impact existing applications built with the
  assumption that OpenStack API services use persistent connections. They
  might need to modify their applications if reconnection logic is not in
  place, and they might also see slower performance, as the HTTP connection
  will need to be re-established for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334647] Re: Nova api service doesn't handle SIGHUP signal properly

2014-12-04 Thread Alan Pevec
** Also affects: cinder/juno
   Importance: Undecided
   Status: New

** Changed in: cinder/juno
   Status: New => Fix Committed

** Changed in: cinder/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334647

Title:
  Nova api service doesn't handle SIGHUP signal properly

Status in Cinder:
  Fix Committed
Status in Cinder juno series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Invalid

Bug description:
  When a SIGHUP signal is sent to the nova-api service, it doesn't complete
  processing of all pending requests before terminating all the
  processes.

  Steps to reproduce:

  1. Run nova-api service as a daemon.
  2. Send SIGHUP signal to nova-api service.
 kill -1 

  After getting the SIGHUP signal, all the nova-api processes stop instantly
  without completing existing requests, which might cause failures.
  Ideally, after getting the SIGHUP signal, nova-api should stop accepting
  new requests and wait for existing requests to complete before terminating
  and restarting all nova-api processes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1334647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378215] Re: If db deadlock occurs for some reason while deleting an image, no one can delete the image any more

2014-12-04 Thread Alan Pevec
** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Changed in: glance/juno
   Status: New => Fix Committed

** Changed in: glance/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378215

Title:
  If db deadlock occurs for some reason while deleting an image, no one
  can delete the image any more

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance juno series:
  Fix Committed

Bug description:
  The Glance API returns 500 Internal Server Error if a DB deadlock occurs in
  glance-registry for some reason while deleting an image.
  The image 'status' is set to 'deleted' but 'deleted' is set to False. As
  'deleted' is still False, the image is visible in the image list but it
  cannot be deleted any more.

  If you try to delete this image, it will raise a 404 (Not Found) error
  for the V1 API and a 500 (HTTPInternalServerError) for the V2 API.

  Note:
  To reproduce this issue I've explicitly raised "db_exception.DBDeadlock" 
exception from "_image_child_entry_delete_all" method under 
"\glance\db\sqlalchemy\api.py".

  glance-api.log
  --
  2014-10-06 00:53:10.037 6827 INFO glance.registry.client.v1.client 
[2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 
309c5ff4082c423
  1bcc17d8c55c83997 - - -] Registry client request DELETE 
/images/f9f8a40d-530b-498c-9fbc-86f29da555f4 raised ServerError
  2014-10-06 00:53:10.045 6827 INFO glance.wsgi.server 
[2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 
309c5ff4082c4231bcc17d8c55c83997 - - -] Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 433, 
in handle_one_response
  result = self.application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/osprofiler/web.py", line 106, 
in __call__
  return request.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
748, in __call__
  return self._call_app(env, start_response)
File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
684, in _call_app
  return self._app(env, _fake_start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_respon

[Yahoo-eng-team] [Bug 1387973] Re: Normal user not able to download image if protected property is not associated with the image with restrict-download policy

2014-12-04 Thread Alan Pevec
** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Changed in: glance/juno
   Status: New => Fix Committed

** Changed in: glance/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1387973

Title:
  Normal user not able to download image if protected property is not
  associated with the image with restrict-download policy

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance juno series:
  Fix Committed

Bug description:
  If a restricted download rule is configured in policy.json and an image is
  added without the protected property mentioned in the "restricted" rule, then
  normal users (other than admin) are not able to download the image.

  Steps to reproduce:

  1. Create normal_user with _member_ role using horizon

  2. Configure download rule in policy.json

     "download_image": "role:admin or rule:restricted",
     "restricted": "not ('test_1234':%(test_key)s and role:_member_)",

  3. Restart glance-api service

  4. create image without property 'test_key' with admin user

     i. source devstack/openrc admin admin
     ii. glance image-create
     iii. glance image-update  --name non_protected --disk-format 
qcow2 --container-format bare --is-public True --file /home/openstack/api.log

  5. Try to download the newly created image with normal_user.

     i. source devstack/openrc normal_user admin
     ii. glance image-download 

  It returns a 403 Forbidden response to the user, whereas the admin user can
  download the image successfully.

  The expected behavior is that all users can download the image if the
  restricted property is not added.

  Note:
  https://review.openstack.org/#/c/127923/ 
  The above policy sync patch will solve this issue for Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1387973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1198566] Re: expired image location url cause glance client errors

2014-12-04 Thread Alan Pevec
** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Changed in: glance/juno
   Status: New => Fix Committed

** Changed in: glance/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1198566

Title:
  expired image location url cause glance client errors

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance juno series:
  Fix Committed

Bug description:
  We have a multi-node Openstack cluster running Folsom 2012.2.3 on
  Ubuntu 12.04. A few days ago we added a new compute node, and found
  that we were unable to launch new instances from a pre-existing Ubuntu
  Server 12.04 LTS image stored in glance. Each spawning attempt would
  deposit a glance client exception (shown below) in the compute node's
  nova-compute.log.

  After quite a lot of investigation, I found that the original
  --location URL (used during "glance image-create") of the Ubuntu
  Server image had gone out of date. This was evidently causing a
  BadStoreUri exception on the glance server during instance spawning,
  resulting in a 500 error being returned to our new compute node's
  glance client. I was able to resolve the problem by re-importing an
  Ubuntu Server 12.04 LTS image from a working mirror.

  Improved error logging would have saved us hours of troubleshooting.

  2013-06-27 21:19:24 ERROR nova.compute.manager 
[req-f8d7c23a-e8ad-4059-bea4-4fc588a6afe0 9d8968d3f17f4697aaf923c14651ce7b 
e5fb3c6db0db4e9c86d0301005e2e5bb] [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] Instance failed to spawn
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] Traceback (most recent call last):
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 756, in _spawn
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] block_device_info)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] temp_level, payload)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] self.gen.next()
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] return f(*args, **kw)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1099, in 
spawn
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] admin_pass=admin_password)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1365, in 
_create_image
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] project_id=instance['project_id'])
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 131, 
in cache
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] *args, **kwargs)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 178, 
in create_image
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] prepare_template(target=base, *args, 
**kwargs)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 795, in inner
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] retval = f(*args, **kwargs)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 122, 
in call_if_not_exi

[Yahoo-eng-team] [Bug 1154140] Re: Add excutils.save_and_reraise_exception()

2014-12-04 Thread Doug Hellmann
** Changed in: oslo.messaging
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1154140

Title:
  Add excutils.save_and_reraise_exception()

Status in OpenStack Compute (Nova):
  Incomplete
Status in Messaging API for OpenStack:
  Invalid

Bug description:
  See https://review.openstack.org/23894

   |--LOG.exception()
   | |--LoggerAdapter.exception():logging/__init()__
   | |--self.logger.error()
   | |--self._log()
   | |--self.handle()
   | |--self.callHandlers()
   | |--hdlr.handle()
   | |--Handler.handle():handlers.py
   | |--self.emit()
   | |--self.socket.sendto()
   | |--GreenSocket.sendto():eventlet/greenio.py
   | |--trampoline():hubs/__init__.py
   | |--hub.switch()
   | |--BaseHub.switch():hub.py
   | |-- clear_sys_exc_info()

  When you're using syslog logging, LOG.exception() can cause
  sys.exc_info() to be cleared

  So if you do e.g.

   except Exception:
   LOG.exception(_('in looping call'))
  done.send_exception(*sys.exc_info())
  return

  then you'll find that (with syslog enabled) the second reference to
  sys.exc_info() won't work

  Basically, any time you make a call that can result in a greenlet
  context switch you can find sys.exc_info() has been cleared

  The really nasty thing is this only happens with syslog so you don't
  find it under normal testing

  I'm thinking we need something like:

with excutils.save_and_restore_exception():
LOG.exception(...)
 ctxt.reply(...)

  which would be equivalent to:

except Exception:
try:
with excutils.save_and_reraise_exception():
LOG.exception(...)
except Exception:
ctxt.reply(...)

  however, I'm not sure there's a way of restoring sys.exc_info() that
  will work with python 3 - AFAIK, in python3 exc_info is only valid
  during an exception handler

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1154140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399453] [NEW] Nexus VXLAN gateway: VM with 2 interfaces to the same subnet delete issues

2014-12-04 Thread Danny Choi
Public bug reported:

With Nexus VXLAN gateway, there are delete issues with the last VM that
has 2 interfaces to the same subnet.

1. When one interface is deleted, all the VLAN/VNI mapping
configurations are deleted at the Nexus switch.

2. When the last interface or the VM is deleted, traceback is logged in
screen-q-svc.log.

2014-12-03 18:03:38.433 ERROR neutron.plugins.ml2.managers 
[req-dda788e5-759f-4caf-81f6-a31b43025ede demo 
f0fd7da7d2874c1590a0092aab9014c3] Mechanism driver 'cisco_nexus' failed in 
delete_port_postcommit
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers Traceback (most 
recent call last):
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 299, in 
_call_on_drivers
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py",
 line 400, in delete_port_postcommit
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
self._delete_nve_member) if vxlan_segment else 0
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py",
 line 325, in _port_action_vxlan
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers func(vni, 
device_id, mcast_group, host_id)
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py",
 line 155, in _delete_nve_member
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers vni)
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_network_driver.py",
 line 253, in delete_nve_member
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
self._edit_config(nexus_host, config=confstr)
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_network_driver.py",
 line 80, in _edit_config
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers raise 
cexc.NexusConfigFailed(config=config, exc=e)
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers NexusConfigFailed: 
Failed to configure Nexus: 
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   
<__XML__MODE__exec_configure>
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers nve1
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
<__XML__MODE_if-nve>
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers no 
member vni 9000
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 

2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   

2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers . Reason: ERROR: VNI 
delete validation failed
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers .
2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers 
2014-12-03 18:03:38.435 ERROR neutron.plugins.ml2.plugin 
[req-dda788e5-759f-4caf-81f6-a31b43025ede demo 
f0fd7da7d2874c1590a0092aab9014c3] mechanism_manager.delete_port_postcommit 
failed for port da89ec67-e825-4a52-8dfa-6a2556624a9e

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399453

Title:
  Nexus VXLAN gateway: VM with 2 interfaces to the same subnet delete
  issues

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With Nexus VXLAN gateway, there are delete issues with the last VM
  that has 2 interfaces to the same subnet.

  1. When one interface is deleted, all the VLAN/VNI mapping
  configurations are deleted at the Nexus switch.

  2. When the last interface or the VM is deleted, traceback is logged
  in screen-q-svc.log.

  2014-12-03 18:03:38.433 ERROR neutron.plugins.ml2.managers 
[req-dda788e5-759f-4caf-81f6-a31b43025ede demo 
f0fd7da7d2874c1590a0092aab9014c3] Mechanism driver 'cisco_nexus' failed in 
delete_port_postcommit
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers Traceback (most 
recent call last):
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 299, in 
_call_on_drivers
  2014

[Yahoo-eng-team] [Bug 1399454] [NEW] Nexus VXLAN gateway: 4K VLANs limitation

2014-12-04 Thread Danny Choi
Public bug reported:

With the Nexus VXLAN gateway, each Compute host still has the 4K VLANs
limitation.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399454

Title:
  Nexus VXLAN gateway: 4K VLANs limitation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With the Nexus VXLAN gateway, each Compute host still has the 4K VLANs
  limitation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399462] [NEW] Incorrect iptables INPUT rules on l3-agent for metadata proxy

2014-12-04 Thread Cedric Brandily
Public bug reported:

On the l3-agent, 2 iptables rules are defined  to ensure the metadata proxy is 
reachable from vms on 169.254.169.254:80:
* REDIRECT 169.254.169.254:80 packets to the router on port 9697(metadata proxy 
port)
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 
80 -j REDIRECT --to-ports 9697
* ACCEPT traffic to 127.0.0.1 on port 9697
-A neutron-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j 
ACCEPT

The 2nd rule is invalid because REDIRECT replaces the destination IP with:
 * router ip (the one on the input interface)
 * 127.0.0.1 if the packet is a LOCAL packet (not metadata proxy case).


So the ACCEPT rule never matches ... the metadata proxy is only reachable 
because the INPUT chain policy is ACCEPT.

** Affects: neutron
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399462

Title:
  Incorrect iptables INPUT rules on l3-agent for metadata proxy

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On the l3-agent, 2 iptables rules are defined  to ensure the metadata proxy 
is reachable from vms on 169.254.169.254:80:
  * REDIRECT 169.254.169.254:80 packets to the router on port 9697(metadata 
proxy port)
  -A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp 
--dport 80 -j REDIRECT --to-ports 9697
  * ACCEPT traffic to 127.0.0.1 on port 9697
  -A neutron-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j 
ACCEPT

  The 2nd rule is invalid because REDIRECT replaces the destination IP with:
   * router ip (the one on the input interface)
   * 127.0.0.1 if the packet is a LOCAL packet (not metadata proxy case).

  
  So the ACCEPT rule never matches ... the metadata proxy is only reachable 
because the INPUT chain policy is ACCEPT.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226944] Re: Cinder API v2 support

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1226944

Title:
  Cinder API v2 support

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  When clicking on the volumes tab in both the Admin and the project
  panels I get the following stack trace

  Traceback:
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
get_response
115. response = callback(request, *callback_args, 
**callback_kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
38. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
86. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
54. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
38. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
86. return view_func(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
view
68. return self.dispatch(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
dispatch
86. return handler(request, *args, **kwargs)
  File "/opt/horizon/horizon/tables/views.py" in get
155. handled = self.construct_tables()
  File "/opt/horizon/horizon/tables/views.py" in construct_tables
146. handled = self.handle_table(table)
  File "/opt/horizon/horizon/tables/views.py" in handle_table
118. data = self._get_data_dict()
  File "/opt/horizon/horizon/tables/views.py" in _get_data_dict
44. data.extend(func())
  File "/opt/horizon/openstack_dashboard/dashboards/admin/volumes/views.py" in 
get_volumes_data
48. self._set_id_if_nameless(volumes, instances)
  File "/opt/horizon/openstack_dashboard/dashboards/project/volumes/views.py" 
in _set_id_if_nameless
72. if not volume.display_name:
  File "/usr/local/lib/python2.7/dist-packages/cinderclient/base.py" in 
__getattr__
268. raise AttributeError(k)

  Exception Type: AttributeError at /admin/volumes/
  Exception Value: display_name

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1226944/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384116] Re: Missing borders for "Actions" column in Firefox

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384116

Title:
  Missing borders for "Actions" column in Firefox

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  In Firefox only, some rows are still missing borders in the "Actions"
  column. Moreover, the title row itself still needs to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383916] Re: instance status in instance details screen is not translated

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1383916

Title:
  instance status in instance details screen is not translated

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  In Project/Admin->Compute->Instances->Instance Detail the status is
  reported in English.  This should match the translated status shown in
  the instance table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1383916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382825] Re: jshint networktopolgy missing semicolon

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382825

Title:
  jshint networktopolgy missing semicolon

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  fix
  Running jshint ...
  2014-10-18 15:45:15.907 | 
horizon/static/horizon/js/horizon.networktopology.js: line 552, col 70, Missing 
semicolon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385485] Re: Image metadata dialog has hardcoded segments

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1385485

Title:
  Image metadata dialog has hardcoded segments

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  Admin->Images->Update Metadata.

  Note "Other" and "Filter" are not translatable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1385485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391524] Re: alternate navigation broken

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1391524

Title:
  alternate navigation broken

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  horizon_dashboard_nav relies on menus organized in PanelGroups. When
  not using a PanelGroup, e.g. in the Identity dashboard, the last level of
  navigation is broken.

  This is apparently not an issue with Accordion Navigation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1391524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386727] Re: Cinder API v2 support instance view

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1386727

Title:
  Cinder API v2 support instance view

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  There was a bug report: https://bugs.launchpad.net/bugs/1226944 to fix
  Horizon to communicate with Cinder v2 API.

  But a problem still exists for the instance view.

  If you're using cinder v2 you will get an error for the instance view:

  
https://github.com/openstack/horizon/blob/stable/juno/openstack_dashboard/api/nova.py#L720
  
https://github.com/openstack/horizon/blob/stable/icehouse/openstack_dashboard/api/nova.py#L668

  => should be
  volume.name = volume_data.name
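
  As a minimal illustration (not the Horizon patch itself), the attribute
  difference between the two API versions can be bridged like this; the
  helper name is hypothetical:

    # Sketch: cinder v1 volumes expose 'display_name', v2 volumes expose
    # 'name'; fall back from one to the other instead of assuming v1.
    def volume_display_name(volume_data):
        return getattr(volume_data, 'name',
                       getattr(volume_data, 'display_name', None))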


  Reproduce:

  * Add new cinder endpoint (API v2) 
  * Login to Horizon 
  * Create instance
  * Show instance details => 500


  [Tue Oct 28 12:26:29 2014] [error] Internal Server Error: 
/project/instances/cd38d21d-0281-40cf-b31b-c39f27f62ea8/
  [Tue Oct 28 12:26:29 2014] [error] Traceback (most recent call last):
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 112, in 
get_response
  [Tue Oct 28 12:26:29 2014] [error] response = wrapped_callback(request, 
*callback_args, **callback_kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
  [Tue Oct 28 12:26:29 2014] [error] return view_func(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 54, in dec
  [Tue Oct 28 12:26:29 2014] [error] return view_func(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
  [Tue Oct 28 12:26:29 2014] [error] return view_func(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 69, in 
view
  [Tue Oct 28 12:26:29 2014] [error] return self.dispatch(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 87, in 
dispatch
  [Tue Oct 28 12:26:29 2014] [error] return handler(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 71, in get
  [Tue Oct 28 12:26:29 2014] [error] context = 
self.get_context_data(**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 251, in get_context_data
  [Tue Oct 28 12:26:29 2014] [error] context = super(DetailView, 
self).get_context_data(**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 56, in 
get_context_data
  [Tue Oct 28 12:26:29 2014] [error] exceptions.handle(self.request)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 51, in 
get_context_data
  [Tue Oct 28 12:26:29 2014] [error] tab_group = 
self.get_tabs(self.request, **kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 287, in get_tabs
  [Tue Oct 28 12:26:29 2014] [error] instance = self.get_data()
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/utils/memoized.py", line 90, in 
wrapped
  [Tue Oct 28 12:26:29 2014] [error] value = cache[key] = func(*args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 273, in get_data
  [Tue Oct 28 12:26:29 2014] [error] redirect=redirect)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 261, in get_data
  [Tue Oct 28 12:26:29 2014] [error] instance_id)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py",
 line 668, in instance_volumes_list
  [Tue Oct 28 12:26:29 2014] [error] volume.name = volume_data.display_name
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 271, in 
__getattr__
  [Tue Oct 28 12:26:2

[Yahoo-eng-team] [Bug 1323599] Re: Network topology: some terms still not translatable

2014-12-04 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1323599

Title:
  Network topology: some terms still not translatable

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  Bug 1226159 did a great job to improve the translatability of the
  network topology's popup windows. However there are still words that
  don't show as translated (see screenshot):

   - The resource type, like "instance" or "router" 
   - Buttons, like Terminate instance
   - Perhaps it would also be good to make use of the status translated list 
(see 
https://github.com/openstack/horizon/blob/8314fb1367/openstack_dashboard/dashboards/project/instances/tables.py#L687)
 for the Instance Status

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1323599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397796] Re: alembic v. 0.7.1 will support "remove_fk" and others not expected by heal_script

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397796

Title:
  alembic v. 0.7.1 will support "remove_fk" and others not expected by
  heal_script

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  neutron/db/migration/alembic_migrations/heal_script.py seems to have a
  hardcoded notion of what commands Alembic is prepared to pass within
  the execute_alembic_command() call.   When Alembic 0.7.1 is released,
  the tests in neutron.tests.unit.db.test_migration will fail as
  follows:

  Traceback (most recent call last):
File "neutron/tests/unit/db/test_migration.py", line 194, in 
test_models_sync
  self.db_sync(self.get_engine())
File "neutron/tests/unit/db/test_migration.py", line 136, in db_sync
  migration.do_alembic_command(self.alembic_config, 'upgrade', 'head')
File "neutron/db/migration/cli.py", line 61, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/command.py",
 line 165, in upgrade
  script.run_env()
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/script.py",
 line 382, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/util.py",
 line 241, in load_python_file
  module = load_module_py(module_id, path)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/compat.py",
 line 79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File "neutron/db/migration/alembic_migrations/env.py", line 109, in 

  run_migrations_online()
File "neutron/db/migration/alembic_migrations/env.py", line 100, in 
run_migrations_online
  context.run_migrations()
File "", line 7, in run_migrations
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/environment.py",
 line 742, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/migration.py",
 line 305, in run_migrations
  step.migration_fn(**kw)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py",
 line 32, in upgrade
  heal_script.heal()
File "neutron/db/migration/alembic_migrations/heal_script.py", line 81, 
in heal
  execute_alembic_command(el)
File "neutron/db/migration/alembic_migrations/heal_script.py", line 92, 
in execute_alembic_command
  METHODS[command[0]](*command[1:])
  KeyError: 'remove_fk'
  

  I'll send a review for the obvious fix though I have a suspicion
  there's something more deliberate going on here, so consider this just
  a heads up!
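
  As a rough sketch of the idea (not necessarily the fix that will be
  proposed), the dispatch could tolerate commands it does not know about
  instead of raising KeyError; 'methods' here stands in for the module-level
  METHODS mapping:

    # Sketch: skip alembic autogenerate commands heal_script cannot
    # execute (e.g. 'remove_fk' introduced by newer alembic releases).
    def execute_alembic_command(command, methods):
        handler = methods.get(command[0])
        if handler is None:
            return
        handler(*command[1:])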

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394030] Re: big switch: optimized floating IP calls missing data

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394030

Title:
  big switch: optimized floating IP calls missing data

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The newest version of the backend controller supports a floating IP
  API instead of propagating floating IP operations through full network
  updates. When testing with this new API, we found that the data is
  missing from the body on the Big Switch neutron plugin side so the
  optimized path doesn't work correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396932] Re: The hostname regex pattern doesn't match valid hostnames

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1396932

Title:
  The hostname regex pattern doesn't match valid hostnames

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The regex used to match hostnames is opinionated, and its opinions
  differ from RFC 1123 and RFC 952.

  The following hostnames are valid but will fail to match (a sample
  pattern that accepts them is sketched after the list).

  6952x 
  openstack-1
  a1a
  x.x1x
  example.org.
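
  A minimal sketch of an RFC 1123-style pattern that accepts all of the
  names above (an illustration only, not the regex proposed for the fix):

    import re

    # One label: 1-63 alphanumeric characters, hyphens allowed inside but
    # not at the start or end; a trailing dot marks a fully-qualified name.
    LABEL = r'[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?'
    HOSTNAME = re.compile(r'^(?:' + LABEL + r'\.)*' + LABEL + r'\.?$')

    for name in ('6952x', 'openstack-1', 'a1a', 'x.x1x', 'example.org.'):
        assert HOSTNAME.match(name), name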

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1396932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391256] Re: rootwrap config files contain reference to deleted quantum binaries

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391256

Title:
  rootwrap config files contain reference to deleted quantum binaries

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The dhcp and l3 rootwrap filters contain a reference to the
  quantum-ns-metadata-proxy binary, which has been deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393362] Re: linuxbridge agent is using too much memory

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393362

Title:
  linuxbridge agent is using too much memory

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  When vxlan is configured:

  $ ps aux | grep linuxbridge
  vagrant  21051  3.2 28.9 504764 433644 pts/3   S+   09:08   0:02 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  
  A list with over 16 million numbers is created here:

   for segmentation_id in range(1, constants.MAX_VXLAN_VNI + 1):

  
https://github.com/openstack/neutron/blob/b5859998bc662569fee4b34fa079b4c37744de2c/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py#L526

  and does not seem to be garbage collected for some reason.

  Using xrange instead:

  $ ps -aux | grep linuxb
  vagrant   7397  0.1  0.9 106412 33236 pts/10   S+   09:19   0:05 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
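
  A minimal illustration of the difference (Python 2, matching the agent
  code; the VNI maximum is hard-coded here only for the example):

    MAX_VXLAN_VNI = 16777215  # stands in for constants.MAX_VXLAN_VNI

    # range() would materialize a list of ~16.7 million ints up front;
    # xrange() yields the same values lazily in constant memory.
    for segmentation_id in xrange(1, MAX_VXLAN_VNI + 1):
        pass  # per-VNI work happens here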

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393435] Re: Subnet delete for IPv6 SLAAC should not require prior port disassoc

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393435

Title:
  Subnet delete for IPv6 SLAAC should not require prior port disassoc

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  With the current Neutron implementation, a subnet cannot be deleted
  until all associated IP addresses have been removed from ports (via
  port update) or the associated ports/VMs have been deleted.   

  In the case of SLAAC-enabled subnets, however, it's not feasible to
  require removal of SLAAC-generated addresses individually from each
  associated port before deleting a subnet because of the multicast
  nature of RA messages. For SLAAC-enabled subnets, the processing of
  subnet delete requests needs to be changed so that such a subnet can be
  deleted even while ports still exist on it, with each of those ports
  automatically disassociated from its SLAAC IP address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384555] Re: SQL error during alembic.migration when populating Neutron database on MariaDB 10.0

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384555

Title:
  SQL error during alembic.migration when populating Neutron database on
  MariaDB 10.0

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New
Status in neutron package in Ubuntu:
  New

Bug description:
  On a fresh installation of Juno, it seems that the database is not
  being populated correctly. This is the output of the log (I also
  demonstrated that the DB had no tables to begin with):

  MariaDB [(none)]> use neutron
  Database changed
  MariaDB [neutron]> show tables;
  Empty set (0.00 sec)

  MariaDB [neutron]> quit
  Bye
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini current
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  Current revision for mysql://neutron:X@10.10.10.1/neutron: None
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini upgrade head
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade None -> havana, havana_initial
  INFO  [alembic.migration] Running upgrade havana -> e197124d4b9, add unique 
constraint to members
  INFO  [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4, Add a 
unique constraint on (agent_type, host) columns to prevent a race condition 
when an agent entry is 'upserted'.
  INFO  [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a, 
nsx_mappings
  INFO  [alembic.migration] Running upgrade 50e86cb2637a -> 1421183d533f, NSX 
DHCP/metadata support
  INFO  [alembic.migration] Running upgrade 1421183d533f -> 3d3cb89d84ee, 
nsx_switch_mappings
  INFO  [alembic.migration] Running upgrade 3d3cb89d84ee -> 4ca36cfc898c, 
nsx_router_mappings
  INFO  [alembic.migration] Running upgrade 4ca36cfc898c -> 27cc183af192, 
ml2_vnic_type
  INFO  [alembic.migration] Running upgrade 27cc183af192 -> 50d5ba354c23, ml2 
binding:vif_details
  INFO  [alembic.migration] Running upgrade 50d5ba354c23 -> 157a5d299379, ml2 
binding:profile
  INFO  [alembic.migration] Running upgrade 157a5d299379 -> 3d2585038b95, 
VMware NSX rebranding
  INFO  [alembic.migration] Running upgrade 3d2585038b95 -> abc88c33f74f, lb 
stats
  INFO  [alembic.migration] Running upgrade abc88c33f74f -> 1b2580001654, 
nsx_sec_group_mapping
  INFO  [alembic.migration] Running upgrade 1b2580001654 -> e766b19a3bb, 
nuage_initial
  INFO  [alembic.migration] Running upgrade e766b19a3bb -> 2eeaf963a447, 
floatingip_status
  INFO  [alembic.migration] Running upgrade 2eeaf963a447 -> 492a106273f8, 
Brocade ML2 Mech. Driver
  INFO  [alembic.migration] Running upgrade 492a106273f8 -> 24c7ea5160d7, Cisco 
CSR VPNaaS
  INFO  [alembic.migration] Running upgrade 24c7ea5160d7 -> 81c553f3776c, 
bsn_consistencyhashes
  INFO  [alembic.migration] Running upgrade 81c553f3776c -> 117643811bca, nec: 
delete old ofc mapping tables
  INFO  [alembic.migration] Running upgrade 117643811bca -> 19180cf98af6, 
nsx_gw_devices
  INFO  [alembic.migration] Running upgrade 19180cf98af6 -> 33dd0a9fa487, 
embrane_lbaas_driver
  INFO  [alembic.migration] Running upgrade 33dd0a9fa487 -> 2447ad0e9585, Add 
IPv6 Subnet properties
  INFO  [alembic.migration] Running upgrade 2447ad0e9585 -> 538732fa21e1, NEC 
Rename quantum_id to neutron_id
  INFO  [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051, n1kv 
segment allocs for cisco n1kv plugin
  INFO  [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse, icehouse
  INFO  [alembic.migration] Running upgrade icehouse -> 54f7549a0e5f, 
set_not_null_peer_address
  INFO  [alembic.migration] Running upgrade 54f7549a0e5f -> 1e5dd1d09b22, 
set_not_null_fields_lb_stats
  INFO  [alembic.migration] Running upgrade 1e5dd1d09b22 -> b65aa907aec, 
set_length_of_protocol_field
  INFO  [alembic.migration] Running upgrade b65aa907aec -> 33c3db036fe4, 
set_length_of_description_field_metering
  INFO  [alembic.migration] Running upgrade 33c3db036fe4 -> 4eca4a84f08a, 
Remove ML2 Cisco Credentials DB
  INFO  [alembic.migration] Running upgrade 4eca4a84f08a -> d06e871c0d5, 
set_admin_state_up_not_null_ml2
  INFO  [alembic.migration] Running upgrade d06e871c0d5 -> 6be312499f9, 
set_not_null_vlan_id_cisco
  INFO  [alembic.migration] Running upgrade 6be312499f9 -> 1b837a7125a9, Cisco 
APIC Mechanism Driver
  INFO  [alembic.migration] Running upgrade 1b837a7125a9 -> 10cd28e692e9, 
nuage_extraroute
  INFO  [alembic.migration] Running upgrade 10cd28e692e9 -> 2db5203cb7a9, 
nuage_floatingip
  INFO  [alembic.migration] Running upgrade 2db5203cb7a9 -> 5446f2a45467, 
set_server_default
 

[Yahoo-eng-team] [Bug 1386932] Re: context.elevated: copy.copy causes admin role leak

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386932

Title:
  context.elevated: copy.copy causes admin role leak

Status in Cinder:
  Fix Committed
Status in Manila:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  In neutron/context.py,

  ```
  context = copy.copy(self)
  context.is_admin = True

  if 'admin' not in [x.lower() for x in context.roles]:
  context.roles.append('admin')
  ```

  copy.copy should be replaced by copy.deepcopy such that the list
  reference is not shared between objects. From my cursory search on
  github this also affects cinder, gantt, nova, neutron, and manila.
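
  A small self-contained sketch of the leak and the deepcopy remedy (an
  illustration, not the exact neutron code):

    import copy

    class Ctx(object):
        def __init__(self, roles):
            self.roles = roles
            self.is_admin = False

        def elevated(self):
            # copy.copy() would share self.roles with the new object, so
            # appending 'admin' below would leak into the original context;
            # deepcopy gives the elevated copy its own list.
            context = copy.deepcopy(self)
            context.is_admin = True
            if 'admin' not in [x.lower() for x in context.roles]:
                context.roles.append('admin')
            return context

    ctx = Ctx(['member'])
    ctx.elevated()
    assert 'admin' not in ctx.roles  # original roles remain untouched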

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1386932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388716] Re: User cannot create HA L3 Router

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388716

Title:
  User cannot create HA L3 Router

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  Currently, after modifying the policy.json a standard user cannot
  create a HA L3 router.

  This is caused by neutron attempting to create a new network without a tenant 
under the user's context.
  All other operations with tenant-less owners performed during the creation 
of the router complete successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1388716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391326] Re: Remove openvswitch core plugin entry point

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391326

Title:
  Remove openvswitch core plugin entry point

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The openvswitch core plugin has been removed[1] from the neutron tree
  but not its entry point.

  setup.cfg:

  neutron.core_plugins =
openvswitch = 
neutron.plugins.openvswitch.ovs_neutron_plugin:OVSNeutronPluginV2

  
  [1] https://bugs.launchpad.net/neutron/+bug/1323729

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381886] Re: nova list show incorrect when neutron re-assign floatingip

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381886

Title:
  nova list show incorrect when neutron re-assign floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  Boot several instances and create a floating IP. When the floating IP is 
re-associated to multiple instances in turn, nova list will show an incorrect 
result.
  >>>neutron floatingip-associate floatingip-id instance0-pord-id
  >>>neutron floatingip-associate floatingip-id instance1-port-id
  >>>neutron floatingip-associate floatingip-id instance2-port-id
  >>>nova list
  (nova list result will be like:)
  --
  instance0  fixedip0,  floatingip
  instance1  fixedip1,  floatingip
  instance2  fixedip2,  floatingip

  instance0, 1 and 2 all appear to have the floating IP, but running "neutron 
floatingip-list" shows it is only bound to instance2.
  Another observation is that after some time (half a minute, or longer), "nova 
list" shows the correct result.
  ---
  instance0  fixedip0
  instance1  fixedip1
  instance2  fixedip2,  floatingip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378508] Re: KeyError in DHCP RPC when port_update happens.- this is seen when a delete_port event occurs

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378508

Title:
  KeyError in DHCP RPC when port_update happens.- this is seen when a
  delete_port event occurs

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  When there is a delete_port event, we occasionally see a TRACE in the
  dhcp_rpc.py file.

  2014-10-07 12:31:39.803 DEBUG neutron.api.rpc.handlers.dhcp_rpc 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Update dhcp port {u'port': 
{u'network_id': u'12548499-8387-480e-b29c-625dbf320ecf', u'fixed_ips': 
[{u'subnet_id': u'88031ffe-9149-4e96-a022-65468f6bcc0e'}]}} from ubuntu. from 
(pid=4414) update_dhcp_port 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py:290
  2014-10-07 12:31:39.803 DEBUG neutron.openstack.common.lockutils 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Got semaphore "db-access" 
from (pid=4414) lock 
/opt/stack/neutron/neutron/openstack/common/lockutils.py:168
  2014-10-07 12:31:39.832 ERROR oslo.messaging.rpc.dispatcher 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Exception during message 
handling: 'network_id'
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 294, in 
update_dhcp_port
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 'update_port')
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 81, in 
_port_action
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher net_id = 
port['port']['network_id']
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher KeyError: 
'network_id'
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 
  2014-10-07 12:31:39.833 ERROR oslo.messaging._drivers.common 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Returning exception 
'network_id' to caller
  2014-10-07 12:31:39.833 ERROR oslo.messaging._drivers.common 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] ['Traceback (most recent 
call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 294, in 
update_dhcp_port\n\'update_port\')\n', '  File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 81, in 
_port_action\nnet_id = port[\'port\'][\'network_id\']\n', "KeyError: 
'network_id'\n"]
  2014-10-07 12:31:39.839 DEBUG neutron.context 
[req-7d40234b-6e11-4645-9bab-8f9958df5064 None None] Arguments dropped when 
creating context: {u'project_name': None, u'tenant': None} from (pid=4414) 
__init__ /opt/stack/neutron/neutron/context.py:83
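
  A minimal sketch of the kind of guard _port_action could apply, assuming it
  can fall back to the port record already stored in the database when the RPC
  payload carries only fixed_ips; the helper below is illustrative and not the
  actual Neutron fix:

      # Hedged sketch: tolerate update payloads without 'network_id'.
      def _network_id_for(plugin, context, port_id, port_payload):
          network_id = port_payload['port'].get('network_id')
          if network_id is None:
              # The payload only carried fixed_ips; read the network from
              # the existing port record instead of raising KeyError.
              db_port = plugin.get_port(context, port_id)
              network_id = db_port['network_id']
          return network_id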

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382562] Re: security groups remote_group fails with CIDR in address pairs

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382562

Title:
  security groups remote_group fails with CIDR in address pairs

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Add a CIDR to allowed address pairs of a host. RPC calls from the
  agents will run into this issue now when retrieving the security group
  members' IPs. I haven't confirmed because I came across this working
  on other code, but I think this may stop all members of the security
  groups referencing that group from getting their rules over the RPC
  channel.

  
File "neutron/api/rpc/handlers/securitygroups_rpc.py", line 75, in 
security_group_info_for_devices
  return self.plugin.security_group_info_for_ports(context, ports)
File "neutron/db/securitygroups_rpc_base.py", line 202, in 
security_group_info_for_ports
  return self._get_security_group_member_ips(context, sg_info)
File "neutron/db/securitygroups_rpc_base.py", line 209, in 
_get_security_group_member_ips
  ethertype = 'IPv%d' % netaddr.IPAddress(ip).version
File 
"/home/administrator/code/neutron/.tox/py27/local/lib/python2.7/site-packages/netaddr/ip/__init__.py",
 line 281, in __init__
  % self.__class__.__name__)
  ValueError: IPAddress() does not support netmasks or subnet prefixes! See 
documentation for details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379609] Re: Cisco N1kv: Fix add-tenant in update network profile

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379609

Title:
  Cisco N1kv: Fix add-tenant in update network profile

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  During cisco-network-profile-update, if a tenant id is being added to the 
network profile, the current behavior is to remove all the tenant-network 
profile bindings and add the new list of tenants. This works well with horizon 
since all the existing tenant UUIDs, along with the new tenant id, are passed 
during update network profile.
  If you try to update a network profile and add a new tenant to the network
  profile via CLI, this will replace the existing tenant-network profile
  bindings and add only the new one.

  Expected behavior is to not delete the existing tenant bindings and
  instead only add new tenants to the list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384487] Re: big switch server manager uses SSLv3

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384487

Title:
  big switch server manager uses SSLv3

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The communication with the backend is done using the default protocol
  of ssl.wrap_socket, which is SSLv3. This protocol is vulnerable to the
  POODLE attack.
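
  A minimal sketch of the mitigation, assuming an explicit ssl_version is
  passed when wrapping the socket (the exact protocol constant to pick depends
  on the Python build; PROTOCOL_TLSv1 here is an assumption):

      import socket
      import ssl

      # Hedged sketch: do not rely on ssl.wrap_socket()'s default protocol.
      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      wrapped = ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLSv1)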

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379510] Re: Big Switch: sync is not retried after failure

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379510

Title:
  Big Switch: sync is not retried after failure

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  If the topology sync fails, no other sync attempts will be made
  because the server manager clears the hash from the DB before the sync
  operation. It shouldn't do this because the backend ignores the hash
  on a sync anyway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382076] Re: Can not add router interface to SLAAC network

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382076

Title:
  Can not add router interface to SLAAC network

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  Looks like after the resolution of
  https://bugs.launchpad.net/neutron/+bug/1330826 it is now impossible to
  connect a router to a subnet with SLAAC addressing.

  Steps to reproduce:

  $ neutron net-create netto
  $ neutron subnet-create --ip_version 6 --ipv6-address-mode slaac 
--ipv6-ra-mode slaac netto 2021::/64
  $ neutron router-create netrouter
  $ neutron router-interface-add {router_id} {subnet_id}

  The error is:
  Invalid input for operation: IPv6 address 2021::1 can not be directly 
assigned to a port on subnet 8cc737a7-bac1-4fbc-b03d-dfdac7194c08 with slaac 
address mode.

  ** The same behaviour occurs if you set the gateway explicitly to a fixed IP
  address like 2022::7:
  $ neutron subnet-create --ip_version 6 --gateway 2022::7 --ipv6-address-mode 
slaac --ipv6-ra-mode slaac netto 2022::/64
  $ neutron router-interface-add  {router_id} {subnet_id}
  Invalid input for operation: IPv6 address 2022::7 can not be directly 
assigned to a port on subnet f4ebf914-9749-49e4-9498-5c10c7bf9a5d with slaac 
address mode.

  *** The same behaviour if we use dhcpv6-stateless instead of SLAAC.

  1. It should be possible to add a port with a fixed IP to SLAAC/stateless
  networks.
  2. When a router adds its interface to a SLAAC subnet, it should receive its
  own SLAAC address by default if a fixed IP address is not specified
  explicitly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377350] Re: BSN: inconsistency when backend missing during delete

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377350

Title:
  BSN: inconsistency when backend missing during delete

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  When an object is deleted in ML2 and a driver fails in post-commit, there
  is no retry mechanism to delete that object through the driver at a later
  time.[1] This means that objects deleted while there is no connectivity to
  the backend controller will never be deleted until another event causes a
  synchronization.


  1.
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1039

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323729] Re: Remove Open vSwitch and Linuxbridge plugins from the Neutron tree

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323729

Title:
  Remove Open vSwitch and Linuxbridge plugins from the Neutron tree

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  New
Status in neutron package in Ubuntu:
  Fix Released

Bug description:
  This bug will track the removal of the Open vSwitch and Linuxbridge
  plugins from the Neutron source tree. These were deprecated in
  Icehouse and will be removed before Juno releases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1323729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330826] Re: Neutron network:dhcp port is not assigned EUI64 IPv6 address for SLAAC subnet

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330826

Title:
  Neutron network:dhcp port is not assigned EUI64 IPv6 address for SLAAC
  subnet

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  In an IPv6 subnet which has ipv6_address_mode set to slaac or 
dhcpv6-stateless, Neutron should use EUI-64 address assignment for all the 
addresses. Also if a fixed IP address is specified for such a subnet, we should 
report an appropriate error message during port creation or port update 
operation. 
   
  A simple scenario to reproduce this issue...

  #As an admin user, create a provider network and associate an IPv4 and IPv6 
subnet.
  cd ~/devstack
  source openrc admin admin
  neutron net-create N-ProviderNet --provider:physical_network=ipv6-net 
--provider:network_type=flat --shared
  neutron subnet-create --name N-ProviderSubnet N-ProviderNet 20.1.1.0/24  
--gateway 20.1.1.1 --allocation-pool start=20.1.1.100,end=20.1.1.150
  neutron subnet-create --name N-ProviderSubnetIPv6 --ip_version 6 
--ipv6-address-mode slaac --gateway fe80::689d:41ff:fe20:44ca N-ProviderNet 
2001:1:2:3::/64

  As a normal tenant, launch a VM with the provider net-id. You can see
  that the IP address assigned to the dhcp port is "2001:1:2:3::1", which is
  not an EUI-64 based address.

  sridhar@ControllerNode:~/devstack$ neutron port-list -F mac_address -F 
fixed_ips
  
+---+---+
  | mac_address   | fixed_ips   
  |
  
+---+---+
  | fa:16:3e:6a:db:6f | {"subnet_id": "61d2661d-22a0-449c-8823-b4d781515f66", 
"ip_address": "172.24.4.2"} |
  | fa:16:3e:54:56:13 | {"subnet_id": "3e3487de-036c-4ab7-ba3f-c6b5db041fb2", 
"ip_address": "20.1.1.101"} |
  |   | {"subnet_id": "716234df-1f46-434c-be48-d976a86438d6", 
"ip_address": "2001:1:2:3::1"}  |
  | fa:16:3e:dd:e9:82 | {"subnet_id": "61d2661d-22a0-449c-8823-b4d781515f66", 
"ip_address": "172.24.4.4"} |
  | fa:16:3e:52:1f:43 | {"subnet_id": "fbad7350-83c4-4cad-aa95-fecac232cea1", 
"ip_address": "10.0.0.101"} |
  | fa:16:3e:8a:f0:b6 | {"subnet_id": "61d2661d-22a0-449c-8823-b4d781515f66", 
"ip_address": "172.24.4.3"} |
  | fa:16:3e:02:d2:50 | {"subnet_id": "fbad7350-83c4-4cad-aa95-fecac232cea1", 
"ip_address": "10.0.0.1"}   |
  | fa:16:3e:45:5c:00 | {"subnet_id": "3e3487de-036c-4ab7-ba3f-c6b5db041fb2", 
"ip_address": "20.1.1.102"} |
  |   | {"subnet_id": "716234df-1f46-434c-be48-d976a86438d6", 
"ip_address": "2001:1:2:3:f816:3eff:fe45:5c00"} |
  
+---+---+

  sridhar@ControllerNode:~/devstack$ sudo ip netns exec 
qdhcp-93093763-bc7d-4be4-91ad-0ef9ba69273c ifconfig
  tap4828cfbd-fe Link encap:Ethernet  HWaddr fa:16:3e:54:56:13
    inet addr:20.1.1.101  Bcast:20.1.1.255  Mask:255.255.255.0
    inet6 addr: 2001:1:2:3:f816:3eff:fe54:5613/64 Scope:Global
    inet6 addr: 2001:1:2:3::1/64 Scope:Global
    inet6 addr: fe80::f816:3eff:fe54:5613/64 Scope:Link
    UP BROADCAST RUNNING  MTU:1500  Metric:1
    RX packets:337 errors:0 dropped:0 overruns:0 frame:0
    TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:37048 (37.0 KB)  TX bytes:3936 (3.9 KB)
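
  For reference, a small self-contained sketch (not Neutron's code) of how an
  EUI-64 address is derived from a prefix and a MAC with netaddr; it
  reproduces the address expected for the dhcp port above:

      import netaddr

      def eui64_address(prefix, mac):
          # Build the EUI-64 interface identifier from the MAC, flip the
          # universal/local bit (RFC 4291), then combine it with the prefix.
          iid = netaddr.EUI(mac).eui64().value ^ (1 << 57)
          network = netaddr.IPNetwork(prefix)
          return str(netaddr.IPAddress(network.value + iid, 6))

      print(eui64_address('2001:1:2:3::/64', 'fa:16:3e:54:56:13'))
      # 2001:1:2:3:f816:3eff:fe54:5613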

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367500] Re: IPv6 network doesn't create namespace, dhcp port

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367500

Title:
  IPv6 network doesn't create namespace, dhcp port

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  IPv6 networking has changed in recent commits.
  Create a network and an IPv6 subnet with default settings, then create a port
  in the network: no namespace is created and no DHCP port is created in the
  subnet, although the port does get an IP address assigned.
  IPv4 networking continues to work as required.

  $ neutron net-create netto
  Created a new network:
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | id  | 849b4dbf-0914-4cfb-956b-e0cc5d8054ab |
  | name| netto|
  | router:external | False|
  | shared  | False|
  | status  | ACTIVE   |
  | subnets |  |
  | tenant_id   | 5664b23312504826818c9cb130a9a02f |
  +-+--+

  $ neutron subnet-create --ip-version 6 netto 2011::/64
  Created a new subnet:
  
+---+--+
  | Field | Value   
 |
  
+---+--+
  | allocation_pools  | {"start": "2011::2", "end": 
"2011:::::fffe"} |
  | cidr  | 2011::/64   
 |
  | dns_nameservers   | 
 |
  | enable_dhcp   | True
 |
  | gateway_ip| 2011::1 
 |
  | host_routes   | 
 |
  | id| e10300d1-194f-4712-b2fc-2107ac3fe909
 |
  | ip_version| 6   
 |
  | ipv6_address_mode | 
 |
  | ipv6_ra_mode  | 
 |
  | name  | 
 |
  | network_id| 849b4dbf-0914-4cfb-956b-e0cc5d8054ab
 |
  | tenant_id | 5664b23312504826818c9cb130a9a02f
 |
  
+---+--+

  $ neutron port-create netto
  Created a new port:
  
+---++
  | Field | Value   
   |
  
+---++
  | admin_state_up| True
   |
  | allowed_address_pairs | 
   |
  | binding:vnic_type | normal  
   |
  | device_id | 
   |
  | device_owner  | 
   |
  | fixed_ips | {"subnet_id": 
"e10300d1-194f-4712-b2fc-2107ac3fe909", "ip_address": "2011::2"} |
  | id| 175eaa91-441e-48df-9267-bc7fc808dce8
   |
  | mac_address   | fa:16:3e:26:51:79   
   |
  | name  | 
   |
  | network_id| 849b4dbf-0914-4cfb-956b-e0cc5d8054ab
   |
  | security_groups   | c7756502-5eda-4f43-9977-21cfb73b4d4e
   |
  | status| DOWN
   |
  | tenant_id | 5664b23312504826818c9cb130a9a02f
   |
  
+---++

  $ neutron port-list | grep e10300

[Yahoo-eng-team] [Bug 1365806] Re: Noopfirewall driver or security group disabled should avoid impose security group related calls to Neutron server

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365806

Title:
  Noopfirewall driver or security group disabled should avoid impose
  security group related calls to Neutron server

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  With the openvswitch neutron agent, during the daemon loop the
  setup_port_filters phase calls the 'security_group_rules_for_devices'
  RPC method on the Neutron server.

  This operation is very time consuming and is a significant performance
  bottleneck, as it includes port, rule and network queries as well as
  construction of a huge security groups dict message. The message is very
  large, and building it occupies a lot of the Neutron server's CPU. In
  cases where the number of VMs per host reaches 700, the Neutron server is
  so busy building these messages that it cannot do anything else, which
  can lead to message queue connection timeouts and cause the queue to
  disconnect its consumers. As a result the Neutron server stops
  functioning, both for deployments and for API calls.

  When the Noopfirewall driver is used or security groups are disabled,
  this operation should be avoided, because the reply message is never used
  by the Noopfirewall driver (its methods are no-ops):

      with self.firewall.defer_apply():
          for device in devices.values():
              LOG.debug(_("Update port filter for %s"), device['device'])
              self.firewall.update_port_filter(device)
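
  A hedged sketch (illustrative names, not the actual patch) of the kind of
  guard that avoids the round-trip when the no-op driver is configured:

      # Skip the expensive security group RPC when the reply would never
      # be used by the firewall driver.
      def setup_port_filters(agent, device_ids):
          if agent.firewall.__class__.__name__ == 'NoopFirewallDriver':
              return
          rules = agent.plugin_rpc.security_group_rules_for_devices(
              agent.context, list(device_ids))
          for device in rules.values():
              agent.firewall.update_port_filter(device)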

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374556] Re: SQL lookups to get port details should be batched

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374556

Title:
  SQL lookups to get port details should be batched

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The RPC handler for looking up security group details for ports does
  it one port at a time, which means an individual SQL query with a join
  for every port on a compute node, which could be 100+ in a heavily
  subscribed environment.
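
  A hedged SQLAlchemy sketch of the batched lookup (model and relationship
  names are assumptions): fetch all ports of interest in one query with an IN
  clause instead of one query per port:

      from sqlalchemy.orm import joinedload

      def ports_with_security_groups(session, Port, port_ids):
          # Single round-trip for the whole compute node.
          return (session.query(Port)
                  .options(joinedload(Port.security_groups))
                  .filter(Port.id.in_(port_ids))
                  .all())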

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336624] Re: Libvirt driver cannot avoid ovs_hybrid

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336624

Title:
  Libvirt driver cannot avoid ovs_hybrid

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  This bug is related to Nova and Neutron.

  The libvirt driver cannot avoid ovs_hybrid plugging even if
  NoopFirewallDriver is selected, when using LibvirtGenericVIFDriver in Nova
  and ML2+OVS in Neutron.

  Since Nova follows "binding:vif_details" from Neutron [1], that is the
  intended behavior. The OVS mechanism driver in Neutron always returns the
  following vif_details:

      vif_details: {
        "port_filter": true,
        "ovs_hybrid_plug": true,
      }

  So Neutron is the right place to configure the avoidance of ovs_hybrid
  plugging. I think we can set ovs_hybrid_plug=False in the OVS mechanism
  driver if security groups are disabled.

  [1] https://review.openstack.org/#/c/83190/
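
  A hedged sketch of how the mechanism driver could populate vif_details based
  on whether an iptables-based firewall is actually in use (the flag name is
  an assumption):

      # Only request hybrid (veth + Linux bridge) plugging when the
      # security group firewall needs it.
      def build_vif_details(sg_firewall_enabled):
          return {
              'port_filter': sg_firewall_enabled,
              'ovs_hybrid_plug': sg_firewall_enabled,
          }

      print(build_vif_details(False))
      # {'port_filter': False, 'ovs_hybrid_plug': False}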

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373547] Re: Cisco N1kv: Remove unnecessary REST call to delete VM network on controller

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373547

Title:
  Cisco N1kv: Remove unnecessary REST call to delete VM network on
  controller

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  Remove the REST call to delete the VM network on the controller and ensure
  that the database remains consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373851] Re: security groups db queries load excessive data

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373851

Title:
  security groups db queries load excessive data

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The security groups db queries are loading extra data from the ports
  table that is unnecessarily hindering performance.
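
  A hedged SQLAlchemy sketch of the idea: select only the columns the RPC
  handler needs instead of loading full Port rows (model and column names are
  assumptions):

      def sg_port_bindings(session, Binding, Port):
          # Return (security_group_id, port_id, mac_address) tuples only.
          return (session.query(Binding.security_group_id,
                                Port.id, Port.mac_address)
                  .join(Port, Port.id == Binding.port_id)
                  .all())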

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375698] Re: mlnx agent throws exception - "unbound method sleep()"

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375698

Title:
  mlnx agent throws exception - "unbound method sleep()"

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  Traceback:
  2014-09-30 13:28:53.603 529 DEBUG neutron.plugins.mlnx.agent.utils 
[req-a040c7ec-9a06-4f85-9420-60d6c4ca376e None] get_attached_vnics 
get_attached_vnics /opt/stack/neutron/neutron/plugins/mlnx/agent/utils.py:81
  2014-09-30 13:28:56.608 529 DEBUG neutron.plugins.mlnx.common.comm_utils 
[req-a040c7ec-9a06-4f85-9420-60d6c4ca376e None] Request timeout - call again 
after 3 seconds decorated 
/opt/stack/neutron/neutron/plugins/mlnx/common/comm_utils.py:58
  2014-09-30 13:28:56.608 529 CRITICAL neutron 
[req-a040c7ec-9a06-4f85-9420-60d6c4ca376e None] TypeError: unbound method 
sleep() must be called with RetryDecorator instance as first argument (got int 
instance instead)
  2014-09-30 13:28:56.608 529 TRACE neutron Traceback (most recent call last):
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 426, in 
  2014-09-30 13:28:56.608 529 TRACE neutron main()
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 421, in main
  2014-09-30 13:28:56.608 529 TRACE neutron agent.daemon_loop()
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 373, in daemon_loop
  2014-09-30 13:28:56.608 529 TRACE neutron port_info = 
self.scan_ports(previous=port_info, sync=sync)
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 247, in scan_ports
  2014-09-30 13:28:56.608 529 TRACE neutron cur_ports = 
self.eswitch.get_vnics_mac()
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 63, in get_vnics_mac
  2014-09-30 13:28:56.608 529 TRACE neutron return 
set(self.utils.get_attached_vnics().keys())
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/mlnx/agent/utils.py", line 83, in 
get_attached_vnics
  2014-09-30 13:28:56.608 529 TRACE neutron vnics = self.send_msg(msg)
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/mlnx/common/comm_utils.py", line 59, in 
decorated
  2014-09-30 13:28:56.608 529 TRACE neutron 
RetryDecorator.sleep_fn(sleep_interval)
  2014-09-30 13:28:56.608 529 TRACE neutron TypeError: unbound method sleep() 
must be called with RetryDecorator instance as first argument (got int instance 
instead)
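
  A small illustration of the failure mode on Python 2 (names mirror the
  traceback but the code is illustrative only): calling a plain method through
  the class requires an instance, while a classmethod or staticmethod does
  not:

      class RetryDecorator(object):
          def sleep(self, interval):          # plain method
              pass

          @classmethod
          def sleep_fn(cls, interval):        # callable via the class
              pass

      # RetryDecorator.sleep(3) on Python 2 raises:
      #   TypeError: unbound method sleep() must be called with
      #   RetryDecorator instance as first argument (got int instance instead)
      RetryDecorator.sleep_fn(3)              # works without an instance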

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1375698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367034] Re: NSX: prevents creating multiple networks same vlan but different physical network

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367034

Title:
  NSX: prevents creating multiple networks same vlan but different
  physical network

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  NSX: prevents creating multiple networks same vlan but different
  physical network

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372438] Re: Race condition in l2pop drops tunnels

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372438

Title:
  Race condition in l2pop drops tunnels

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  The issue was originally raised by a Red Hat performance engineer (Joe
  Talerico)  here: https://bugzilla.redhat.com/show_bug.cgi?id=1136969
  (see starting from comment 4).

  Joe created a Fedora instance in his OS cloud based on RHEL7-OSP5
  (Icehouse), where he installed Rally client to run benchmarks against
  that cloud itself. He assigned a floating IP to that instance to be
  able to access API endpoints from inside the Rally machine. Then he
  ran a scenario which basically started up 100+ new instances in
  parallel, tried to access each of them via ssh, and once it succeeded,
  clean up each created instance (with its ports). Once in a while, his
  Rally instance lost connection to outside world. This was because
  VXLAN tunnel to the compute node hosting the Rally machine was dropped
  on networker node where DHCP, L3, Metadata agents were running. Once
  we restarted OVS agent, the tunnel was recreated properly.

  The scenario failed only if L2POP mechanism was enabled.

  I've looked thru the OVS agent logs and found out that the tunnel was
  dropped due to a legitimate fdb entry removal request coming from
  neutron-server side. So the fault is probably on neutron-server side,
  in l2pop mechanism driver.

  I've then looked thru the patches in Juno to see whether there is
  something related to the issue already merged, and found the patch
  that gets rid of _precommit step when cleaning up fdb entries. Once
  we've applied the patch on the neutron-server node, we stopped to
  experience those connectivity failures.

  After discussion with Vivekanandan Narasimhan, we came up with the
  following race condition that may result in tunnels being dropped
  while legitimate resources are still using them:

  (quoting Vivek below)

  '''
  - port1 delete request comes in;
  - port1 delete request acquires the lock
  - port2 create/update request comes in;
  - port2 create/update waits due to unavailability of the lock
  - precommit phase for port1 determines that the port is the last one, so we
    should drop the FLOODING_ENTRY;
  - port1 delete applied to db;
  - port1 transaction releases the lock
  - port2 create/update acquires the lock
  - precommit phase for port2 determines that the port is the first one, so
    request FLOODING_ENTRY + MAC-specific flow creation;
  - port2 create/update request applied to db;
  - port2 transaction releases the lock

  Now at this point the postcommit of either of them could happen, because
  these code pieces operate outside the locked zone.

  If it happens this way, the tunnel would be retained:

  - postcommit phase for port1 requests FLOODING_ENTRY deletion due to port1
    deletion
  - postcommit phase requests FLOODING_ENTRY + MAC-specific flow creation for
    port2;

  If it happens the way below, the tunnel would break:
  - postcommit phase for create port2 requests FLOODING_ENTRY + MAC-specific
    flow creation
  - postcommit phase for delete port1 requests FLOODING_ENTRY deletion
  '''

  We considered the patch to get rid of precommit for backport to
  Icehouse [1] that seems to eliminate the race, but we're concerned
  that we reverted that to previous behaviour in Juno as part of DVR
  work [2], though we haven't done any testing to check whether the
  issue is present in Juno (though brief analysis of the code shows that
  it should fail there too).

  Ideally, the fix for Juno should be easily backportable because the
  issue is currently present in Icehouse, and we would like to have the
  same fix for both branches (Icehouse and Juno) instead of backporting
  patch [1] to Icehouse and implementing another patch for Juno.

  [1]: https://review.openstack.org/#/c/95165/
  [2]: https://review.openstack.org/#/c/102398/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377241] Re: Lock wait timeout on delete port for DVR

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377241

Title:
  Lock wait timeout on delete port for DVR

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  We run a script to configure networks, VMs and routers and to assign a
  floating IP to the VM.
  After everything is created, we run a script to clean up all ports,
  networks, routers, gateways and floating IPs.

  The issue is seen when there are back-to-back calls to router-
  interface-delete and router-gateway-clear.

  There are three calls to router-interface-delete and a fourth call
  to router-gateway-clear.

  At this time a DB lock is obtained for the port delete, and when the
  other delete comes in, it times out.

  
  2014-10-03 09:28:39.587 DEBUG neutron.openstack.common.lockutils 
[req-a89ee05c-d8b2-438a-a707-699f450d3c41 admin 
d3bb4e1791814b809672385bc8252688] Got semaphore "db-access" from (pid=25888) 
lock /opt/stack/neutron/neutron/openstack/common/lockutils.py:168
  2014-10-03 09:29:30.777 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54899)
  2014-10-03 09:29:30.778 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54900)
  2014-10-03 09:29:30.778 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54901)
  2014-10-03 09:29:30.778 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54902)
  2014-10-03 09:29:30.780 ERROR neutron.api.v2.resource 
[req-a89ee05c-d8b2-438a-a707-699f450d3c41 admin 
d3bb4e1791814b809672385bc8252688] remove_router_interface failed
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 200, in _handle_action
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 247, in 
remove_router_interface
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource context.elevated(), 
router, subnet_id=subnet_id)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 557, in 
delete_csnat_router_interface_ports
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource l3_port_check=False)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 983, in delete_port
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource port_db, binding = 
db.get_locked_port_and_binding(session, id)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/db.py", line 135, in 
get_locked_port_and_binding
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource 
with_lockmode('update').
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2310, in one
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource ret = list(self)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2353, in 
__iter__
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2368, in 
_execute_and_instances
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in 
execute
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource params)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_context
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource context)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py",
 line 125, in _handle_dbapi_exception
  2014-10-03 09:29:30.780 TRACE neutron.a

[Yahoo-eng-team] [Bug 1369239] Re: OpenDaylight MD should not ignore 400 errors

2014-12-04 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369239

Title:
  OpenDaylight MD should not ignore 400 errors

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  400 (Bad Request) errors are ignored in every create or update
  operation to OpenDaylight. Referring to the comment, it protects
  against conflicts with already existing resources.

  In case of update operations, it seems irrelevant and masks "real" bad
  requests. It could also be removed in create operations.
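
  A hedged sketch (not the actual driver code) of the proposed behaviour:
  tolerate 400 only where it can mean "already exists", i.e. on create, and
  let updates surface real bad requests:

      import json
      import requests

      def odl_send(method, url, payload, auth):
          resp = requests.request(
              method, url, data=json.dumps(payload), auth=auth,
              headers={'Content-Type': 'application/json'})
          if resp.status_code == 400 and method.lower() == 'post':
              # Creation may legitimately conflict with an existing resource.
              return resp
          resp.raise_for_status()
          return resp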

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils

2014-12-04 Thread Ian Cordasco
This was entirely fixed by Zhongyue Luo. All references to the
attributes and functions to be removed from timeutils exist solely in
the timeutils module in Glance.

** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in Gantt:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  Fix Released
Status in Oslo utility library:
  In Progress
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Nova:
  Fix Released
Status in Tuskar:
  Fix Released
Status in OpenStack Messaging and Notifications Service (Zaqar):
  Fix Released

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unittests.

  However we now use mock or fixture to mock our objects so
  set_time_override has become obsolete.

  We should first remove all usage of set_time_override from downstream
  projects before deleting it from oslo.

  List of attributes and functions to be removed from timeutils:
  * override_time
  * set_time_override()
  * clear_time_override()
  * advance_time_delta()
  * advance_time_seconds()
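
  A hedged sketch of the replacement pattern in tests, assuming the oslo-era
  timeutils module layout; the clock is frozen with mock instead of
  set_time_override/clear_time_override:

      import datetime
      import mock
      from oslo.utils import timeutils   # assumption: oslo.utils layout

      FROZEN = datetime.datetime(2014, 12, 4, 12, 0, 0)

      with mock.patch.object(timeutils, 'utcnow', return_value=FROZEN):
          assert timeutils.utcnow() == FROZEN
      # Outside the context manager the real clock is back automatically.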

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385484] Re: Failed to start nova-compute after evacuate

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385484

Title:
  Failed to start nova-compute after evacuate

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  After an instance has been evacuated successfully and the failed host is
  restarted to bring it back, the user will run into the error below.


  <179>Sep 23 01:48:35 node-1 nova-compute 2014-09-23 01:48:35.346 13206 ERROR 
nova.openstack.common.threadgroup [-] error removing image
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
117, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
49, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 483, 
in run_service
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 163, in start
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1018, in 
init_host
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._destroy_evacuated_instances(context)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 712, in 
_destroy_evacuated_instances
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
bdi, destroy_disks)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 962, in 
destroy
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
destroy_disks, migrate_data)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1080, in 
cleanup
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._cleanup_rbd(instance)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1090, in 
_cleanup_rbd
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
LibvirtDriver._get_rbd_driver().cleanup_volumes(instance)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py", line 238, in 
cleanup_volumes
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.rbd.RBD().remove(client.ioctx, volume)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/rbd.py", line 300, in remove
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
raise make_ex(ret, 'error re

[Yahoo-eng-team] [Bug 1394551] Re: Legacy GroupAffinity and GroupAntiAffinity filters are broken

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394551

Title:
  Legacy GroupAffinity and GroupAntiAffinity filters are broken

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  Both GroupAffinity and GroupAntiAffinity filters are broken. The
  scheduler does not respect the filters and schedules the servers
  against the policy.

  Reproduction steps:
  0) Spin up a single node devstack 
  1) Add GroupAntiAffinityFilter to  scheduler_default_filters in nova.conf and 
restart the nova services
  2) Boot multiple servers with the following command:
  nova boot --image cirros-0.3.2-x86_64-uec --flavor 42 --hint group=foo 
server-1

  Expected behaviour:
  The second and any further boot should fail with a NoValidHostFound
  exception, as the anti-affinity policy cannot be fulfilled.

  Actual behaviour:
  Any number of servers are booted to the same compute node

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394052] Re: Fix exception handling in _get_host_metrics()

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394052

Title:
  Fix exception handling in _get_host_metrics()

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  In resource_tracker.py, the exception path of _get_host_metrics()
  contains a wrong variable name.

      for monitor in self.monitors:
          try:
              metrics += monitor.get_metrics(nodename=nodename)
          except Exception:
              LOG.warn(_("Cannot get the metrics from %s."), monitors)
              # <-- Need to change 'monitors' to 'monitor'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380792] Re: requests to EC2 metadata's '/2009-04-04/meta-data/security-groups' failing

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380792

Title:
  requests to EC2 metadata's '/2009-04-04/meta-data/security-groups'
  failing

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in nova package in Ubuntu:
  Confirmed

Bug description:
  Just did a distro upgrade to Juno RC2. Running an old nova-network
  cloud with multi-host, nova-api running on the compute host. Noticed that
  Ubuntu instances' cloud-init is failing:

  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 177, in 
retry_url
  resp = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
  return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 406, in open
  response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
  'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 444, in error
  return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
  raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)

  Looking at nova-api.log on compute, webob is throwing an exception:

  2014-10-13 13:47:37.468 9183 INFO nova.metadata.wsgi.server 
[req-e133f95b-5f99-41e5-89dc-8e35b41f7cd6 None] 10.0.0.6 "GET 
/2009-04-04/meta-data/security-groups HTTP/1.1" status: 400 len: 265 time: 
0.2675409
  2014-10-13 13:48:41.947 9182 ERROR nova.api.ec2 
[req-47b84883-a48c-4004-914b-c983895a33be None] FaultWrapper: You cannot set 
Response.body to a text object (use Response.text)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 Traceback (most recent call 
last):
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py", line 87, in 
__call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 return 
req.get_response(self.application)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 application, 
catch_exc_info=False)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 resp = 
self.call_func(req, *args, **self.kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 return self.func(req, 
*args, **kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py", line 99, in 
__call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 rv = 
req.get_response(self.application)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 application, 
catch_exc_info=False)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 resp = 
self.call_func(req, *args, **self.kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 return self.func(req, 
*args, **kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py", line 136, in 
__call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 req.response.body = resp
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/response.py", line 373, in _body__set
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 raise TypeError(msg)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 Type

[Yahoo-eng-team] [Bug 1382318] Re: NoValidHost failure when trying to spawn instance with unicode name

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382318

Title:
  NoValidHost failure when trying to spawn instance with unicode name

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  Using the libvirt driver on Juno RC2 code, trying to create an
  instance with unicode name:

  
"\uff21\uff22\uff23\u4e00\u4e01\u4e03\u00c7\u00e0\u00e2\uff71\uff72\uff73\u0414\u0444\u044d\u0628\u062a\u062b\u0905\u0907\u0909\u20ac\u00a5\u5642\u30bd\u5341\u8c79\u7af9\u6577"

  Blows up:

  http://paste.openstack.org/show/121560/

  The libvirt config code shouldn't be casting values to str(), it
  should be using six.text_type.
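
  A minimal sketch of the failure mode (illustrative only, not the Nova
  code): on Python 2, str() tries to encode a unicode value as ASCII and
  raises, while six.text_type keeps it as text on both Python 2 and 3.

      # -*- coding: utf-8 -*-
      import six

      name = u'\uff21\uff22\uff23\u4e00\u4e01'  # unicode instance name

      try:
          str(name)                 # UnicodeEncodeError on Python 2
      except UnicodeEncodeError:
          pass

      safe_name = six.text_type(name)  # fine on Python 2 and 3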

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376933] Re: _poll_unconfirmed_resize timing window causes instance to stay in verify_resize state forever

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376933

Title:
  _poll_unconfirmed_resize timing window causes instance to stay in
  verify_resize state forever

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  If the _poll_unconfirmed_resizes periodic task runs in
  nova/compute/manager.py:ComputeManager._finish_resize() after the
  migration record has been updated in the database but before the
  instances has been updated.

  2014-09-30 16:15:00.897 112868 INFO nova.compute.manager [-] Automatically 
confirming migration 207 for instance 799f9246-bc05-4ae8-8737-4f358240f586
  2014-09-30 16:15:01.109 112868 WARNING nova.compute.manager [-] [instance: 
799f9246-bc05-4ae8-8737-4f358240f586] Setting migration 207 to error: In states 
stopped/resize_finish, not RESIZED/None

  This causes _poll_unconfirmed_resizes to see that the VM task_state is
  still 'resize_finish' instead of None, and set the migration record to
  error state. Which in turn causes the VM to be stuck in resizing
  forever.

  Two fixes have been proposed for this issue so far but were reverted
  because they caused other race conditions. See the following two bugs
  for more details.

  https://bugs.launchpad.net/nova/+bug/1321298
  https://bugs.launchpad.net/nova/+bug/1326778

  This timing issue still exists in Juno today in an environment with
  periodic tasks set to run once every 60 seconds and with a
  resize_confirm_window of 1 second.

  Would a possible solution for this be to change the code in
  _poll_unconfirmed_resizes() to ignore any VMs with a task state of
  'resize_finish' instead of setting the corresponding migration record
  to error? This is the task_state it should have right before changed
  to None in finish_resize(). Then next time _poll_unconfirmed_resizes()
  is called, the migration record will still be fetched and the VM will
  be checked again and in the updated vm_state/task_state.

  add the following in _poll_unconfirmed_resizes():

      # This removes a race condition
      if task_state == 'resize_finish':
          continue

  prior to:

      elif vm_state != vm_states.RESIZED or task_state is not None:
          reason = (_("In states %(vm_state)s/%(task_state)s, not "
                      "RESIZED/None") %
                    {'vm_state': vm_state,
                     'task_state': task_state})
          _set_migration_to_error(migration, reason,
                                  instance=instance)
          continue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380624] Re: VMware: booting from a volume does not configure config driver if necessary

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380624

Title:
  VMware: booting from a volume does not configure config driver if
  necessary

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  When booting from a volume the config drive will not be mounted (if
  configured)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375467] Re: db deadlock on _instance_update()

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375467

Title:
  db deadlock on _instance_update()

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  continuing from the same pattern as that of
  https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
  unhandled deadlocks on derivatives of _instance_update(), such as the
  stacktrace below.  As _instance_update() is a point of transaction
  demarcation based on its use of get_session(), the @_retry_on_deadlock
  should be added to this method.
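
  A self-contained sketch of the retry-on-deadlock pattern being asked for
  (the real decorator is the existing _retry_on_deadlock in
  nova/db/sqlalchemy/api.py; the names and retry policy below are purely
  illustrative):

      import functools
      import time

      class DBDeadlock(Exception):
          """Stand-in for the oslo db deadlock exception."""

      def retry_on_deadlock(func):
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              retries = 5
              for attempt in range(retries):
                  try:
                      return func(*args, **kwargs)
                  except DBDeadlock:
                      if attempt == retries - 1:
                          raise
                      time.sleep(0.5)
          return wrapper

      @retry_on_deadlock
      def _instance_update(context, instance_uuid, values):
          # open a session and apply the update; the call is retried
          # if the database reports a deadlock
          pass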

  Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply\
  incoming.message))\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch\
  return self._do_dispatch(endpoint, method, ctxt, args)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch\
  result = getattr(endpoint, method)(ctxt, **new_args)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 887, 
in instance_update\
  service)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 
139, in inner\
  return func(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 130, 
in instance_update\
  context, instance_uuid, updates)\
  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 742, in 
instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 164, 
in wrapper\
  return f(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2208, 
in instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2299, 
in _instance_update\
  session.add(instance_ref)\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
447, in __exit__\
  self.rollback()\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", 
line 58, in __exit__\
  compat.reraise(exc_type, exc_value, exc_tb)\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
444, in __exit__\
  self.commit()\
  File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 443, in _wrap\
  _raise_if_deadlock_error(e, self.bind.dialect.name)\
  File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 427, in _raise_if_deadlock_error\
  raise exception.DBDeadlock(operational_error)\
  DBDeadlock: (OperationalError) (1213, \'Deadlock found when trying to get 
lock; try restarting transaction\') None None\

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340411] Re: Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral RBD

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340411

Title:
  Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral
  RBD

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  Greetings,

  
  We can't seem to evacuate instances from a failed compute node using
shared storage. We are using Ceph Ephemeral RBD as the storage medium.

  
  Steps to reproduce:

  nova evacuate --on-shared-storage 6e2081ec-2723-43c7-a730-488bb863674c node-24
  or
  POST  to http://ip-address:port/v2/tenant_id/servers/server_id/action with 
  {"evacuate":{"host":"node-24","onSharedStorage":1}}

  
  Here is what shows up in the logs:

  
  <180>Jul 10 20:36:48 node-24 nova-nova.compute.manager AUDIT: Rebuilding 
instance
  <179>Jul 10 20:36:48 node-24 nova-nova.compute.manager ERROR: Setting 
instance vm_state to ERROR
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5554, 
in _error_out_instance_on_exception
  yield
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
  _("Invalid state of instance files on shared"
  InvalidSharedStorage: Invalid state of instance files on shared storage
  <179>Jul 10 20:36:49 node-24 nova-oslo.messaging.rpc.dispatcher ERROR: 
Exception during message handling: Invalid state of instance files on shared 
storage
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply
  incoming.message))
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  return func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in 
wrapped
  payload)
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in 
wrapped
  return f(self, context, *args, **kw)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 274, 
in decorated_function
  pass
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 260, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, 
in decorated_function
  function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, 
in decorated_function
  e, sys.exc_info())
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
  _("Invalid state of instance files on shared"
  InvalidSharedStorage: Invalid state of instance files on shared storage

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370613] Re: InvalidHypervisorVirtType: Hypervisor virtualization type 'powervm' is not recognised

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370613

Title:
  InvalidHypervisorVirtType: Hypervisor virtualization type 'powervm' is
  not recognised

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in IBM PowerVC Driver for OpenStack:
  In Progress

Bug description:
  With these changes we have a list of known hypervisor types for
  scheduling:

  https://review.openstack.org/#/c/109591/
  https://review.openstack.org/#/c/109592/

  There is a powervc driver in stackforge (basically the replacement for
  the old powervm driver) which has a hypervisor type of 'powervm' and
  trying to boot anything against that fails in scheduling since the
  type is unknown.

  http://git.openstack.org/cgit/stackforge/powervc-driver/

  Seems like adding powervm to the list shouldn't be an issue given
  other things in that list like bhyve and phyp.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357368] Re: Source side post Live Migration Logic cannot disconnect multipath iSCSI devices cleanly

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357368

Title:
  Source side post Live Migration Logic cannot disconnect multipath
  iSCSI devices cleanly

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  When a volume is attached to a VM in the source compute node through
  multipath, the related files in /dev/disk/by-path/ are like this

  stack@ubuntu-server12:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.50:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.a5-lun-24
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-24

  The information on its corresponding multipath device is like this
  stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:24 sdl 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:24 sdj 8:144 active undef running

  But when the VM is migrated to the destination, the related
  information is like the following example since we CANNOT guarantee
  that all nodes are able to access the same iSCSI portals and the same
  target LUN number. And the information is used to overwrite
  connection_info in the DB before the post live migration logic is
  executed.

  stack@ubuntu-server13:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b5-lun-100
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-100

  stack@ubuntu-server13:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:100 sdf 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:100 sdg 8:144 active undef running

  As a result, if the post live migration logic on the source side uses the
portal IP, target IQN and LUN number to find the devices to clean up, it may
use 192.168.3.51, iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 100.
  However, the correct ones should be 192.168.3.50,
iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 24.

  Similar philosophy in (https://bugs.launchpad.net/nova/+bug/1327497)
  can be used to fix it: Leverage the unchanged multipath_id to find
  correct devices to delete.
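
  A rough sketch of that idea (illustrative only, not the Nova
  implementation): resolve the devices to clean up from the stable
  multipath WWID rather than rebuilding by-path names from portal, IQN and
  LUN.

      import os

      def devices_for_multipath_id(multipath_id):
          # /dev/mapper/<wwid> is a symlink to the dm-N holder device
          holder = os.path.realpath('/dev/mapper/%s' % multipath_id)
          dm_name = os.path.basename(holder)            # e.g. 'dm-3'
          slaves_dir = '/sys/block/%s/slaves' % dm_name
          slaves = os.listdir(slaves_dir) if os.path.isdir(slaves_dir) else []
          # e.g. ('/dev/dm-3', ['/dev/sdj', '/dev/sdl'])
          return holder, ['/dev/%s' % s for s in slaves]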

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279172] Re: Unicode encoding error exists in extended Nova API, when the data contain unicode

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279172

Title:
  Unicode encoding error exists in extended Nova API, when the data
  contain unicode

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  We have developed an extended Nova API; the API first queries disks, then
adds a disk to an instance.
  After querying, if a disk has a non-English name, the unicode value is
converted to str in nova/api/openstack/wsgi.py line 451
  "node = doc.createTextNode(str(data))", and a unicode encoding error occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372049] Re: Launching multiple VMs fails over 63 instances

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372049

Title:
  Launching multiple VMs fails over 63 instances

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in Messaging API for OpenStack:
  Won't Fix

Bug description:
  RHEL-7.0
  Icehouse
  All-In-One

  Booting 63 VMs at once (with "num-instances" attribute) works fine.
  Setup is able to support up to 100 VMs booted in ~50 bulks.

  Booting 100 VMs at once, without Neutron network, so no network for
  the VMs, works fine.

  Booting 64 (and more) VMs boots only 63 VMs. Any of the VMs over 63 are
booted in ERROR state with details: VirtualInterfaceCreateException: Virtual
Interface creation failed
  Failed VM's port is in DOWN state

  Details:
  After the initial boot commands goes through, all CPU usage goes down (no 
neutron/nova CPU consumption) untll nova's vif_plugging_timeout is reached. at 
which point 1 (= #num_instances - 63) VM is set to ERROR, and the rest of the 
VMs reach active state.

  Guess: seems like neutron is going into some deadlock until some of
  the load is reduced by vif_plugging_timeout


  disabling neutron-nova port notifications allows all VMs to be
  created.
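
  For reference, the settings involved are (option names as in Juno; the
  values shown are illustrative):

      # nova.conf on the computes
      [DEFAULT]
      vif_plugging_is_fatal = True
      vif_plugging_timeout = 300

      # neutron.conf
      [DEFAULT]
      notify_nova_on_port_status_changes = True
      notify_nova_on_port_data_changes = True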

  Notes: this is recreated also with multiple Compute nodes, and also
  multiple neutron RPC/API workers

  
  Recreate:
  set nova/neutron quotas to "-1"
  make sure neutron-nova port notifications are ON in both the neutron and nova
conf files
  create a network in your tenant

  boot more than 64 VMs

  nova boot --flavor 42 test_VM --image cirros --num-instances 64


  [yfried@yfried-mobl-rh ~(keystone_demo)]$ nova list
  
+--+--+++-+-+
  | ID   | Name 
| Status | Task State | Power State | Networks|
  
+--+--+++-+-+
  | 02d7b680-efd8-4291-8d56-78b43c9451cb | 
test_VM-02d7b680-efd8-4291-8d56-78b43c9451cb | ACTIVE | -  | Running
 | demo_private=10.0.0.156 |
  | 05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | 
test_VM-05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | ACTIVE | -  | Running
 | demo_private=10.0.0.150 |
  | 09131f19-5e83-4a40-a900-ffca24a8c775 | 
test_VM-09131f19-5e83-4a40-a900-ffca24a8c775 | ACTIVE | -  | Running
 | demo_private=10.0.0.160 |
  | 0d3be93b-73d3-4995-913c-03a4b80ad37e | 
test_VM-0d3be93b-73d3-4995-913c-03a4b80ad37e | ACTIVE | -  | Running
 | demo_private=10.0.0.164 |
  | 0fcadae4-768c-44a1-9e1c-ac371d1803f9 | 
test_VM-0fcadae4-768c-44a1-9e1c-ac371d1803f9 | ACTIVE | -  | Running
 | demo_private=10.0.0.202 |
  | 11a87db1-5b15-4cad-a749-5d53e2fd8194 | 
test_VM-11a87db1-5b15-4cad-a749-5d53e2fd8194 | ACTIVE | -  | Running
 | demo_private=10.0.0.201 |
  | 147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | 
test_VM-147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | ACTIVE | -  | Running
 | demo_private=10.0.0.147 |
  | 1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | 
test_VM-1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | ACTIVE | -  | Running
 | demo_private=10.0.0.187 |
  | 1d0b7210-f5a0-4827-b338-2014e8f21341 | 
test_VM-1d0b7210-f5a0-4827-b338-2014e8f21341 | ACTIVE | -  | Running
 | demo_private=10.0.0.165 |
  | 1df564f6-5aac-4ac8-8361-bd44c305332b | 
test_VM-1df564f6-5aac-4ac8-8361-bd44c305332b | ACTIVE | -  | Running
 | demo_private=10.0.0.145 |
  | 2031945f-6305-4cdc-939f-5f02171f82b2 | 
test_VM-2031945f-6305-4cdc-939f-5f02171f82b2 | ACTIVE | -  | Running
 | demo_private=10.0.0.149 |
  | 256ff0ed-0e56-47e3-8b69-68006d658ad6 | 
test_VM-256ff0ed-0e56-47e3-8b69-68006d658ad6 | ACTIVE | -  | Running
 | demo_private=10.0.0.177 |
  | 2b7256a8-c04a-42cf-9c19-5836b585c0f5 | 
test_VM-2b7256a8-c04a-42cf-9c19-5836b585c0f5 | ACTIVE | -  | Running
 | demo_private=10.0.0.180 |
  | 2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | 
test_VM-2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | ACTIVE | -  | Running
 | demo_private=10.0.0.191 |
  | 425c170f-a450-440d-b9ba-0408d7c69b25 | 
test_VM-425c170f-a450-440d-b9ba-0408d7c69b25 | ACTIVE | -  | Running
 | demo_private=10.0.0.169 |
  | 461fcce3-96ae-4462-ab65-fb63f3552703 | 
test_VM-461fcce3-96ae-4462-ab65-fb63f3552703 | ACTIVE | -  | Running
 | demo_private=10.0.0.179 |
  | 46a9965d-6511-44a3-ab71-a87767cda759 | 

[Yahoo-eng-team] [Bug 1249319] Re: evacuate on ceph backed volume fails

2014-12-04 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249319

Title:
  evacuate on ceph backed volume fails

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  When using nova evacuate to move an instance from one compute host to
  another, the command silently fails. The issue seems to be that the
  rebuild process builds an incorrect libvirt.xml file that no longer
  correctly references the ceph volume.

  Specifically under the  section I see:

  

  where in the original libvirt.xml the file was:
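
  (Both disk elements were stripped from the report; for context, an
  rbd-backed disk source versus a plain file-backed one looks roughly like
  the following -- values are illustrative, not the reporter's actual XML.)

      <disk type='network' device='disk'>
        <source protocol='rbd' name='volumes/volume-UUID'/>
      </disk>

      <disk type='file' device='disk'>
        <source file='/var/lib/nova/instances/UUID/disk'/>
      </disk>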

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399498] [NEW] centos 7 unit test fails

2014-12-04 Thread Yukihiro KAWADA
Public bug reported:

centos 7 unit test fails.

to pass this test:
export OPENSSL_ENABLE_MD5_VERIFY=1
export NSS_HASH_ALG_SUPPORT=+MD5 


# ./run_tests.sh -V -s nova.tests.unit.test_crypto.X509Test
Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit 
--concurrency 0  nova.tests.unit.test_crypto.X509Test'`
nova.tests.unit.test_crypto.X509Test
test_encrypt_decrypt_x509 OK  2.73
test_can_generate_x509FAIL

Slowest 2 tests took 6.24 secs:
nova.tests.unit.test_crypto.X509Test
test_can_generate_x5093.51
test_encrypt_decrypt_x509 2.73

==
FAIL: nova.tests.unit.test_crypto.X509Test.test_can_generate_x509
--

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399498

Title:
  centos 7 unit test fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  centos 7 unit test fails.

  to pass this test:
  export OPENSSL_ENABLE_MD5_VERIFY=1
  export NSS_HASH_ALG_SUPPORT=+MD5 

  
  # ./run_tests.sh -V -s nova.tests.unit.test_crypto.X509Test
  Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit 
--concurrency 0  nova.tests.unit.test_crypto.X509Test'`
  nova.tests.unit.test_crypto.X509Test
  test_encrypt_decrypt_x509 OK  2.73
  test_can_generate_x509FAIL

  Slowest 2 tests took 6.24 secs:
  nova.tests.unit.test_crypto.X509Test
  test_can_generate_x5093.51
  test_encrypt_decrypt_x509 2.73

  ==
  FAIL: nova.tests.unit.test_crypto.X509Test.test_can_generate_x509
  --

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399523] [NEW] PEP8 max-complexity should be reduced from 33

2014-12-04 Thread Akihiro Motoki
Public bug reported:

pep8 max-complexity 33 means we have far too large methods.
It should be reduced to less than 20 so that we have healthily sized methods.
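
For reference, the limit is enforced by flake8's mccabe plugin and typically
lives in the [flake8] section of tox.ini or setup.cfg; an illustrative target:

    [flake8]
    max-complexity = 20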

** Affects: horizon
 Importance: Low
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399523

Title:
  PEP8 max-complexity should be reduced from 33

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  pep8 max-complexity 33 means we have far too large methods.
  It should be reduced to less than 20 so that we have healthily sized methods.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399525] [NEW] Juno:port update with no security-group makes tenant VM's not accessible.

2014-12-04 Thread Thalabathy
Public bug reported:

Setup:
+
Ubuntu 14.04

Steps to reproduce:

1. create working juno setup (single node dev-stack) on ubuntu (14.04 server).
2. create custom security-group "test" with icmp ingress allowed.
3. create network with subnet to spawn tenant VM.
4. spawn a tenant vm with the created security-group and network.
5. Ensure the VM is able to ping from the dhcp namespace.
6. Create floatingip and associate it to the VM port.
7. Try to ping the VM from the public network (i.e. floating subnet) <== VM is
able to ping since ufw is disabled and the icmp rule is associated to the port.
8. Update the VM port with no security-groups and then try to ping the VM's
floatingip.
9. VM ip is not pinging, but it should ping because the VM port is unplugged
from the ovs-firewall driver and falls under the system iptables.

expected: it should ping because ufw is disabled on the compute node.
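
The port update in step 8 can be done with the standard client, e.g. (the
port id is the one listed under Reference below):

    neutron port-update bd89a24b-eeaf-41f6-a97b-54d65263052d --no-security-groups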

Reference:
+
port_id:bd89a24b-eeaf-41f6-a97b-54d65263052d
VM_id:392b62a1-dd75-4d23-9296-978ef4630caf
Sec_group:d6c08ecf-eb66-410d-a763-75f9a707fd89

IP-TABLE:
+++

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399525

Title:
  Juno:port update with no security-group makes tenant VM's not
  accessible.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Setup:
  +
  Ubuntu 14.04

  Steps to reproduce:

  1. create working juno setup (single node dev-stack) on ubuntu (14.04 server).
  2. create custom security-group "test" with icmp ingress allowed.
  3. create network with subnet to spawn tenant VM.
  4. spawn a tenant vm with the created security-group and network.
  5. Ensure the VM is able to ping from the dhcp namespace.
  6. Create floatingip and associate it to the VM port.
  7. Try to ping the VM from the public network (i.e. floating subnet) <== VM is
able to ping since ufw is disabled and the icmp rule is associated to the port.
  8. Update the VM port with no security-groups and then try to ping the VM's
floatingip.
  9. VM ip is not pinging, but it should ping because the VM port is unplugged
from the ovs-firewall driver and falls under the system iptables.

  expected: it should ping because ufw is disabled on the compute node.

  Reference:
  +
  port_id:bd89a24b-eeaf-41f6-a97b-54d65263052d
  VM_id:392b62a1-dd75-4d23-9296-978ef4630caf
  Sec_group:d6c08ecf-eb66-410d-a763-75f9a707fd89

  IP-TABLE:
  +++

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399527] Re: Juno:nova error message not matching to relevant to the issue.

2014-12-04 Thread Thalabathy
root@THALA-DEVSTACK:~# nova show 14dfbee2-e702-45b3-8029-5064e8a20683
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | THALA-DEVSTACK |
| OS-EXT-SRV-ATTR:hypervisor_hostname | THALA-DEVSTACK |
| OS-EXT-SRV-ATTR:instance_name | instance-00da |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-12-05T04:53:00Z |
| fault | {"message": "No valid host was found. ", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py\", line 108, in schedule_run_instance    raise exception.NoValidHost(reason=\"\") ", "created": "2014-12-05T04:53:01Z"} |
| flavor | m1.tiny (1) |
| hostId | 7ea1d6ec54fa662ed8b55287dd0acd04dd925292f4895c2443357af9 |

[Yahoo-eng-team] [Bug 1399527] [NEW] Juno:nova error message not matching to relevant to the issue.

2014-12-04 Thread Thalabathy
Public bug reported:

Setup:
+
Ubuntu 14.04

Steps to reproduce:

1. create working juno latest setup (single node dev-stack) on ubuntu (14.04
server).
2. create network with subnet pool defined to release only 2 IPs.
3. spawn the first VM and check that the VM got an IP.
4. spawn a 2nd VM; it's expected to go to ERROR.
5. check the ERROR message: it does not match the issue; the ERROR message
should be related to VIF creation and not to scheduling.
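
A subnet that hands out only two addresses can be created like this
(illustrative names and addresses, assuming the standard neutron CLI):

    neutron net-create demo-net
    neutron subnet-create demo-net 10.0.0.0/24 \
        --allocation-pool start=10.0.0.10,end=10.0.0.11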

sample-ERROR:
+

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399527

Title:
  Juno:nova error message not matching to relevant to the issue.

Status in OpenStack Compute (Nova):
  New

Bug description:
  Setup:
  +
  Ubuntu 14.04

  Steps to reproduce:

  1. create working juno latest setup (single node dev-stack) on ubuntu (14.04
server).
  2. create network with subnet pool defined to release only 2 IPs.
  3. spawn the first VM and check that the VM got an IP.
  4. spawn a 2nd VM; it's expected to go to ERROR.
  5. check the ERROR message: it does not match the issue; the ERROR message
should be related to VIF creation and not to scheduling.

  sample-ERROR:
  +

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377012] Re: Can't delete an image in deleted status

2014-12-04 Thread nikhil komawar
Unfortunately, we cannot re-delete an image via the API by design.

Another approach for this real issue might be to use the scrubber to
keep track of such leftover data and retry deletes. Please provide your
suggestions here: https://review.openstack.org/#/c/125156/

** Changed in: glance
   Status: Triaged => Won't Fix

** Changed in: glance
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1377012

Title:
  Can't delete an image in deleted status

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  I'm trying to delete an image that has a status of "deleted"

  It's not deleted: I can still do an image-show and it returns, I can see
  it in image_locations, and it still exists in the backend, which for us is
  swift.

  glance image-show 17c6077c-99f0-41c7-9bd2-175216330990
  
+---+--+
  | Property  | Value   
 |
  
+---+--+
  | checksum  | 
c9ef771d317595fd3654ca69a4be5f31 |
  | container_format  | bare
 |
  | created_at| 2014-05-22T07:58:23 
 |
  | deleted   | True
 |
  | deleted_at| 2014-05-23T02:16:53 
 |
  | disk_format   | raw 
 |
  | id| 
17c6077c-99f0-41c7-9bd2-175216330990 |
  | is_public | True
 |
  | min_disk  | 10  
 |
  | min_ram   | 0   
 |
  | name  | XX|
  | owner | X |
  | protected | False   
 |
  | size  | 10737418240 
 |
  | status| deleted 
 |
  | updated_at| 2014-05-23T02:16:53 
 |
  
+---+--+

  glance image-delete 17c6077c-99f0-41c7-9bd2-175216330990
  Request returned failure status.
  404 Not Found
  Image 17c6077c-99f0-41c7-9bd2-175216330990 not found.
  (HTTP 404): Unable to delete image 17c6077c-99f0-41c7-9bd2-175216330990

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1377012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1156932] Re: User can't modify security-group-rule via nova-api if there are duplicated security group name

2014-12-04 Thread Rolf Leggewie
raring has seen the end of its life and is no longer receiving any
updates. Marking the raring task for this ticket as "Won't Fix".

** Changed in: python-novaclient (Ubuntu Raring)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1156932

Title:
  User can't modify security-group-rule via nova-api if there are
  duplicated security group name

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Python client library for Nova:
  Triaged
Status in python-novaclient package in Ubuntu:
  Triaged
Status in python-novaclient source package in Raring:
  Won't Fix
Status in python-novaclient source package in Saucy:
  Triaged

Bug description:
  User can't modify security-group-rule via nova-api if there are
  duplicated security group name.

  When quantum security group is enabled in nova,
  nova admin user can't modify security group rule via nova-api.

  nova secgroup-list shows two default security groups.
  Both of them have the same name "default", so the CLI asks the user to
specify the security group id.

  But there appears to be no way to find the security group id via nova-api.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1156932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1124384] Re: Configuration reload clears event that others jobs may be waiting on

2014-12-04 Thread Rolf Leggewie
raring has seen the end of its life and is no longer receiving any
updates. Marking the raring task for this ticket as "Won't Fix".

** Changed in: cloud-init (Ubuntu Raring)
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1124384

Title:
  Configuration reload clears event that others jobs may be waiting on

Status in Init scripts for use on cloud images:
  Confirmed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in upstart package in Ubuntu:
  Fix Released
Status in cloud-init source package in Raring:
  Won't Fix
Status in upstart source package in Raring:
  Won't Fix
Status in cloud-init source package in Saucy:
  Fix Released
Status in upstart source package in Saucy:
  Fix Released

Bug description:
  [Impact]

   * The status of blocked events was not preserved, when upstart
     performed stateful re-execution or configuration reload. Thus jobs
     with complex start/stop conditions (one or more "and" clauses),
     with at least one event emitted before re-exec/reload, may not
     execute when remaining conditions are finally satisfied.

   * The above may prevent certain systems from functioning correctly, and
     in cases similar to cloud-init instances it may even cause failure to
     boot.

   * The fix includes incrementing reference counts on blocked events,
     whilst job configuration is reloaded and fully serialising all
     upstart objects, including blocked events, during stateful
     re-execution.

   * Since previous versions of upstart do not serialise blocked events,
     the upgrade needs special casing. On upgrade upstart will perform
     stateful re-execution, unless runlevel 2 has already been reached;
     instead upstart will re-execute at system shutdown. This should
     allow upgrading upstart during early boot of cloud-init instances.
     But do note that the old instance of upstart will still be running
     as init and the running machine will still be affected by the bug
     described here.

  [Test Case]

   * Create a sample job /etc/init/foo.conf similar to this:

  start on (event1 and event2)
  task
  exec date

   * Test reload configuration works correctly:

  $ sudo status foo
  foo stop/waiting
  $ sudo initctl emit -n event1
  $ sudo initctl reload-configuration
  $ sudo initctl emit -n event2
  $ sudo tail /var/log/upstart/foo.log

  At the end one should see a timestamp appended in the foo.log.

   * Test stateful re-exec works correctly:

  $ sudo initctl emit -n event1
  $ sudo telinit u
  $ sudo initctl emit -n event2
  $ sudo tail /var/log/upstart/foo.log

   * Start an ubuntu-cloud image (in lxc or cloud) with apt-get update &
  upgrade enabled going from upstart version without this fix included
  to a one that does have it. Cloud-final should finish and boot-
  finished under /var/lib/cloud/instances/*/boot-finished. Please note
  this test should be performed in isolation from dbus security update
  that does partial stateful re-exec at the moment.

  [Regression Potential]

   * The bug fix introduced here is fairly large (approx 1.5k line diff)
     but comes with comprehensive set of test-suites to verify the two
     bug fixes as well as all possible combinations of stateful
     re-execution serialisation formats. Majority of code changes are
     for additional [de]serialisation, which follow existing well tested
     code pattern. And changes to reference counting have been carefully
     reviewed and tested by multiple developers.

   * While the bug report indicates a severe problem, it was not noticed
     until recently, as the system must be under heavy race conditions
     to become affected by this bug. Since systems reaching stable state,
     with little or no blocked events left, would not normally be
     affected.

   * Overall regression potential is deemed low.

  [Original Bug Report]

  Under bug 1080841 we made cloud-init invoke 'initctl reload-
  configuration' after it wrote a upstart job.  This was necessary
  because inotify is not supported on all filesystems (overlayfs being
  the one of most current interst).

  This seems to be causing upstart some pain, and resulting in cloud-
  final (and 'rc') not being run.

  Easy user-data to reproduce the problem is:

  #cloud-config-archive
  - content: |
     #cloud-boothook
     #!/bin/sh
     touch /run/cloud-init-upstart-reload  # hack, see trunk commit 783
  - content: |
     #!/bin/sh
     echo " $(date -R): user-script run ===" | tee /run/user-script.log
  - content: |
     #upstart-job
     description "a test upstart job"
     start on stopped rc RUNLEVEL=[2345]
     console output
     task
     script
     echo " $(date -R): upstart job run ===" | tee /run/upstart-job.log
     end script

  You should (and do on quantal) end up with 2 files written to /run.

  I've verified that the same behavior

[Yahoo-eng-team] [Bug 1228649] Re: noVNC doesn't work when offloaded to port 80 or 443

2014-12-04 Thread Rolf Leggewie
raring has seen the end of its life and is no longer receiving any
updates. Marking the raring task for this ticket as "Won't Fix".

** Changed in: nova (Ubuntu Raring)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1228649

Title:
  noVNC doesn't work when offloaded to port 80 or 443

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) grizzly series:
  Won't Fix
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Raring:
  Won't Fix

Bug description:
  When offloading nova-novnc to port 80 or 443 the javascript code does
  not load the websockets code properly, and the page simply shows
  "Loading" in black text.

  The problem is due to the javascript using `window.location.port`
  which parses the browser's address bar.  This is always an empty
  string when the protocol is http or https.

  The noVNC project addressed this issue in the following patches.

  https://github.com/kanaka/noVNC/pull/245
  https://github.com/kanaka/noVNC/pull/252

  Would like to request a newer nova-novnc be built, or patch the
  existing package with the PR above, and backport to grizzly's UEC
  ppas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1228649/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244694] Re: [SRU] Creating snapshot fails due to nonexistent temporary directory

2014-12-04 Thread Rolf Leggewie
saucy has seen the end of its life and is no longer receiving any
updates. Marking the saucy task for this ticket as "Won't Fix".

** Changed in: libvirt (Ubuntu Saucy)
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244694

Title:
  [SRU] Creating snapshot fails due to nonexistent temporary directory

Status in OpenStack Compute (Nova):
  Invalid
Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Saucy:
  Won't Fix

Bug description:
   SRU Justification ---

  [Impact]

  In a libvirt-based OpenStack deployment, Nova fails to snapshot
  instances, failing with error:

  internal error: unable to execute QEMU command 'drive-mirror': Could
  not open
  
'/var/lib/nova/instances/snapshots/tmp5DdrIO/236df3e170e64fabaeb3c7601e2d6c47.delta'

  I had originally discovered this bug using the Tempest test suite
  while verifying an unrelated OpenStack SRU, but other users are
  experiencing this in the wild.

  [Test Case]

  Deploy an OpenStack cloud based on Ubuntu Saucy and OpenStack Havana,
  then attempt to snapshot a running instance.  The Tempest integration
  test suite contains a snapshot test case:
  
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image
  [1]

  [Regression Potential]

  The proposed fix is isolated to the libvirt packaging and simply
  appends an additional directory exception to the package's apparmor
  configuration, so that libvirt has appropriate access to the directory
  used during the process of snapshotting an instance.
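
  The sort of rule involved looks roughly like the line below; the exact
  path and permissions in the packaged fix may differ, so treat this as
  illustrative only:

      # excerpt from the libvirt apparmor profile/abstraction
      /var/lib/nova/instances/snapshots/** rw,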

  
  
[1]https://github.com/openstack/tempest/blob/master/tempest/api/compute/images/test_images_oneserver.py#L92


  --- Original Bug ---

  
  In some cases (not for all instances, just for some) the following error 
prevents creating the snapshot:

  2013-10-25 14:49:30.724 22980 AUDIT nova.compute.manager 
[req-6e9326d7-64df-40f7-bc81-190ec5234de2 657f1aca48d24eaf9655e0b77b2bc6d9 
35b2b08cc3f44a538cf3535043793a2a] [instance: 
db9c8a72-6ce2-41b7-8f7a-be0be8468667] instance snapshotting
  2013-10-25 14:49:30.944 22980 INFO nova.virt.libvirt.driver 
[req-6e9326d7-64df-40f7-bc81-190ec5234de2 657f1aca48d24eaf9655e0b77b2bc6d9 
35b2b08cc3f44a538cf3535043793a2a] [instance: 
db9c8a72-6ce2-41b7-8f7a-be0be8468667] Beginning live snapshot process
  2013-10-25 14:49:32.006 22980 INFO nova.virt.libvirt.driver 
[req-6e9326d7-64df-40f7-bc81-190ec5234de2 657f1aca48d24eaf9655e0b77b2bc6d9 
35b2b08cc3f44a538cf3535043793a2a] [instance: 
db9c8a72-6ce2-41b7-8f7a-be0be8468667] Snapshot extracted, beginning image upload
  2013-10-25 14:49:32.329 22980 ERROR nova.openstack.common.rpc.amqp 
[req-6e9326d7-64df-40f7-bc81-190ec5234de2 657f1aca48d24eaf9655e0b77b2bc6d9 
35b2b08cc3f44a538cf3535043793a2a] Exception during message handling
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 353, in 
decorated_function
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp 
payload)
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 243, in 
decorated_function
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 229, in 
decorated_function
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-25 14:49:32.329 22980 TRACE nova.openstack.comm

[Yahoo-eng-team] [Bug 1236439] Re: switch to use hostnames like nova breaks upgrades of l3-agent

2014-12-04 Thread Rolf Leggewie
saucy has seen the end of its life and is no longer receiving any
updates. Marking the saucy task for this ticket as "Won't Fix".

** Changed in: neutron (Ubuntu Saucy)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1236439

Title:
  switch to use hostnames like nova breaks upgrades of l3-agent

Status in Ubuntu Cloud Archive:
  Triaged
Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in Release Notes for Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Triaged
Status in neutron source package in Saucy:
  Won't Fix

Bug description:
  Commit
  
https://github.com/openstack/neutron/commit/140029ebd006c116ee684890dd70e13b7fc478ec
  switched to using socket.gethostname() for the name of neutron agents;
  this has the unfortunate side effect with the l3-agent that all router
  services are no longer scheduled on an active agent, resulting in
  floating ip and access outages.
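
  A quick illustration of why previously registered agent rows stop
  matching (the example hostnames are taken from the agent-list output
  below):

      import socket

      print(socket.gethostname())   # e.g. 'cofgod'
      print(socket.getfqdn())       # e.g. 'cofgod.1ss.qa.lexington'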

  Looks like this will affect upgrades from grizzly->havana as well:

  ubuntu@churel:/etc/maas$ quantum agent-list
  
+--++--+---++
  | id   | agent_type | host
 | alive | admin_state_up |
  
+--++--+---++
  | 02ad1175-209c-4125-889a-e390a15ecd50 | Open vSwitch agent | 
caipora.1ss.qa.lexington | xxx   | True   |
  | 191d4757-05f6-4170-a78d-d6a3c1b9265e | Open vSwitch agent | canaima 
 | :-)   | True   |
  | 306cbfbb-8879-4d64-ac26-db007f9113a9 | DHCP agent | 
cofgod.1ss.qa.lexington  | xxx   | True   |
  | 32081821-1e94-4274-993b-b0bf2714e5ac | Open vSwitch agent | 
ciguapa.1ss.qa.lexington | xxx   | True   |
  | 5697a23a-712e-4de3-a218-2a6c177bf555 | Open vSwitch agent | chakora 
 | :-)   | True   |
  | 5ea5e207-1da0-47e3-9a7e-984589b11300 | Open vSwitch agent | 
cuegle.1ss.qa.lexington  | xxx   | True   |
  | 71e31354-76e7-4640-9a5b-368678bc22d0 | Open vSwitch agent | 
canaima.1ss.qa.lexington | xxx   | True   |
  | 7267e3d2-d9bf-4e57-8d19-803aab636f36 | Open vSwitch agent | 
chakora.1ss.qa.lexington | xxx   | True   |
  | 75ff2563-f5a5-4df3-aa19-fe8310146c10 | Open vSwitch agent | cuegle  
 | :-)   | True   |
  | 875de52e-d6c3-4e82-8cbd-269831ff00bc | Open vSwitch agent | cofgod  
 | :-)   | True   |
  | 9afaf6f2-2756-4863-b5d0-7faba502e878 | L3 agent   | cofgod  
 | :-)   | True   |
  | a81ac370-a318-42e4-9279-eef2b6141644 | Open vSwitch agent | 
cofgod.1ss.qa.lexington  | xxx   | True   |
  | d6e6332e-822a-438e-8613-16013da825e0 | L3 agent   | 
cofgod.1ss.qa.lexington  | xxx   | True   |
  | d9712755-03b3-4326-99c1-3bf66c878dc6 | Open vSwitch agent | ciguapa 
 | :-)   | True   |
  | dadf284c-ac8f-4dc1-9ba4-73182e5f1911 | DHCP agent | cofgod  
 | :-)   | True   |
  | ed07ff1a-dcca-4bbd-b026-1296bb90f89b | Open vSwitch agent | caipora 
 | :-)   | True   |
  
+--++--+---++

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1236439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1156932] Re: User can't modify security-group-rule via nova-api if there are duplicated security group name

2014-12-04 Thread Rolf Leggewie
saucy has seen the end of its life and is no longer receiving any
updates. Marking the saucy task for this ticket as "Won't Fix".

** Changed in: python-novaclient (Ubuntu Saucy)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1156932

Title:
  User can't modify security-group-rule via nova-api if there are
  duplicated security group name

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Python client library for Nova:
  Triaged
Status in python-novaclient package in Ubuntu:
  Triaged
Status in python-novaclient source package in Raring:
  Won't Fix
Status in python-novaclient source package in Saucy:
  Won't Fix

Bug description:
  User can't modify security-group-rule via nova-api if there are
  duplicated security group name.

  When quantum security group is enabled in nova,
  nova admin user can't modify security group rule via nova-api.

  nova secgroup-list shows two default security groups.
  Both of them have the same name "default", so the CLI asks the user to
specify the security group id.

  But there appears to be no way to find the security group id via nova-api.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1156932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399546] [NEW] confusing code snippet when registers all standard API extensions

2014-12-04 Thread hougangliu
Public bug reported:

nova/api/openstack/extensions.py:def load_standard_extensions(


for dirpath, dirnames, filenames in os.walk(our_dir):
    # Compute the relative package name from the dirpath
    relpath = os.path.relpath(dirpath, our_dir)
    if relpath == '.':
        relpkg = ''
    else:
        relpkg = '.%s' % '.'.join(relpath.split(os.sep))

    # Now, consider each file in turn, only considering .py files
    for fname in filenames:
        root, ext = os.path.splitext(fname)

        # Skip __init__ and anything that's not .py
        if ext != '.py' or root == '__init__':
            continue

        # Try loading it
        classname = "%s%s" % (root[0].upper(), root[1:])
        classpath = ("%s%s.%s.%s" %
                     (package, relpkg, root, classname))

        if ext_list is not None and classname not in ext_list:
            logger.debug("Skipping extension: %s" % classpath)
            continue

        try:
            ext_mgr.load_extension(classpath)
        except Exception as exc:
            logger.warn(_('Failed to load extension %(classpath)s: '
                          '%(exc)s'),
                        {'classpath': classpath, 'exc': exc})

    # Now, let's consider any subdirectories we may have...
    subdirs = []
    for dname in dirnames:
        # Skip it if it does not have __init__.py
        if not os.path.exists(os.path.join(dirpath, dname, '__init__.py')):
            continue

        # If it has extension(), delegate...
        ext_name = "%s%s.%s.extension" % (package, relpkg, dname)
        try:
            ext = importutils.import_class(ext_name)
        except ImportError:
            # extension() doesn't exist on it, so we'll explore
            # the directory for ourselves
            subdirs.append(dname)
        else:
            try:
                ext(ext_mgr)
            except Exception as exc:
                logger.warn(_('Failed to load extension %(ext_name)s:'
                              '%(exc)s'),
                            {'ext_name': ext_name, 'exc': exc})

    # Update the list of directories we'll explore...
    dirnames[:] = subdirs

this is unused, so we can remove subdirs
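
For context, a standalone illustration of what an in-place assignment to
dirnames does inside an os.walk() loop (os.walk honours such changes and
only descends into the directories left in the list):

    import os

    for dirpath, dirnames, filenames in os.walk('.'):
        # keep only subdirectories that look like python packages
        dirnames[:] = [d for d in dirnames
                       if os.path.exists(os.path.join(dirpath, d,
                                                      '__init__.py'))]
        print(dirpath)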

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399546

Title:
  confusing code snippet when registers all standard API extensions

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova/api/openstack/extensions.py:def load_standard_extensions(

  
  for dirpath, dirnames, filenames in os.walk(our_dir):
  # Compute the relative package name from the dirpath
  relpath = os.path.relpath(dirpath, our_dir)
  if relpath == '.':
  relpkg = ''
  else:
  relpkg = '.%s' % '.'.join(relpath.split(os.sep))

  # Now, consider each file in turn, only considering .py files
  for fname in filenames:
  root, ext = os.path.splitext(fname)

  # Skip __init__ and anything that's not .py
  if ext != '.py' or root == '__init__':
  continue

  # Try loading it
  classname = "%s%s" % (root[0].upper(), root[1:])
  classpath = ("%s%s.%s.%s" %
   (package, relpkg, root, classname))

  if ext_list is not None and classname not in ext_list:
  logger.debug("Skipping extension: %s" % classpath)
  continue

  try:
  ext_mgr.load_extension(classpath)
  except Exception as exc:
  logger.warn(_('Failed to load extension %(classpath)s: '
'%(exc)s'),
  {'classpath': classpath, 'exc': exc})

  # Now, let's consider any subdirectories we may have...
  subdirs = []
  for dname in dirnames:
  # Skip it if it does not have __init__.py
  if not os.path.exists(os.path.join(dirpath, dname, 
'__init__.py')):
  continue

  # If it has extension(), delegate...
  ext_name = "%s%s.%s.extension" % (package, relpkg, dname)
  try:
  ext = importutils.import_class(ext_name)
  except ImportError:
  # extension() doesn't exist on it, so we'll explore
  # the directory for ourselves
  subdirs.append(dname)
  else:
  try:
  ext(ext_mgr)
  except Exception as exc:
  logger.warn(_('Failed to load extension %(ext_na