[Yahoo-eng-team] [Bug 1469661] [NEW] 'volumes_attached' included the volume_id which is actually available

2015-06-29 Thread Xiang Hui
Public bug reported:

[Env]
Ubuntu 14.04
OpenStack Icehouse


[Description]
I am using pdb to debug the nova attach_volume operation. Because the
command timed out, the volume failed to attach to the instance; however,
in the nova db the attachment device is already recorded rather than
rolled back, which is completely wrong from the user's perspective.

For example, nova instance '1' shows
| os-extended-volumes:volumes_attached | [{"id": "3c8205b9-5066-42ea-9180-601fac50a08e"}, {"id": "3c8205b9-5066-42ea-9180-601fac50a08e"}, {"id": "3c8205b9-5066-42ea-9180-601fac50a08e"}, {"id": "3c8205b9-5066-42ea-9180-601fac50a08e"}] |
even if the volume 3c8205b9-5066-42ea-9180-601fac50a08e is actually
available.

I am concerned that there are other situations where nova attach_volume
could fail mid-procedure and hit this issue as well. Would it be better
to delay the db persistence step until the device has actually been
attached?
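
A minimal sketch of the suggested reordering (illustrative only; the
helper names below are assumptions, not Nova's actual code):

    def attach_volume(instance, volume):
        # Problematic order: persisting the attachment first leaves a
        # stale DB row behind if the hypervisor attach times out.
        # Suggested order: attach first, persist only on success.
        hypervisor_attach(instance, volume)   # may raise on timeout
        # Only reached once the device is really attached.
        db_record_attachment(instance.uuid, volume.id)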

ubuntu@xianghui-bastion:~/openstack-charm-testing/test$ nova list
+--------------------------------------+------+--------+------------+-------------+----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks             |
+--------------------------------------+------+--------+------------+-------------+----------------------+
| d58a3b25-0434-4b92-a3a8-8b4188c611c3 | 1    | ACTIVE | -          | Running     | private=192.168.21.4 |
+--------------------------------------+------+--------+------------+-------------+----------------------+
ubuntu@xianghui-bastion:~/openstack-charm-testing/test$ nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 3c8205b9-5066-42ea-9180-601fac50a08e | available | test         | 2    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
ubuntu@xianghui-bastion:~/openstack-charm-testing/test$ nova show 1
+--------------------------------------+------------------------------------------+
| Property                             | Value                                    |
+--------------------------------------+------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                   |
| OS-EXT-AZ:availability_zone          | nova                                     |
| OS-EXT-SRV-ATTR:host                 | juju-xianghui-machine-12                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | juju-xianghui-machine-12.openstacklocal  |
| OS-EXT-SRV-ATTR:instance_name        | instance-0002                            |
| OS-EXT-STS:power_state               | 1                                        |
| OS-EXT-STS:task_state                | -                                        |
| OS-EXT-STS:vm_state                  | active                                   |
| OS-SRV-USG:launched_at               | 2015-06-29T05:22:09.00                   |

[Yahoo-eng-team] [Bug 1469655] [NEW] VMware: Instance creation fails using block device mapping

2015-06-29 Thread Chinmaya Bharadwaj
Public bug reported:

2015-06-29 14:24:49.211 DEBUG oslo_vmware.exceptions [-] Fault InvalidDatastorePath not matched. from (pid=21558) get_fault_class /usr/local/lib/python2.7/dist-packages/oslo_vmware/exceptions.py:250
2015-06-29 14:24:49.212 ERROR oslo_vmware.common.loopingcall [-] in fixed duration looping call
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall Traceback (most recent call last):
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall     self.f(*self.args, **self.kw)
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 417, in _poll_task
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall     raise task_ex
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall VMwareDriverException: Invalid datastore path '[localdatastore] volume-c279ad39-f1f9-4861-9d00-2de8f6df7756/volume-c279ad39-f1f9-4861-9d00-2de8f6df7756.vmdk'.
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall
2015-06-29 14:24:49.212 ERROR nova.compute.manager [-] [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] Instance failed to spawn
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] Traceback (most recent call last):
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/compute/manager.py", line 2442, in _build_resources
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     yield resources
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/compute/manager.py", line 2314, in _build_and_run_instance
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     block_device_info=block_device_info)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 480, in spawn
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     admin_password, network_info, block_device_info)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 628, in spawn
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     instance, adapter_type)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 371, in attach_volume
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     self._attach_volume_vmdk(connection_info, instance, adapter_type)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 330, in _attach_volume_vmdk
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     vmdk_path=vmdk.path)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 71, in attach_disk_to_vm
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     vm_util.reconfigure_vm(self._session, vm_ref, vmdk_attach_config_spec)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1377, in reconfigure_vm
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     session._wait_for_task(reconfig_task)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 680, in _wait_for_task
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     return self.wait_for_task(task_ref)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 380, in wait_for_task
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]     return evt.wait()
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215]   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance:

[Yahoo-eng-team] [Bug 1468698] Re: Image-update api returns 500 while passing --min-ram and --min-disk greater than 2^(31) max value

2015-06-29 Thread Mike Fedosin
** This bug is no longer a duplicate of bug 1460060
   Glance v1 and v2 api returns 500 while passing --min-ram and --min-disk 
greater than 2^(31) max value

** Changed in: glance
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1468698

Title:
  Image-update api returns 500 while passing --min-ram and --min-disk
  greater than 2^(31) max value

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed

Bug description:
  $ glance image-update b3886698-04c3-4621-9a04-4a587d3288d1 --min-ram 234578
  HTTPInternalServerError (HTTP 500)

  $ glance image-update b3886698-04c3-4621-9a04-4a587d3288d1 --min-disk 234578
  HTTPInternalServerError (HTTP 500)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1468698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469734] [NEW] neutron.tests.functional.test_tools.TestSafeFixture.test_error_after_root_setup fails due to new fixture 1.3.0 release

2015-06-29 Thread Jakub Libosvar
Public bug reported:

Today, June 29th, fixtures 1.3.0 was released; it introduced commit
https://github.com/testing-cabal/fixtures/commit/354acf568aa86bb7d43a01c23d73c750f601b335,
which causes our new SafeFixture to fail the test named in the summary.

ft1.158: neutron.tests.functional.test_tools.TestSafeFixture.test_error_after_root_setup(testtools useFixture)_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

traceback-1: {{{
Traceback (most recent call last):
  File "neutron/tests/functional/test_tools.py", line 73, in test_error_after_root_setup
    self.assertRaises(ValueError, self.parent.useFixture, fixture)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 422, in assertRaises
    self.assertThat(our_callable, matcher)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
    mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 483, in _matchHelper
    mismatch = matcher.match(matchee)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
    mismatch = self.exception_matcher.match(exc_info)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
    mismatch = matcher.match(matchee)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 414, in match
    reraise(*matchee)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
    result = matchee()
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 969, in __call__
    return self._callable_object(*self._args, **self._kwargs)
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 670, in useFixture
    gather_details(fixture.getDetails(), self.getDetails())
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/fixture.py", line 169, in getDetails
    result = dict(self._details)
TypeError: 'NoneType' object is not iterable
}}}

Traceback (most recent call last):
  File "neutron/tests/tools.py", line 33, in <lambda>
    self.setUp = lambda: self.safe_setUp(unsafe_setup)
  File "neutron/tests/tools.py", line 46, in safe_setUp
    self.safe_cleanUp()
  File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "neutron/tests/tools.py", line 43, in safe_setUp
    unsafe_setup()
  File "neutron/tests/functional/test_tools.py", line 43, in setUp
    raise ValueError
ValueError

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469734

Title:
  
neutron.tests.functional.test_tools.TestSafeFixture.test_error_after_root_setup
  fails due to new fixture 1.3.0 release

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Today, June 29th, fixtures 1.3.0 was released; it introduced commit
  https://github.com/testing-cabal/fixtures/commit/354acf568aa86bb7d43a01c23d73c750f601b335,
  which causes our new SafeFixture to fail the test named in the summary.

  ft1.158: neutron.tests.functional.test_tools.TestSafeFixture.test_error_after_root_setup(testtools useFixture)_StringException: Empty attachments:
    pythonlogging:''
    pythonlogging:'neutron.api.extensions'
    stderr
    stdout

  traceback-1: {{{
  Traceback (most recent call last):
    File "neutron/tests/functional/test_tools.py", line 73, in test_error_after_root_setup
      self.assertRaises(ValueError, self.parent.useFixture, fixture)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 422, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
      mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 483, in _matchHelper
      mismatch = matcher.match(matchee)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
      mismatch = self.exception_matcher.match(exc_info)

[Yahoo-eng-team] [Bug 1469668] [NEW] Add a neutron port-scheduler component

2015-06-29 Thread Kevin Benton
Public bug reported:

Both the get-me-a-network spec and the use cases laid out by several
large deployers (Neutron networks limited to a single rack or other
subset of datacenter) could benefit from a port scheduler. This port
scheduler would allow Nova (or another caller) to request a port via
port create without providing a network ID. The port scheduler would
then populate the appropriate network ID depending on the request
details and port creation would continue as normal.

In lieu of the network ID, the client can pass optional hints to
constrain the network selection (e.g. an external network that the
network can reach). If the client doesn't pass any hints, this would
become the 'get-me-a-network' use case where it's entirely up to
Neutron.

In order to satisfy the use case where not all Neutron networks are
available everywhere, this scheduler should also expose an API that
allows a Nova scheduling filter to be written that can ask Neutron which
hosts can be used for the Neutron port details it was given.
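
A hypothetical sketch of the two request shapes this would allow (the
'scheduling_hints' field name is an illustrative assumption, not an
approved Neutron API):

    # 1. "get-me-a-network": no network_id at all; the port scheduler
    #    picks an appropriate network for the caller.
    port_request = {'port': {'tenant_id': 'abc123'}}

    # 2. Constrained: hints instead of a network_id, e.g. the selected
    #    network must be able to reach a given external network.
    port_request_with_hints = {
        'port': {
            'tenant_id': 'abc123',
            'scheduling_hints': {
                'reachable_external_network': 'ext-net-uuid',
            },
        },
    }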

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469668

Title:
  Add a neutron port-scheduler component

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Both the get-me-a-network spec and the use cases laid out by several
  large deployers (Neutron networks limited to a single rack or other
  subset of datacenter) could benefit from a port scheduler. This port
  scheduler would allow Nova (or another caller) to request a port via
  port create without providing a network ID. The port scheduler would
  then populate the appropriate network ID depending on the request
  details and port creation would continue as normal.

  In lieu of the network ID, the client can pass optional hints to
  constrain the network selection (e.g. an external network that the
  network can reach). If the client doesn't pass any hints, this would
  become the 'get-me-a-network' use case where it's entirely up to
  Neutron.

  In order to satisfy the use case where not all Neutron networks are
  available everywhere, this scheduler should also expose an API that
  allows a Nova scheduling filter to be written that can ask Neutron
  which hosts can be used for the Neutron port details it was given.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469651] [NEW] Boolean values displayed without 'yesno' filter in user and project detail page

2015-06-29 Thread Masco Kaliyamoorthy
Public bug reported:

In the Project and User detail pages, the 'enabled' field is a boolean.
It is displayed as-is (True and False); it should be displayed as Yes
and No.
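
A minimal sketch of what Django's built-in 'yesno' template filter does
(field and argument values here are illustrative):

    from django.template.defaultfilters import yesno

    print(yesno(True, 'Yes,No'))    # Yes
    print(yesno(False, 'Yes,No'))   # No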

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1469651

Title:
  Boolean values displayed without 'yesno' filter in user and project
  detail page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Project and User detail pages, the 'enabled' field is a boolean.
  It is displayed as-is (True and False); it should be displayed as Yes
  and No.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1469651/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456192] Re: Extend glance-manage with new version command

2015-06-29 Thread Kamil Rykowski
As Stuart noted on the review - we already provide such functionality by
using:

glance-manage --version

We don't need to provide it as a separate command.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1456192

Title:
  Extend glance-manage with new version command

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Currently there is no way to check what version of Glance is installed
  using the CLI. It would be great if we can add this information to the
  current glance-manage command (just like Cinder or Nova for example).

  Something like:

  $ glance-manage version
  2015.2.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1456192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468395] Re: Versions of oslo.i18n higher than 1.17.0 cause ImportError

2015-06-29 Thread Doug Hellmann
** Changed in: oslo.i18n
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1468395

Title:
  Versions of oslo.i18n higher than 1.17.0 cause ImportError

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in Oslo Internationalization Library:
  Invalid
Status in Python client library for Keystone:
  Confirmed

Bug description:
  oslo.i18n version 2.0.0 was released 24h ago, and it breaks Horizon.

  If I pin the version of oslo.i18n to <2.0.0, it works.

  Here's the trace from tests script:

  ➜  ~  git clone git@github.com:openstack/horizon.git
  Cloning into 'horizon'...
  remote: Counting objects: 97245, done.
  remote: Compressing objects: 100% (9/9), done.
  remote: Total 97245 (delta 3), reused 0 (delta 0), pack-reused 97236
  Receiving objects: 100% (97245/97245), 147.90 MiB | 4.73 MiB/s, done.
  Resolving deltas: 100% (62610/62610), done.
  Checking connectivity... done.
  ➜  ~  cd horizon 
  ➜  horizon git:(master) git checkout -t origin/stable/juno
  Branch stable/juno set up to track remote branch stable/juno from origin.
  Switched to a new branch 'stable/juno'
  ➜  horizon git:(stable/juno) ./run_tests.sh 
  Checking environment.
  Environment not found. Install? (Y/n) Y
  Fetching new src packages...
  Creating venv... done.
  Installing dependencies with pip (this can take a while)...
  You are using pip version 7.0.1, however version 7.0.3 is available.
  You should consider upgrading via the 'pip install --upgrade pip' command.
  DEPRECATION: --download-cache has been deprecated and will be removed in the future. Pip now automatically uses and configures its cache.
  Collecting pip>=1.4
    Using cached pip-7.0.3-py2.py3-none-any.whl
  Installing collected packages: pip
    Found existing installation: pip 7.0.1
      Uninstalling pip-7.0.1:
        Successfully uninstalled pip-7.0.1
  Successfully installed pip-7.0.3
  DEPRECATION: --download-cache has been deprecated and will be removed in the future. Pip now automatically uses and configures its cache.
  Collecting setuptools
    Using cached setuptools-18.0-py2.py3-none-any.whl
  Installing collected packages: setuptools
    Found existing installation: setuptools 16.0
      Uninstalling setuptools-16.0:
        Successfully uninstalled setuptools-16.0
  Successfully installed setuptools-18.0
  DEPRECATION: --download-cache has been deprecated and will be removed in the future. Pip now automatically uses and configures its cache.
  Collecting pbr
    Using cached pbr-1.2.0-py2.py3-none-any.whl
  Installing collected packages: pbr
  Successfully installed pbr-1.2.0
  DEPRECATION: --download-cache has been deprecated and will be removed in the future. Pip now automatically uses and configures its cache.
  Collecting pbr!=0.7,<1.0,>=0.6 (from -r /home/javier/horizon/requirements.txt (line 1))
    Using cached pbr-0.11.0-py2.py3-none-any.whl
  Collecting Django<1.7,>=1.4.2 (from -r /home/javier/horizon/requirements.txt (line 2))
    Using cached Django-1.6.11-py2.py3-none-any.whl
  Collecting django-compressor<=1.4,>=1.4 (from -r /home/javier/horizon/requirements.txt (line 3))
    Using cached django_compressor-1.4-py2.py3-none-any.whl
  Collecting django-openstack-auth!=1.1.8,<=1.1.9,>=1.1.7 (from -r /home/javier/horizon/requirements.txt (line 4))
    Using cached django_openstack_auth-1.1.9-py2-none-any.whl
  Collecting django-pyscss<=1.0.6,>=1.0.3 (from -r /home/javier/horizon/requirements.txt (line 5))
  Collecting eventlet<=0.15.2,>=0.15.1 (from -r /home/javier/horizon/requirements.txt (line 6))
    Using cached eventlet-0.15.2-py2.py3-none-any.whl
  Collecting httplib2<=0.9,>=0.7.5 (from -r /home/javier/horizon/requirements.txt (line 7))
  Collecting iso8601<=0.1.10,>=0.1.9 (from -r /home/javier/horizon/requirements.txt (line 8))
  Collecting kombu<=3.0.7,>=2.5.0 (from -r /home/javier/horizon/requirements.txt (line 9))
  Collecting lockfile<=0.8,>=0.8 (from -r /home/javier/horizon/requirements.txt (line 10))
  Collecting netaddr<=0.7.13,>=0.7.12 (from -r /home/javier/horizon/requirements.txt (line 11))
    Using cached netaddr-0.7.13-py2.py3-none-any.whl
  Collecting pyScss<1.3,>=1.2.1 (from -r /home/javier/horizon/requirements.txt (line 12))
  Collecting python-ceilometerclient<1.0.13,>=1.0.6 (from -r /home/javier/horizon/requirements.txt (line 13))
    Using cached python_ceilometerclient-1.0.12-py2.py3-none-any.whl
  Collecting python-cinderclient<=1.1.1,>=1.1.0 (from -r /home/javier/horizon/requirements.txt (line 14))
    Using cached python_cinderclient-1.1.1-py2.py3-none-any.whl
  Collecting python-glanceclient<=0.15.0,>=0.14.0 (from -r /home/javier/horizon/requirements.txt (line 15))
    Using cached python_glanceclient-0.15.0-py2.py3-none-any.whl
  Collecting python-heatclient<0.3.0,>=0.2.9 (from -r /home/javier/horizon/requirements.txt (line 16))
    Using cached

[Yahoo-eng-team] [Bug 1464461] Re: delete action always cause error ( in kilo)

2015-06-29 Thread Jeremy Stanley
Can someone provide a justification explaining why this bug was switched
to a suspected security vulnerability report? What is the exploit
scenario and associated risk?

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464461

Title:
  delete action always cause error ( in kilo)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  When I perform any delete action (delete router, delete network, etc.)
  in a Japanese environment, I always get an error page.

  horizon error logs:
  -
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
      response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 71, in view
      return self.dispatch(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 89, in dispatch
      return handler(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 223, in post
      return self.get(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 159, in get
      handled = self.construct_tables()
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 150, in construct_tables
      handled = self.handle_table(table)
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 125, in handle_table
      handled = self._tables[name].maybe_handle()
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1640, in maybe_handle
      return self.take_action(action_name, obj_id)
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1482, in take_action
      response = action.multiple(self, self.request, obj_ids)
    File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 302, in multiple
      return self.handle(data_table, request, object_ids)
    File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 828, in handle
      exceptions.handle(request, ignore=ignore)
    File "/usr/lib/python2.7/site-packages/horizon/exceptions.py", line 364, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 817, in handle
      (self._get_action_name(past=True), datum_display))
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0: ordinal not in range(128)
  -

  It occurs in Japanese, Korean, Chinese, French, and German; it does
  not occur in English or Spanish.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469749] [NEW] RamFilter logging partially considers ram-allocation-ratio

2015-06-29 Thread Alvaro Uría
Public bug reported:

Package: nova-scheduler
Version: 1:2014.1.4-0ubuntu2.1

RamFilter correctly skips a host when its RAM is not enough for the
requested VM. However, the log should be more explicit about the
numbers, taking into account that ram_allocation_ratio can differ from
1.0.

Log excerpt:
2015-06-29 12:04:21.422 15708 DEBUG nova.scheduler.filters.ram_filter [req-d14d9f04-c2b1-42be-b5b9-669318bb0030 3cca8ee6898e42f287adbd4f5dac1801 a0ae7f82f577413ab0d73f3dc09fb906] (hostname, hostname.tld) ram:10148 disk:264192 io_ops:0 instances:39 does not have 2048 MB usable ram, it only has 480.4 MB usable ram. host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/ram_filter.py:60

In the log above, RAM says 10148 (MB), which seems enough for a 2048 MB
VM. The first number (10148) is calculated as: TotalMB - UsedMB. The
additional (real) number should be: TotalMB * RamAllocRatio - UsedMB.

In this case ram_allocation_ratio is 0.9, which results in 480.4 MB.
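
A minimal sketch of the two computations (not the actual RamFilter code;
the total/used values are back-derived from the log excerpt above):

    total_mb = 96676.0
    used_mb = 86528.0
    ram_allocation_ratio = 0.9

    free_mb = total_mb - used_mb                   # 10148.0, the "ram:" value logged
    usable_mb = (total_mb * ram_allocation_ratio
                 - used_mb)                        # 480.4, the real headroom

    requested_mb = 2048
    print(usable_mb >= requested_mb)               # False -> host is skipped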

Please let me know if you'd need more details.

Cheers,
-Alvaro.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469749

Title:
  RamFilter logging partially considers ram-allocation-ratio

Status in OpenStack Compute (Nova):
  New

Bug description:
  Package: nova-scheduler
  Version: 1:2014.1.4-0ubuntu2.1

  RamFilter correctly skips a host when its RAM is not enough for the
  requested VM. However, the log should be more explicit about the
  numbers, taking into account that ram_allocation_ratio can differ from
  1.0.

  Log excerpt:
  2015-06-29 12:04:21.422 15708 DEBUG nova.scheduler.filters.ram_filter [req-d14d9f04-c2b1-42be-b5b9-669318bb0030 3cca8ee6898e42f287adbd4f5dac1801 a0ae7f82f577413ab0d73f3dc09fb906] (hostname, hostname.tld) ram:10148 disk:264192 io_ops:0 instances:39 does not have 2048 MB usable ram, it only has 480.4 MB usable ram. host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/ram_filter.py:60

  In the log above, RAM says 10148 (MB), which seems enough for a 2048 MB
  VM. The first number (10148) is calculated as: TotalMB - UsedMB. The
  additional (real) number should be: TotalMB * RamAllocRatio - UsedMB.

  In this case ram_allocation_ratio is 0.9, which results in 480.4 MB.

  Please let me know if you'd need more details.

  Cheers,
  -Alvaro.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1469749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461728] Re: V2.0 API not calling defined external auth

2015-06-29 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461728

Title:
  V2.0 API not calling defined external auth

Status in OpenStack Identity (Keystone):
  Won't Fix
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  When keystone.conf defines an external auth method, V2.0 API calls do
  not get intercepted by the defined external auth plugin.

  this is my keystone.conf
  [auth]
  methods=password,token,external
  external=keystone.auth.plugins.idm_external.IDMDefaultDomain

  V2.0 curl to initiate external auth:
  curl -X POST -d '{"auth": {}}' -H "Content-type: application/json" -H "REMOTE_USER: admin" http://localhost:5000/v2.0/tokens

  What I'm seeing is that the call gets to keystone/token/controllers.py,
  where it checks for the auth {} body and executes the external
  authentication:

  if "token" in auth:
      # Try to authenticate using a token
      auth_info = self._authenticate_token(
          context, auth)
  else:
      # Try external authentication
      try:
          auth_info = self._authenticate_external(
              context, auth)
      except ExternalAuthNotApplicable:
          # Try local authentication
          auth_info = self._authenticate_local(
              context, auth)

  ...
  def _authenticate_external(self, context, auth):
      """Try to authenticate an external user via REMOTE_USER variable.

      Returns auth_token_data, (user_ref, tenant_ref, metadata_ref)
      """
      if 'REMOTE_USER' not in context.get('environment', {}):
          raise ExternalAuthNotApplicable()

      #NOTE(jamielennox): xml and json differ and get confused about what
      # empty auth should look like so just reset it.
      if not auth:
          auth = {}

      username = context['environment']['REMOTE_USER']
      try:
          user_ref = self.identity_api.get_user_by_name(
              username, CONF.identity.default_domain_id)
          user_id = user_ref['id']
      except exception.UserNotFound as e:
          raise exception.Unauthorized(e)

      metadata_ref = {}
      tenant_id = self._get_project_id_from_auth(auth)
      tenant_ref, metadata_ref['roles'] = self._get_project_roles_and_ref(
          user_id, tenant_id)

      expiry = core.default_expire_time()
      bind = None
      if ('kerberos' in CONF.token.bind and
              context['environment'].
              get('AUTH_TYPE', '').lower() == 'negotiate'):
          bind = {'kerberos': username}

      return (user_ref, tenant_ref, metadata_ref, expiry, bind)

  _authenticate_external should not assume its own REMOTE_USER
  implementation; instead it should look up the external method defined
  in keystone.conf and call the defined external class.

  The V3 call works fine and calls the right external class defined:
  curl -X POST -d '{"auth": {"identity": {"methods": ["external"], "external": {}}}}' -H "REMOTE_USER: admin" -H "Content-type: application/json" http://localhost:5000/v3/auth/tokens

  This is potentially a security hole as well, since it allows any V2
  API client to get a Keystone token without a password.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469754] [NEW] tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes fails with SSHTimeout

2015-06-29 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/23/193223/3/check/check-tempest-dsvm-nova-v21-full/5f73178/console.html.gz#_2015-06-26_23_49_24_797

2015-06-26 23:49:24.797 | tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes[id-ab836c29-737b-4101-9fb9-87045eaf89e9]
2015-06-26 23:49:24.797 | 
2015-06-26 23:49:24.797 | 
2015-06-26 23:49:24.798 | Captured traceback:
2015-06-26 23:49:24.798 | ~~~
2015-06-26 23:49:24.798 | Traceback (most recent call last):
2015-06-26 23:49:24.798 |   File "tempest/thirdparty/boto/test_ec2_instance_run.py", line 338, in test_compute_with_volumes
2015-06-26 23:49:24.798 |     wait.state_wait(_part_state, 'INCREASE')
2015-06-26 23:49:24.798 |   File "tempest/thirdparty/boto/utils/wait.py", line 36, in state_wait
2015-06-26 23:49:24.798 |     old_status = status = lfunction()
2015-06-26 23:49:24.799 |   File "tempest/thirdparty/boto/test_ec2_instance_run.py", line 330, in _part_state
2015-06-26 23:49:24.799 |     current = ssh.get_partitions().split('\n')
2015-06-26 23:49:24.799 |   File "tempest/common/utils/linux/remote_client.py", line 82, in get_partitions
2015-06-26 23:49:24.799 |     output = self.exec_command(command)
2015-06-26 23:49:24.799 |   File "tempest/common/utils/linux/remote_client.py", line 56, in exec_command
2015-06-26 23:49:24.799 |     return self.ssh_client.exec_command(cmd)
2015-06-26 23:49:24.800 |   File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py", line 111, in exec_command
2015-06-26 23:49:24.800 |     ssh = self._get_ssh_connection()
2015-06-26 23:49:24.800 |   File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py", line 87, in _get_ssh_connection
2015-06-26 23:49:24.800 |     password=self.password)
2015-06-26 23:49:24.800 | tempest_lib.exceptions.SSHTimeout: Connection to the 172.24.5.1 via SSH timed out.
2015-06-26 23:49:24.800 | User: cirros, Password: None

This is a job with nova-network, and in tracing through the calls we
clearly need more debug logging in this path to confirm that security
groups are associated with the instance when the sg rules are refreshed.

nw_info and instance uuid for the instance:

2015-06-26 23:49:24.883 | === network info ===
2015-06-26 23:49:24.883 | if-info: lo,up,127.0.0.1,8,::1
2015-06-26 23:49:24.883 | if-info: 
eth0,up,10.1.0.2,20,fe80::f816:3eff:fefd:78bd
2015-06-26 23:49:24.883 | ip-route:default via 10.1.0.1 dev eth0 
2015-06-26 23:49:24.883 | ip-route:10.1.0.0/20 dev eth0  src 10.1.0.2 
2015-06-26 23:49:24.883 | === datasource: configdrive local ===
2015-06-26 23:49:24.884 | instance-id: 7377fb75-089e-4a7f-aa13-42073ca4d981
2015-06-26 23:49:24.884 | name: Server 7377fb75-089e-4a7f-aa13-42073ca4d981
2015-06-26 23:49:24.884 | availability-zone: test_az-1643222956
2015-06-26 23:49:24.884 | local-hostname: 
server-7377fb75-089e-4a7f-aa13-42073ca4d981.novalocal
2015-06-26 23:49:24.884 | launch-index: 0

We see that the fixed IP is associated with the instance here:

http://logs.openstack.org/23/193223/3/check/check-tempest-dsvm-nova-v21-full/5f73178/logs/screen-n-net.txt.gz#_2015-06-26_23_41_39_500

The request ID is req-3b037057-5783-4160-a320-248eb4f2e724.

We see it refresh security groups and there are several messages about
skipping duplicate iptables rule additions:

2015-06-26 23:41:39.685 DEBUG nova.network.linux_net [req-3b037057-5783-4160-a320-248eb4f2e724 InstanceRunTest-1951055199 InstanceRunTest-702105106] Skipping duplicate iptables rule addition. [0:0] -A nova-network-snat -s 10.1.0.0/20 -d 0.0.0.0/0 -j SNAT --to-source 10.0.4.130 -o br100 already in [[0:0] -A PREROUTING -j nova-network-PREROUTING, [0:0] -A OUTPUT -j nova-network-OUTPUT, [0:0] -A POSTROUTING -j nova-network-POSTROUTING, [0:0] -A POSTROUTING -j nova-postrouting-bottom, [0:0] -A nova-postrouting-bottom -j nova-network-snat, [0:0] -A nova-network-snat -j nova-network-float-snat, [0:0] -A nova-network-PREROUTING -s 0.0.0.0/0 -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.4.130:8775, [0:0] -A nova-network-snat -s 10.1.0.0/20 -d 0.0.0.0/0 -j SNAT --to-source 10.0.4.130 -o br100, [0:0] -A nova-network-POSTROUTING -s 10.1.0.0/20 -d 10.0.4.130/32 -j ACCEPT, [0:0] -A nova-network-POSTROUTING -s 10.1.0.0/20 -d 10.1.0.0/20 -m conntrack ! --ctstate DNAT -j ACCEPT] add_rule /opt/stack/new/nova/nova/network/linux_net.py:285

2015-06-26 23:41:39.685 DEBUG nova.network.linux_net [req-3b037057-5783-4160-a320-248eb4f2e724 InstanceRunTest-1951055199 InstanceRunTest-702105106] Skipping apply due to lack of new rules apply /opt/stack/new/nova/nova/network/linux_net.py:444

I'm sure this is probably a duplicate, maybe of bug 1355573, 

[Yahoo-eng-team] [Bug 1395122] Re: ML2 Cisco Nexus MD: Fail Cfg VLAN when none exists

2015-06-29 Thread Carol Bouchard
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1395122

Title:
  ML2 Cisco Nexus MD: Fail Cfg VLAN when none exists

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  This is a fix for a regression introduced by the commit for bug
  #1330597. Bug #1330597 expected an error to be returned when the CLI
  'switchport trunk allowed vlan add' is applied. It turns out, though,
  that not all Nexus switches return an error. The fix is to perform a
  'get interface' to determine whether 'switchport trunk allowed vlan'
  already exists. If it does, use the 'add' keyword with 'switchport
  trunk allowed vlan'; otherwise leave it out.
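
  A minimal sketch of the described logic (illustrative, not the actual
  Cisco Nexus mechanism driver code):

    def build_trunk_vlan_cli(interface_config, vlan_id):
        # Use the 'add' keyword only when a trunk allowed-vlan list is
        # already configured on the interface; otherwise create it.
        if 'switchport trunk allowed vlan' in interface_config:
            return 'switchport trunk allowed vlan add %s' % vlan_id
        return 'switchport trunk allowed vlan %s' % vlan_id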

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1395122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468828] Re: HA router-create breaks ML2 drivers that implement create_network such as Arista

2015-06-29 Thread Kyle Mestery
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Importance: Undecided => Medium

** Changed in: neutron/kilo
   Status: New => In Progress

** Changed in: neutron/kilo
 Assignee: (unassigned) => Sukhdev Kapur (sukhdev-8)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468828

Title:
  HA router-create  breaks ML2 drivers that implement create_network
  such as Arista

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  In Progress

Bug description:
  This issue was discovered with Arista ML2 driver, when an HA router
  was created. However, this will impact any ML2 driver that implements
  create_network.

  When an admin creates an HA router (neutron router-create --ha), the HA
  framework invokes network_create() and sets tenant-id to '' (the empty string).
  The network_create() ML2 mech driver API expects tenant-id to be set to a valid ID.
  Any ML2 driver that relies on tenant-id will fail or reject the network_create()
  request, causing router-create to fail.
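
  A minimal sketch of why the empty string breaks such drivers
  (illustrative, not the Arista driver code):

    def create_network_postcommit(network):
        tenant_id = network['tenant_id']
        if not tenant_id:
            # The HA framework passed tenant_id='', so any backend
            # lookup keyed on the tenant fails and the request is
            # rejected, which in turn fails router-create.
            raise ValueError('tenant_id must be a valid ID, got %r'
                             % tenant_id)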

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469817] [NEW] Glance doesn't handle exceptions from glance_store

2015-06-29 Thread Mike Fedosin
Public bug reported:

Server API expects to catch exception declared at
glance/common/exception.py, but actually risen exceptions have the same
names but are declared at different module, glance_store/exceptions.py
and thus are never caught.

For example, If exception is raised here:
https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/_drivers/rbd.py#L316
, it will never be caught here
https://github.com/openstack/glance/blob/stable/kilo/glance/api/v1/images.py#L1107
, because first one is instance of
https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/exceptions.py#L198
, but Glance waits for
https://github.com/openstack/glance/blob/stable/kilo/glance/common/exception.py#L293

There are many cases of that issue. The investigation continues.
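
A minimal, self-contained sketch of the mismatch (class names shortened;
not the actual Glance code):

    class NotFound(Exception):            # glance.common.exception.NotFound
        pass

    class StoreNotFound(Exception):       # glance_store.exceptions.NotFound:
        pass                              # same *name* upstream, different type

    def delete_image_data():
        raise StoreNotFound('image data missing')

    try:
        delete_image_data()
    except NotFound:
        print('handled')                  # never reached: except clauses
                                          # match types, not class names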

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1469817

Title:
  Glance doesn't handle exceptions from glance_store

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The server API expects to catch exceptions declared in
  glance/common/exception.py, but the exceptions actually raised have the
  same names while being declared in a different module,
  glance_store/exceptions.py, and thus are never caught.

  For example, if an exception is raised here:
  https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/_drivers/rbd.py#L316
  it will never be caught here:
  https://github.com/openstack/glance/blob/stable/kilo/glance/api/v1/images.py#L1107
  because the former is an instance of
  https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/exceptions.py#L198
  while Glance waits for
  https://github.com/openstack/glance/blob/stable/kilo/glance/common/exception.py#L293

  There are many cases of this issue. The investigation continues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1469817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430681] Re: object value errors do not all indicate which field was involved

2015-06-29 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.versionedobjects
   Status: Fix Committed => Fix Released

** Changed in: oslo.versionedobjects
 Milestone: None => 0.5.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430681

Title:
  object value errors do not all indicate which field was involved

Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo Versioned Objects:
  Fix Released

Bug description:
  When a value is assigned to an object field it is type checked in the
  coerce() method for the field and a ValueError exception is raised if
  it is not of the appropriate type. The name of the field involved in
  the check is known in the coerce() method, but in most cases it is not
  mentioned in the error message. When constructing an object with a
  list of field values it is hard to know which one caused the error.
  Adding the name of the field that generated the error would help.

  For example, this test:

  def test_my_dummy_test(self):
  i = instance.Instance(uuid='my id', system_metadata='metadata')

  This would this would result in a ValueError exception as follows:

  Traceback (most recent call last):
    File "/home/ptm/code/nova/nova/tests/unit/objects/test_instance.py", line 1514, in test_my_dummy_test
      i = instance.Instance(uuid='my id', system_metadata='metadata')
    File "/home/ptm/code/nova/nova/objects/instance.py", line 270, in __init__
      super(Instance, self).__init__(*args, **kwargs)
    File "/home/ptm/code/nova/nova/objects/base.py", line 282, in __init__
      setattr(self, key, kwargs[key])
    File "/home/ptm/code/nova/nova/objects/base.py", line 77, in setter
      field_value = field.coerce(self, name, value)
    File "/home/ptm/code/nova/nova/objects/fields.py", line 191, in coerce
      return self._type.coerce(obj, attr, value)
    File "/home/ptm/code/nova/nova/objects/fields.py", line 433, in coerce
      raise ValueError(_('A dict is required in field %s') % attr)
  ValueError: A dict is required here

  This does not give any clue which of the two values supplied is
  incorrect. Adding the field name to error message could give an error
  like this:

  ValueError: A dict is required in field system_metadata

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469817] Re: Glance doesn't handle exceptions from glance_store

2015-06-29 Thread Ian Cordasco
** Changed in: glance
   Importance: Undecided => High

** Changed in: glance
   Status: New => Confirmed

** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Also affects: glance/liberty
   Importance: High
   Status: Confirmed

** Changed in: glance/kilo
   Importance: Undecided => High

** Changed in: glance/juno
   Importance: Undecided => High

** Changed in: glance/kilo
   Status: New => Confirmed

** Changed in: glance/juno
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1469817

Title:
  Glance doesn't handle exceptions from glance_store

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in Glance juno series:
  Confirmed
Status in Glance kilo series:
  Confirmed
Status in Glance liberty series:
  Confirmed

Bug description:
  The server API expects to catch exceptions declared in
  glance/common/exception.py, but the exceptions actually raised have the
  same names while being declared in a different module,
  glance_store/exceptions.py, and thus are never caught.

  For example, if an exception is raised here:
  https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/_drivers/rbd.py#L316
  it will never be caught here:
  https://github.com/openstack/glance/blob/stable/kilo/glance/api/v1/images.py#L1107
  because the former is an instance of
  https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/exceptions.py#L198
  while Glance waits for
  https://github.com/openstack/glance/blob/stable/kilo/glance/common/exception.py#L293

  There are many cases of this issue. The investigation continues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1469817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466900] Re: get_base_properties in v2/images.py missing 'shared' image visibility

2015-06-29 Thread Fei Long Wang
'shared' is a valid value for the visibility *filter*, but it is not a
value that's used to populate the 'visibility' field of any image. It
will be fixed by this blueprint:
https://blueprints.launchpad.net/glance/+spec/community-level-v2-image-sharing

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1466900

Title:
  get_base_properties in v2/images.py missing 'shared' image visibility

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L830

  'visibility': {
      'type': 'string',
      'description': _('Scope of image accessibility'),
      'enum': ['public', 'private'],
  },

  Should include 'shared' in the list of visibility options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1466900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469871] [NEW] OVS Neutron Agent support for ovs+dpdk netdev datapath

2015-06-29 Thread sean mooney
Public bug reported:

The OVS Neutron Agent currently supports managing two datapaths: the
Linux kernel datapath and the newly added Open vSwitch Windows datapath.

Based on feedback from the summit, this wishlist bug has been created in
place of a blueprint to capture the changes required to enable the OVS
L2 agent to manage the userspace netdev datapath.

Two new config options should be added to allow configuration of OVS and
the OVS L2 agent.

cfg.StrOpt('ovs_datapath', default='system', choices=['system', 'netdev'],
           help=_("ovs datapath to use.")),

and

cfg.StrOpt('agent_type', default=q_const.AGENT_TYPE_OVS,
           choices=[q_const.AGENT_TYPE_OVS, q_const.AGENT_TYPE_OVS_DPDK],
           help=_("Selects the Agent Type reported"))

The ovs_datapath config option will provide a mechanism at deploy time to select which datapath to enable.
The 'system' (kernel) datapath will be enabled by default, as it is today. The netdev (userspace) datapath option will enable the OVS agent to configure and manage the netdev datapath. This config option will be added to the ovs section of ml2_conf.ini.

The agent_type config option will provide a mechanism to enable coexistence of DPDK-enabled OVS nodes and vanilla OVS nodes.
By allowing a configurable agent_type, both the standard openvswitch ML2 mechanism driver and the ovsdpdk mechanism driver can be used. By default the agent_type reported will be unchanged, 'Open vSwitch agent'. During deployment an operator can choose to specify an agent_type of 'DPDK OVS Agent' if they have deployed a DPDK-enabled OVS.

These are the only changes required to extend the OVS agent to support
the netdev datapath.

Documentation and unit tests will be provided to cover these changes.
A new job can be added to the intel-networking-ci to continue to validate this configuration if additional third-party testing is desired.
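
A hypothetical ml2_conf.ini excerpt showing the proposed options
(section placement for agent_type is an assumption; only ovs_datapath's
section is stated above):

    [ovs]
    ovs_datapath = netdev

    [agent]
    agent_type = DPDK OVS Agent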

** Affects: neutron
 Importance: Undecided
 Assignee: sean mooney (sean-k-mooney)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469871

Title:
  OVS Neutron Agent support for ovs+dpdk netdev datapath

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The OVS Neutron Agent currently supports managing two datapaths: the
  Linux kernel datapath and the newly added Open vSwitch Windows datapath.

  Based on feedback from the summit, this wishlist bug has been created
  in place of a blueprint to capture the changes required to enable
  the OVS L2 agent to manage the userspace netdev datapath.

  Two new config options should be added to allow configuration of OVS
  and the OVS L2 agent.

  cfg.StrOpt('ovs_datapath', default='system', choices=['system', 'netdev'],
             help=_("ovs datapath to use.")),

  and

  cfg.StrOpt('agent_type', default=q_const.AGENT_TYPE_OVS,
             choices=[q_const.AGENT_TYPE_OVS, q_const.AGENT_TYPE_OVS_DPDK],
             help=_("Selects the Agent Type reported"))

  The ovs_datapath config option will provide a mechanism at deploy time to select which datapath to enable.
  The 'system' (kernel) datapath will be enabled by default, as it is today. The netdev (userspace) datapath option will enable the OVS agent to configure and manage the netdev datapath. This config option will be added to the ovs section of ml2_conf.ini.

  The agent_type config option will provide a mechanism to enable coexistence of DPDK-enabled OVS nodes and vanilla OVS nodes.
  By allowing a configurable agent_type, both the standard openvswitch ML2 mechanism driver and the ovsdpdk mechanism driver can be used. By default the agent_type reported will be unchanged, 'Open vSwitch agent'. During deployment an operator can choose to specify an agent_type of 'DPDK OVS Agent' if they have deployed a DPDK-enabled OVS.

  These are the only changes required to extend the OVS agent to support
  the netdev datapath.

  Documentation and unit tests will be provided to cover these changes.
  A new job can be added to the intel-networking-ci to continue to validate this configuration if additional third-party testing is desired.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469865] [NEW] oslo.versionedobjects breaks nova/cinder tests

2015-06-29 Thread Davanum Srinivas (DIMS)
Public bug reported:

Version 0.5.0 cut today broke Nova:

http://logs.openstack.org/40/193240/3/check/gate-nova-python27/98212cb/testr_results.html.gz

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo.versionedobjects
 Importance: High
 Status: Confirmed

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Summary changed:

- oslo.versionedobjects breaks nova tests
+ oslo.versionedobjects breaks nova/cinder tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469865

Title:
  oslo.versionedobjects breaks nova/cinder tests

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo Versioned Objects:
  Confirmed

Bug description:
  Version 0.5.0 cut today broke Nova:

  http://logs.openstack.org/40/193240/3/check/gate-nova-python27/98212cb/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1469865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469867] [NEW] Stop using deprecated oslo_utils.timeutils.strtime

2015-06-29 Thread Brant Knudson
Public bug reported:


Keystone unit tests are failing because they're still using the deprecated 
oslo_utils.timeutils.strtime function. We need to stop using the function.

DeprecationWarning: Using function/method
'oslo_utils.timeutils.strtime()' is deprecated in version '1.6' and will
be removed in a future version: use either datetime.datetime.isoformat()
or datetime.datetime.strftime() instead
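
A minimal sketch of the suggested replacement (variable names are
illustrative):

import datetime

now = datetime.datetime.utcnow()

# instead of the deprecated oslo_utils.timeutils.strtime(now), use either:
iso = now.isoformat()                       # e.g. '2015-06-29T14:24:49.211000'
fmt = now.strftime('%Y-%m-%dT%H:%M:%S.%f')  # the default format strtime produced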

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469867

Title:
  Stop using deprecated oslo_utils.timeutils.strtime

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  Keystone unit tests are failing because they're still using the deprecated 
oslo_utils.timeutils.strtime function. We need to stop using the function.

  DeprecationWarning: Using function/method
  'oslo_utils.timeutils.strtime()' is deprecated in version '1.6' and
  will be removed in a future version: use either
  datetime.datetime.isoformat() or datetime.datetime.strftime() instead

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1469867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469867] Re: Stop using deprecated oslo_utils.timeutils.strtime

2015-06-29 Thread Brant Knudson
** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469867

Title:
  Stop using deprecated oslo_utils.timeutils.strtime

Status in OpenStack Identity (Keystone):
  In Progress
Status in Python client library for Keystone:
  In Progress

Bug description:
  
  Keystone unit tests are failing because they're still using the deprecated 
oslo_utils.timeutils.strtime function. We need to stop using the function.

  DeprecationWarning: Using function/method
  'oslo_utils.timeutils.strtime()' is deprecated in version '1.6' and
  will be removed in a future version: use either
  datetime.datetime.isoformat() or datetime.datetime.strftime() instead

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1469867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469869] [NEW] Metadata definitions in etc/metadefs are not included in Python packages

2015-06-29 Thread Ian Cordasco
Public bug reported:

The files in etc/metadefs in the Glance repository are not included in
either the tarball or wheel when one runs

python setup.py sdist bdist_wheel

These should be included by default and installed so they can be used.
Since wheels should not be allowed to write to paths outside of the
directory the package is installed in,

glance-manage db_load_metadefs

Should also look in the installed directory path for etc/metadefs when
loading them.

This is a problem in every version of Glance that was meant to include
those metadata definitions. Since this does not prevent functionality
from working (since a user could download the files to /etc/metadefs and
run the command), I do not think this has backport potential.
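
A hedged sketch of what shipping the definitions could look like with
pbr-style packaging (the file paths and install location below are
assumptions for illustration, not the agreed fix):

# MANIFEST.in
include etc/metadefs/*.json

# setup.cfg
[files]
data_files =
    etc/glance/metadefs = etc/metadefs/*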

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: metadef

** Description changed:

  The files in etc/metadefs in the Glance repository are not included in
  either the tarball or wheel when one runs
  
- python setup.py sdist bdist_wheel
+ python setup.py sdist bdist_wheel
  
  These should be included by default and installed so they can be used.
  Since wheels should not be allowed to write to paths outside of the
  directory the package is installed in,
  
- glance-manage db_load_metadefs
+ glance-manage db_load_metadefs
  
  Should also look in the installed directory path for etc/metadefs when
  loading them.
+ 
+ This is a problem in every version of Glance that was meant to include
+ those metadata definitions. Since this does not prevent functionality
+ from working (since a user could download the files to /etc/metadefs and
+ run the command), I do not think this has backport potential.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1469869

Title:
  Metadata definitions in etc/metadefs are not included in Python
  packages

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The files in etc/metadefs in the Glance repository are not included in
  either the tarball or wheel when one runs

  python setup.py sdist bdist_wheel

  These should be included by default and installed so they can be used.
  Since wheels should not be allowed to write to paths outside of the
  directory the package is installed in,

  glance-manage db_load_metadefs

  Should also look in the installed directory path for etc/metadefs when
  loading them.

  This is a problem in every version of Glance that was meant to include
  those metadata definitions. Since this does not prevent functionality
  from working (since a user could download the files to /etc/metadefs
  and run the command), I do not think this has backport potential.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1469869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469604] [NEW] LbaasV2 - Session persistence HTTP_COOKIE - no cookie sent to client

2015-06-29 Thread Alex Syafeyev
Public bug reported:

We configured LBaaS v2. The LB and listener are created. The next step is
the pool and members:

neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener 5dd91024
-148e-4d80-842f-3122725d0164 --protocol HTTP

neutron lbaas-member-create 6b7b5daa-f773-46ca-8045-bb1f8ae8fcec
--protocol-port 80 --subnet 67897b9a-e5dd-405a-80db-7e36ead62c27
--address 192.168.1.3

neutron lbaas-member-create 6b7b5daa-f773-46ca-8045-bb1f8ae8fcec
--protocol-port 80 --subnet 67897b9a-e5dd-405a-80db-7e36ead62c27
--address 192.168.1.4

LB works properly.
After updating the pool to use session persistence with HTTP_COOKIE, we captured
traffic on the LB interface and saw that no cookie is sent towards the clients.

neutron lbaas-pool-update 6b7b5daa-f773-46ca-8045-bb1f8ae8fcec
--session-persistence type=dict type=HTTP_COOKIE

___no session persistence___
.}._.Z~(HTTP/1.1 200 OK
Date: Mon, 29 Jun 2015 09:47:09 GMT
Server: Apache/2.2.15 (Red Hat)
Last-Modified: Mon, 29 Jun 2015 09:45:01 GMT
ETag: 23196-b-519a4f18f88f8
Accept-Ranges: bytes
Content-Length: 11
Content-Type: text/html; charset=UTF-8

___With session persistence configured___
~...\.cHTTP/1.1 200 OK
Date: Mon, 29 Jun 2015 09:48:48 GMT
Server: Apache/2.2.15 (Red Hat)
Last-Modified: Mon, 29 Jun 2015 09:45:01 GMT
ETag: 23196-b-519a4f18f88f8
Accept-Ranges: bytes
Content-Length: 11
Content-Type: text/html; charset=UTF-8

NO ADDITIONAL HEADER ADDED
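
For comparison, with HAProxy-based cookie persistence working, one would
typically expect an extra response header roughly like the following (the
cookie name and value here are illustrative, not captured output):

Set-Cookie: SRV=<member-cookie-value>; path=/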

LOGS:
2015-06-29 08:48:02.842 18502 ERROR neutron_lbaas.agent.agent_manager 
[req-1a780c1f-9226-4515-a50d-dc8c0d63cb80 ] Create pool 
6b7b5daa-f773-46ca-8045-bb1f8ae8fcec failed on device driver haproxy_ns
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager Traceback 
(most recent call last):
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py, line 
339, in update_pool
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
driver.pool.update(old_pool, pool)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py,
 line 416, in update
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(new_pool.listener.loadbalancer)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py,
 line 364, in refresh
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager if 
(not self.driver.deploy_instance(loadbalancer) and
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py, line 445, in 
inner
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py,
 line 172, in deploy_instance
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
self.update(loadbalancer)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py,
 line 181, in update
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer, extra_args)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py,
 line 347, in _spawn
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py,
 line 89, in save_config
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py,
 line 221, in render_loadbalancer_obj
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
loadbalancer = _transform_loadbalancer(loadbalancer, haproxy_base_dir)
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py,
 line 236, in _transform_loadbalancer
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager x, 
haproxy_base_dir) for x in loadbalancer.listeners]
2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py,
 line 261, in _transform_listener

[Yahoo-eng-team] [Bug 1469615] [NEW] dhcp service is unavailable if we delete dhcp port

2015-06-29 Thread shihanzhang
Public bug reported:

If we delete the DHCP port, the DHCP service for the corresponding network
is unavailable: the DHCP port is deleted from neutron-server, but the TAP
device on the network node is not deleted, its tag is set to the dead VLAN
4095, and the DHCP service can't be recovered.

reproduce steps:
1. create network, subnet
2. delete the dhcp port in this network

I found the TAP device on the network node was not deleted, but its tag is
4095.
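
A quick way to confirm the dead tag on the network node is to inspect the
port in OVS (the TAP name below is illustrative; it is derived from the
DHCP port's ID):

ovs-vsctl list Port tapXXXXXXXX-XX | grep tag
# expected output for a dead port:
# tag                 : 4095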

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469615

Title:
  dhcp service is unavailable if we delete dhcp port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If we delete the DHCP port, the DHCP service for the corresponding
  network is unavailable: the DHCP port is deleted from neutron-server,
  but the TAP device on the network node is not deleted, its tag is set
  to the dead VLAN 4095, and the DHCP service can't be recovered.

  reproduce steps:
  1. create network, subnet
  2. delete the dhcp port in this network

  I found the TAP device on the network node was not deleted, but its tag
  is 4095.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469634] [NEW] '<nosharedpages/>' in xml is not recognized by libvirt.

2015-06-29 Thread Rong Han ZTE
Public bug reported:

'<nosharedpages/>' in XML is not recognized by libvirt. We should modify
<nosharedpages/> to <nosharepages/>.

Because:
[root@nail-SBCJ-5-3-3 libvirt]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     instance-0002                  running
 7     instance-0007                  running

[root@nail-SBCJ-5-3-3 libvirt]# 
[root@nail-SBCJ-5-3-3 libvirt]# 
[root@nail-SBCJ-5-3-3 libvirt]# virsh dumpxml 7
<domain type='kvm' id='7'>
  <name>instance-0007</name>
  <uuid>dda65962-5d55-403a-bdf3-462a17563a74</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-1.1.44"/>
      <nova:name>hanrong5</nova:name>
      <nova:creationTime>2015-06-29 07:30:51</nova:creationTime>
      <nova:flavor name="hanrong">
        <nova:memory>100</nova:memory>
        <nova:disk>10</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="84ee57c242ca41f09d8a833adc2e2583">admin</nova:user>
        <nova:project uuid="e35fbf4375b346519d86e2200047ad0e">admin</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="e164982b-e53a-4beb-bc69-f660643dce87"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>102400</memory>
  <currentMemory unit='KiB'>102400</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static' cpuset='0-3,8-11'>1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Fedora Project</entry>
      <entry name='product'>OpenStack Nova</entry>
      <entry name='version'>2015.1.0-1.1.44</entry>
      <entry name='serial'>52c08ef4-27eb-422d-a840-7de43ca69827</entry>
      <entry name='uuid'>dda65962-5d55-403a-bdf3-462a17563a74</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='i686' machine='pc-i440fx-2.2'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic eoi='on'/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/dda65962-5d55-403a-bdf3-462a17563a74/disk'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/var/lib/nova/instances/_base/bc3b6a5811a9095948b287f8e145dd833eb4779c'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:61:0b:a9'/>
      <source bridge='qbr2f305d6a-81'/>
      <dpdk use='false' port='0'/>
      <target dev='tap2f305d6a-81'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/var/lib/nova/instances/dda65962-5d55-403a-bdf3-462a17563a74/console.log'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <serial type='pty'>
      <source path='/dev/pts/6'/>
      <target port='1'/>
      <alias name='serial1'/>
    </serial>
    <console type='file'>
      <source path='/var/lib/nova/instances/dda65962-5d55-403a-bdf3-462a17563a74/console.log'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='10'/>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
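
As a rough illustration only (not the actual nova code; the function name is
made up for this sketch), the fix amounts to emitting the element name
libvirt understands when building the memoryBacking config, e.g. with lxml:

from lxml import etree

def format_memory_backing(share_pages=False):
    # build the <memoryBacking> element of the guest XML
    root = etree.Element("memoryBacking")
    if not share_pages:
        # libvirt expects <nosharepages/>; <nosharedpages/> is rejected
        root.append(etree.Element("nosharepages"))
    return etree.tostring(root, pretty_print=True).decode()

print(format_memory_backing())
# <memoryBacking>
#   <nosharepages/>
# </memoryBacking>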

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: in-stable-kilo

** Tags added: in-stable-kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is 

[Yahoo-eng-team] [Bug 1451429] Re: Kilo: I/O error uploading image

2015-06-29 Thread Matthias Runge
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451429

Title:
  Kilo: I/O error uploading image

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  Using a Ceph backend. Same configuration works just fine on Juno.

  Glance image creation from file using the API works. Horizon image
  creation from URL works too, but using file upload does not.

  Apache error.log:

  Unhandled exception in thread started by <function image_update at
0x7fc97aeed320>
  Traceback (most recent call last):
File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py,
 line 112, in image_update
  exceptions.handle(request, ignore=True)
File /usr/lib/python2.7/dist-packages/horizon/exceptions.py, line 364, in 
handle
  six.reraise(exc_type, exc_value, exc_traceback)
File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py,
 line 110, in image_update
  image = glanceclient(request).images.update(image_id, **kwargs)
File /usr/lib/python2.7/dist-packages/glanceclient/v1/images.py, line 
329, in update
  resp, body = self.client.put(url, headers=hdrs, data=image_data)
File /usr/lib/python2.7/dist-packages/glanceclient/common/http.py, line 
265, in put
  return self._request('PUT', url, **kwargs)
File /usr/lib/python2.7/dist-packages/glanceclient/common/http.py, line 
206, in _request
  **kwargs)
File /usr/lib/python2.7/dist-packages/requests/sessions.py, line 455, in 
request
  resp = self.send(prep, **send_kwargs)
File /usr/lib/python2.7/dist-packages/requests/sessions.py, line 558, in 
send
  r = adapter.send(request, **kwargs)
File /usr/lib/python2.7/dist-packages/requests/adapters.py, line 350, in 
send
  for i in request.body:
File /usr/lib/python2.7/dist-packages/glanceclient/common/http.py, line 
170, in chunk_body
  chunk = body.read(CHUNKSIZE)
  ValueError: I/O operation on closed file
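
  The ValueError itself is easy to reproduce in isolation; the chunked
  upload keeps reading from a file object that has already been closed:

  import tempfile

  f = tempfile.TemporaryFile()
  f.close()
  f.read(65536)  # raises ValueError: I/O operation on closed file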

  
  Horizon log:

  [req-0e69bfd9-c6ab-4131-b445-aa57c1a455f7 87d1da7fba6f4f5a9d4e7f78da344e91 
ba35660ba55b4a5283c691a4c6d99f23 - - -] Failed to upload image 
90a30bfb-946c-489d-9a04-5f601af0f821
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py, line 
113, in upload_data_to_store
  context=req.context)
File /usr/lib/python2.7/dist-packages/glance_store/backend.py, line 339, 
in store_add_to_backend
  context=context)
File /usr/lib/python2.7/dist-packages/glance_store/capabilities.py, line 
226, in op_checker
  return store_op_fun(store, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/glance_store/_drivers/rbd.py, line 
384, in add
  self._delete_image(loc.image, loc.snapshot)
File /usr/lib/python2.7/dist-packages/glance_store/_drivers/rbd.py, line 
290, in _delete_image
  with conn.open_ioctx(target_pool) as ioctx:
File /usr/lib/python2.7/dist-packages/rados.py, line 667, in open_ioctx
  raise make_ex(ret, "error opening pool '%s'" % ioctx_name)
  ObjectNotFound: error opening pool '90a30bfb-946c-489d-9a04-5f601af0f821'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444108] Re: Upgrade of services from Juno to Kilo fails

2015-06-29 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1444108

Title:
  Upgrade of services from Juno to Kilo fails

Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  Steps to reproduce:

  1. Install glance, cinder, nova and keystone services on Stable/Juno branch.
  2. Update the branch to master, do 'sudo python setup.py install' on all the 
services  and do a db sync on all the services.
  3. Restart all the services.
  4. The nova and cinder services restart properly. In fact, I am able to do 
all sanity testing operations. Keystone though intermittently stops working. 
And doing cinder-list or nova-list gives either 
http://paste.openstack.org/show/yR4bFyzry9Lrhfp6nPWg/ or gives the listing of 
the volumes and instances on the host.
  5. What I concluded was that Upgrading from Juno to Kilo has not been 
documented anywhere yet. I think I might be missing some secret sauce for 
keystone to start working correctly and stop giving intermittent errors. So, is 
there an official documentation like the one that exists for Icehouse to Kilo 
upgrade 
(http://docs.openstack.org/openstack-ops/content/upgrade-icehouse-juno.html) ? 
If not, then am I missing something in /etc/keystone/keystone.conf ?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1444108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469858] [NEW] Update contributing.rst for eslint

2015-06-29 Thread Aaron Sahlin
Public bug reported:

In our contributing documentation (doc/source/contributing.rst) we
discuss and give instructions for installing and setting up JSHint for
development.  This is not a bad thing, but with the switch in Horizon to
use eslint instead of jscs and jshint in our tool chain
(https://review.openstack.org/#/c/192327/)  we should give instructions
on how to setup and install eslint in your development environment
instead to match what the gate is checking for.
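
A minimal sketch of what the updated instructions could cover (the commands
and file path below are assumptions for illustration; the gate's exact
configuration may differ):

# install eslint into the working tree and run it on the JavaScript sources
npm install eslint
./node_modules/.bin/eslint horizon/static/path/to/file.js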

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  In our contributing documentation (doc/source/contributing.rst) we
  discuss and give instructions for installing and setting up JSHint for
  development.  This is not a bad thing, but with the switch in Horizon to
  use eslint instead of jscs and jshint in our tool chain
  (https://review.openstack.org/#/c/192327/)  we should give instructions
- on how to setup and install eslint in you development environment
+ on how to setup and install eslint in your development environment
  instead to match what the gate is checking for.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1469858

Title:
  Update contributing.rst for eslint

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In our contributing documentation (doc/source/contributing.rst) we
  discuss and give instructions for installing and setting up JSHint for
  development.  This is not a bad thing, but with the switch in Horizon
  to use eslint instead of jscs and jshint in our tool chain
  (https://review.openstack.org/#/c/192327/)  we should give
  instructions on how to setup and install eslint in your development
  environment instead to match what the gate is checking for.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1469858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467791] Re: Specific ipv4 subnet to create port and return port contains ipv4 and ipv6 address

2015-06-29 Thread Sean M. Collins
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467791

Title:
  Specific ipv4 subnet to create port and return port contains ipv4 and
  ipv6 address

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  neutron --version
  2.6.0

  When a network named 'A' contains an IPv4 subnet and an IPv6 subnet whose
  ipv6_address_mode and ipv6_ra_mode are 'slaac'/'dhcpv6-stateless', and I
  specify the IPv4 subnet without an IPv4 address to create a port based on
  this network 'A', the returned port contains two addresses (both IPv4
  and IPv6).

  It looks like:
   | fixed_ips | {"subnet_id": "990f3f53-ee5b-4a6f-b362-34fc339ab6e5", "ip_address": "10.0.0.5"}
   |           | {"subnet_id": "59464306-6781-4d33-b10c-214f4ba30e6c", "ip_address": "fd00:dd95:d03f:0:f81

  But the expected result is just:
   | fixed_ips | {"subnet_id": "990f3f53-ee5b-4a6f-b362-34fc339ab6e5", "ip_address": "10.0.0.5"}

  repro:
    1. create a network
    2. create an IPv4 subnet and an IPv6 subnet in this network, and specify
       ipv6_address_mode and ipv6_ra_mode as 'slaac'/'dhcpv6-stateless'.
    3. run the command
    neutron port-create --fixed-ip subnet_id=$[ipv4_subnet_id] 
$[network_id/name]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp