[Yahoo-eng-team] [Bug 1406854] [NEW] Loadbalancer page is not coming up in case subnet is deleted without deleting the pool

2014-12-31 Thread venkat
Public bug reported:

OpenStack: Icehouse

Description:

The UI page for Project -> Network -> Load Balancers is not displayed.


Steps:
1. Created a provider offering (type: LoadBalancer, driver: LB)

2. Created a network, and subnet.

3. Created a pool with subnet created above.

4. Deleted the subnet without deleting the associated pool.

5. Now the UI page for Project -> Network -> Load Balancers is not
displayed; the standard "Something went wrong" error page is thrown
instead.


Observed that the subnet ID is still attached to the pool:


root@os-158:~# neutron lb-pool-list
+--------------------------------------+--------+---------------+-------------+----------+----------------+--------+
| id                                   | name   | provider      | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+--------+---------------+-------------+----------+----------------+--------+
| e8ff074c-64ae-468c-a2c3-51b0c0306b71 | pool_1 | haproxy_on_vm | ROUND_ROBIN | HTTP     | True           | ACTIVE |
+--------------------------------------+--------+---------------+-------------+----------+----------------+--------+
root@os-158:~# 
root@os-158:~# neutron lb-pool-show e8ff074c-64ae-468c-a2c3-51b0c0306b71
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| admin_state_up         | True                                 |
| description            |                                      |
| health_monitors        |                                      |
| health_monitors_status |                                      |
| id                     | e8ff074c-64ae-468c-a2c3-51b0c0306b71 |
| lb_method              | ROUND_ROBIN                          |
| members                |                                      |
| name                   | pool_1                               |
| protocol               | HTTP                                 |
| provider               | haproxy_on_vm                        |
| status                 | ACTIVE                               |
| status_description     |                                      |
| subnet_id              | 274c1806-da49-4eef-b573-6f30783e490e |
| tenant_id              | 7799606d678e438f98f339fb0553fb10     |
| vip_id                 |                                      |
+------------------------+--------------------------------------+
root@os-158:~# 
root@os-158:~# 
root@os-158:~# neutron subnet-show 274c1806-da49-4eef-b573-6f30783e490e
Unable to find subnet with name '274c1806-da49-4eef-b573-6f30783e490e'
root@os-158:~#
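
Whatever builds the Load Balancers table has to tolerate a pool whose
subnet_id no longer resolves. A minimal sketch of such a tolerant lookup,
using the standard python-neutronclient calls (pools_with_subnet_cidrs is
an illustrative helper, not Horizon's actual code):

    from neutronclient.common import exceptions as neutron_exc

    def pools_with_subnet_cidrs(client):
        """client: a neutronclient.v2_0.client.Client instance."""
        rows = []
        for pool in client.list_pools()['pools']:
            try:
                subnet = client.show_subnet(pool['subnet_id'])['subnet']
                pool['subnet_cidr'] = subnet['cidr']
            except neutron_exc.NeutronClientException:
                # The subnet was deleted out from under the pool; show the
                # dangling ID instead of failing the whole page render.
                pool['subnet_cidr'] = '%s (deleted)' % pool['subnet_id']
            rows.append(pool)
        return rows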

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1406854

Title:
  Loadbalancer page is not coming up in case subnet is deleted without
  deleting the pool

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  OpenStack: Icehouse

  Description:

  The UI page for Project -> Network -> Load Balancers is not displayed.

  
  Steps:
  1. Created a provider offering (type: LoadBalancer, driver: LB)

  2. Created a network, and subnet.

  3. Created a pool with subnet created above.

  4. Deleted the subnet without deleting the associated pool.

  5. Now the UI page for Project -> Network -> Load Balancers is not
  displayed; the standard "Something went wrong" error page is thrown
  instead.

  
  Observed that the subnet ID is still attached to the pool:

  
  root@os-158:~# neutron lb-pool-list
  +--------------------------------------+--------+---------------+-------------+----------+----------------+--------+
  | id                                   | name   | provider      | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+--------+---------------+-------------+----------+----------------+--------+
  | e8ff074c-64ae-468c-a2c3-51b0c0306b71 | pool_1 | haproxy_on_vm | ROUND_ROBIN | HTTP     | True           | ACTIVE |
  +--------------------------------------+--------+---------------+-------------+----------+----------------+--------+
  root@os-158:~# 
  root@os-158:~# neutron lb-pool-show e8ff074c-64ae-468c-a2c3-51b0c0306b71
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | admin_state_up         | True                                 |
  | description            |                                      |
  | health_monitors        |                                      |
  | health_monitors_status |                                      |
  | id                     | e8ff074c-64ae-468c-a2c3-51b0c0306b71 |
  | lb_method              | ROUND_ROBIN                          |
  | members                |                                      |
  | name                   | pool_1                               |
  | protocol               | HTTP                                 |
  | provider               | haproxy_on_vm                        |
  | status                 | ACTIVE                               |
  | status_description     |                                      |
  | subnet_id              | 274c1806-da49-4eef-b573-6f30783e490e |
  | tenant_id              | 7799606d678e438f98f339fb0553fb10     |
  | vip_id                 |                                      |
  +------------------------+--------------------------------------+
  root@os-158:~# 
  root@os-158:~# neutron subnet-show 274c1806-da49-4eef-b573-6f30783e490e
  Unable to find subnet with name '274c1806-da49-4eef-b573-6f30783e490e'
  root@os-158:~#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1406854/+subscriptions

[Yahoo-eng-team] [Bug 1406826] [NEW] master keystone.conf sample is out of sync

2014-12-31 Thread Henry Nash
Public bug reported:

Looks like an update to keystone.common.policy has not been reflected in
our keystone.conf sample, leading to this change being included in other
commits.

** Affects: keystone
 Importance: High
 Assignee: Henry Nash (henry-nash)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1406826

Title:
  master keystone.conf sample is out of sync

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Looks like an update to keystone.common.policy has not been reflected
  in our keystone.conf sample, leading to this change being included in
  other commits.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1406826/+subscriptions



[Yahoo-eng-team] [Bug 1406784] Re: Can't create volume from non-raw image

2014-12-31 Thread John Griffith
I think this is up to the install or distribution that you're using.
In other words, Cinder does not install packages; that's deployment.
What you're reporting here is not a bug. If there's no info in the docs
about installing the qemu tools, that is possibly something we could add.

What OpenStack distribution are you using?

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406784

Title:
  Can't create volume from non-raw image

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Create an image using a non-raw image (qcow2 or vmdk is OK).
  2. Copy the image to a volume; the copy fails.

  Log:
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 363, in create_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     _run_flow()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 356, in _run_flow
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     flow_engine.run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/lock_utils.py", line 53, in wrapper
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 111, in run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     self._run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 121, in _run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     self._revert(misc.Failure())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 78, in _revert
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     misc.Failure.reraise_if_any(failures.values())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 558, in reraise_if_any
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     failures[0].reraise()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 565, in reraise
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     six.reraise(*self._exc_info)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 36, in _execute_task
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     result = task.execute(**arguments)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 594, in execute
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     **volume_spec)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 556, in _create_from_image
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     image_id, image_location, image_service)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 463, in _copy_image_to_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     raise exception.ImageUnacceptable(ex)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher ImageUnacceptable: Image 92fad7ae-6439-4c69-bdf4-4c6cc5759225 is unacceptable: qemu-img is not installed and image is of type vmdk.  Only RAW images can be used if qemu-img is not installed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1406784/+subscriptions

[Yahoo-eng-team] [Bug 1406784] [NEW] Can't create volume from non-raw image

2014-12-31 Thread Yang Luo
Public bug reported:

1. Create an image using a non-raw image (qcow2 or vmdk is OK).
2. Copy the image to a volume; the copy fails.

Log:
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 363, in create_volume
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     _run_flow()
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 356, in _run_flow
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     flow_engine.run()
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/lock_utils.py", line 53, in wrapper
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 111, in run
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     self._run()
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 121, in _run
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     self._revert(misc.Failure())
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 78, in _revert
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     misc.Failure.reraise_if_any(failures.values())
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 558, in reraise_if_any
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     failures[0].reraise()
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 565, in reraise
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     six.reraise(*self._exc_info)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 36, in _execute_task
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     result = task.execute(**arguments)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 594, in execute
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     **volume_spec)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 556, in _create_from_image
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     image_id, image_location, image_service)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 463, in _copy_image_to_volume
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     raise exception.ImageUnacceptable(ex)
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher ImageUnacceptable: Image 92fad7ae-6439-4c69-bdf4-4c6cc5759225 is unacceptable: qemu-img is not installed and image is of type vmdk.  Only RAW images can be used if qemu-img is not installed.
2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
2014-12-31 07:06:09.307 2159 ERROR oslo.messaging._drivers.common [req-2e2ded9a-e9ac-4996-b5e6-5c52ce41a05b 8481fe632326487db70e308ae070040f eaca1af2b7b74cdfaf1e61c081d6d255 - - -] Returning exception Image 92fad7ae-6439-4c69-bdf4-4c6cc5759225 is unacceptable: qemu-img is not installed and image is of type vmdk.  Only RAW images can be used if qemu-img is not installed. to caller


Cause
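
The last trace line shows the trigger: the volume host has no qemu-img,
and Cinder only accepts raw images without it. A simplified,
self-contained sketch of that kind of guard (check_convertible is an
illustrative stand-in for the logic in cinder.image.image_utils, which
raises ImageUnacceptable rather than ValueError):

    import shutil

    def check_convertible(image_id, disk_format):
        """Guard sketch: refuse non-raw images when qemu-img is missing."""
        if disk_format == 'raw':
            return  # raw images are copied bit-for-bit; no conversion needed
        if shutil.which('qemu-img') is None:
            raise ValueError(
                'Image %s is unacceptable: qemu-img is not installed and '
                'image is of type %s. Only RAW images can be used if '
                'qemu-img is not installed.' % (image_id, disk_format))

Installing the qemu-img package on the cinder-volume host (or uploading
raw images) avoids the error, which is why this was closed as a
deployment issue rather than a Cinder bug.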

[Yahoo-eng-team] [Bug 1406776] [NEW] Trying to delete a grant with an invalid role ID causes unnecessary processing

2014-12-31 Thread Henry Nash
Public bug reported:

Trying to delete a grant with an invalid role ID will throw a
RoleNotFound exception. However, the check for this is buried in the
driver, after the assignment manager has already carried out a bunch of
processing (e.g. sending out revokes). Is this by design (e.g. to let
people clear up tokens for a role ID that has somehow already been
deleted) or just an error? Given that some processing for revoking
tokens also happens AFTER the driver call to delete the grant (which
would abort on RoleNotFound), I'm guessing the latter. Views?
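
One way to avoid the wasted work, sketched under the assumption of
Keystone's manager/driver split (role_api and _emit_revoke_events are
illustrative names, not the actual Keystone code):

    class AssignmentManager(object):
        """Sketch: validate the role before any side effects."""

        def __init__(self, role_api, driver):
            self.role_api = role_api
            self.driver = driver

        def delete_grant(self, role_id, user_id=None, project_id=None):
            # Raises RoleNotFound up front, before any revoke events go
            # out, instead of relying on the driver to notice mid-flight.
            self.role_api.get_role(role_id)
            self.driver.delete_grant(role_id, user_id=user_id,
                                     project_id=project_id)
            self._emit_revoke_events(role_id, user_id, project_id)

        def _emit_revoke_events(self, role_id, user_id, project_id):
            pass  # token revocation callbacks would hang off here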

** Affects: keystone
 Importance: Undecided
 Assignee: Henry Nash (henry-nash)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Henry Nash (henry-nash)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1406776

Title:
  Trying to delete a grant with an invalid role ID causes unnecessary
  processing

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Trying to delete a grant with an invalid role ID will throw a
  RoleNotFound exception. However, the check for this is buried in the
  driver, after the assignment manager has already carried out a bunch
  of processing (e.g. sending out revokes). Is this by design (e.g. to
  let people clear up tokens for a role ID that has somehow already been
  deleted) or just an error? Given that some processing for revoking
  tokens also happens AFTER the driver call to delete the grant (which
  would abort on RoleNotFound), I'm guessing the latter. Views?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1406776/+subscriptions



[Yahoo-eng-team] [Bug 1398656] Re: ceilometer import oslo.concurrency failed issue

2014-12-31 Thread Zhi Yan Liu
** Also affects: glance-store
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398656

Title:
  ceilometer import oslo.concurrency failed issue

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Committed
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in OpenStack Glance backend store-drivers library (glance_store):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  During a ceilometer review, Jenkins failed with the following
  message:

  2014-12-03 01:28:04.969 | pep8 runtests: PYTHONHASHSEED='0'
  2014-12-03 01:28:04.969 | pep8 runtests: commands[0] | flake8
  2014-12-03 01:28:04.970 |   /home/jenkins/workspace/gate-ceilometer-pep8$ /home/jenkins/workspace/gate-ceilometer-pep8/.tox/pep8/bin/flake8 
  2014-12-03 01:28:21.508 | ./ceilometer/utils.py:30:1: H302  import only modules.'from oslo.concurrency import processutils' does not import a module
  2014-12-03 01:28:21.508 | from oslo.concurrency import processutils
  2014-12-03 01:28:21.508 | ^
  2014-12-03 01:28:21.508 | ./ceilometer/ipmi/platform/ipmitool.py:19:1: H302  import only modules.'from oslo.concurrency import processutils' does not import a module
  2014-12-03 01:28:21.508 | from oslo.concurrency import processutils
  2014-12-03 01:28:21.508 | ^
  2014-12-03 01:28:21.696 | ERROR: InvocationError: '/home/jenkins/workspace/gate-ceilometer-pep8/.tox/pep8/bin/flake8'
  2014-12-03 01:28:21.697 | pep8 runtests: commands[1] | flake8 --filename=ceilometer-* bin

  
  It seems that imports of
  https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/processutils.py
  should change to:

  from oslo_concurrency import processutils
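
  A minimal before/after (the processutils.execute call is only an
  example use; any attribute access works the same once the module is
  imported):

      # Flagged by H302: oslo.concurrency is a namespace package, so
      # this does not import a real module.
      # from oslo.concurrency import processutils

      # H302-compliant: import from the actual module path.
      from oslo_concurrency import processutils

      out, err = processutils.execute('echo', 'hello')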

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1398656/+subscriptions



[Yahoo-eng-team] [Bug 1406746] [NEW] Missing documentation for {get, set, delete}_image_location

2014-12-31 Thread Yanis Guenane
Public bug reported:

With the recent announcement of OSSA-2014-01 [1], as a user I wanted to
know which policy I could configure to limit attacks based on the OSSA,
but I couldn't find anything in the documentation.

[1] http://lists.openstack.org/pipermail/openstack-announce/2014-December/000317.html
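
For reference, the policies in question are the three named in the
title; entries along these lines in Glance's policy.json would restrict
the image location APIs (the role:admin values are only an example, not
a recommendation from the OSSA):

    {
        "get_image_location": "role:admin",
        "set_image_location": "role:admin",
        "delete_image_location": "role:admin"
    }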

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1406746

Title:
  Missing documentation for {get,set,delete}_image_location

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  With the recent announcement of OSSA-2014-01 [1], as a user I wanted
  to know which policy I could configure to limit attacks based on the
  OSSA, but I couldn't find anything in the documentation.

  [1] http://lists.openstack.org/pipermail/openstack-announce/2014-December/000317.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1406746/+subscriptions



[Yahoo-eng-team] [Bug 1406723] [NEW] alembic migration fails to drop a table at drop mlnx plugin

2014-12-31 Thread Amit Ugol
Public bug reported:

The migration fails for 28c0ffb8ebbd_remove_mlnx_plugin.py when trying
to drop the table port_profile (I'm guessing it has to do with the new
MariaDB on F21 - mariadb-server-10.0.14-8.fc21.x86_64).

The table itself is created like so:

MariaDB [neutron]> SHOW CREATE TABLE port_profile\G;
*************************** 1. row ***************************
   Table: port_profile
Create Table: CREATE TABLE `port_profile` (
  `port_id` varchar(36) NOT NULL,
  `vnic_type` varchar(32) NOT NULL,
  PRIMARY KEY (`port_id`),
  CONSTRAINT `port_profile_ibfk_1` FOREIGN KEY (`port_id`) REFERENCES `ports` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
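
Given that foreign key, a defensive variant of the drop would look
roughly like the following. This is a sketch, not the actual
28c0ffb8ebbd migration, and it assumes nothing beyond stock
Alembic/SQLAlchemy: skip the table if it is absent and shed the FK
first, which sidesteps engines that are strict about dropping FK'd
tables.

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        bind = op.get_bind()
        inspector = sa.inspect(bind)
        if 'port_profile' not in inspector.get_table_names():
            return  # nothing to do on installs that never had the table
        # Drop the FK to `ports` before dropping the table itself.
        for fk in inspector.get_foreign_keys('port_profile'):
            if fk.get('name'):
                op.drop_constraint(fk['name'], 'port_profile',
                                   type_='foreignkey')
        op.drop_table('port_profile')

The alembic upgrade output leading up to the failure: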

2014-12-31 07:24:55.497 | INFO  [alembic.migration] Context impl MySQLImpl.
2014-12-31 07:24:55.497 | INFO  [alembic.migration] Will assume non-transactional DDL.
2014-12-31 07:24:56.249 | INFO  [alembic.migration] Running upgrade  -> havana, havana_initial
2014-12-31 07:25:31.259 | INFO  [alembic.migration] Running upgrade havana -> e197124d4b9, add unique constraint to members
2014-12-31 07:25:31.620 | INFO  [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4, Add a unique constraint on (agent_type, host) columns to prevent a race
2014-12-31 07:25:31.620 | condition when an agent entry is 'upserted'.
2014-12-31 07:25:31.904 | INFO  [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a, nsx_mappings
2014-12-31 07:25:32.348 | INFO  [alembic.migration] Running upgrade 50e86cb2637a -> 1421183d533f, NSX DHCP/metadata support
2014-12-31 07:25:33.033 | INFO  [alembic.migration] Running upgrade 1421183d533f -> 3d3cb89d84ee, nsx_switch_mappings
2014-12-31 07:25:33.326 | INFO  [alembic.migration] Running upgrade 3d3cb89d84ee -> 4ca36cfc898c, nsx_router_mappings
2014-12-31 07:25:33.636 | INFO  [alembic.migration] Running upgrade 4ca36cfc898c -> 27cc183af192, ml2_vnic_type
2014-12-31 07:25:34.155 | INFO  [alembic.migration] Running upgrade 27cc183af192 -> 50d5ba354c23, ml2 binding:vif_details
2014-12-31 07:25:35.085 | INFO  [alembic.migration] Running upgrade 50d5ba354c23 -> 157a5d299379, ml2 binding:profile
2014-12-31 07:25:35.612 | INFO  [alembic.migration] Running upgrade 157a5d299379 -> 3d2585038b95, VMware NSX rebranding
2014-12-31 07:25:35.905 | INFO  [alembic.migration] Running upgrade 3d2585038b95 -> abc88c33f74f, lb stats
2014-12-31 07:25:38.415 | INFO  [alembic.migration] Running upgrade abc88c33f74f -> 1b2580001654, nsx_sec_group_mapping
2014-12-31 07:25:38.708 | INFO  [alembic.migration] Running upgrade 1b2580001654 -> e766b19a3bb, nuage_initial
2014-12-31 07:25:40.197 | INFO  [alembic.migration] Running upgrade e766b19a3bb -> 2eeaf963a447, floatingip_status
2014-12-31 07:25:41.360 | INFO  [alembic.migration] Running upgrade 2eeaf963a447 -> 492a106273f8, Brocade ML2 Mech. Driver
2014-12-31 07:25:41.987 | INFO  [alembic.migration] Running upgrade 492a106273f8 -> 24c7ea5160d7, Cisco CSR VPNaaS
2014-12-31 07:25:42.297 | INFO  [alembic.migration] Running upgrade 24c7ea5160d7 -> 81c553f3776c, bsn_consistencyhashes
2014-12-31 07:25:42.564 | INFO  [alembic.migration] Running upgrade 81c553f3776c -> 117643811bca, nec: delete old ofc mapping tables
2014-12-31 07:25:43.404 | INFO  [alembic.migration] Running upgrade 117643811bca -> 19180cf98af6, nsx_gw_devices
2014-12-31 07:25:44.099 | INFO  [alembic.migration] Running upgrade 19180cf98af6 -> 33dd0a9fa487, embrane_lbaas_driver
2014-12-31 07:25:44.433 | INFO  [alembic.migration] Running upgrade 33dd0a9fa487 -> 2447ad0e9585, Add IPv6 Subnet properties
2014-12-31 07:25:45.413 | INFO  [alembic.migration] Running upgrade 2447ad0e9585 -> 538732fa21e1, NEC Rename quantum_id to neutron_id
2014-12-31 07:25:45.706 | INFO  [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051, n1kv segment allocs for cisco n1kv plugin
2014-12-31 07:25:47.941 | INFO  [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse, icehouse
2014-12-31 07:25:47.983 | INFO  [alembic.migration] Running upgrade icehouse -> 54f7549a0e5f, set_not_null_peer_address
2014-12-31 07:25:48.058 | INFO  [alembic.migration] Running upgrade 54f7549a0e5f -> 1e5dd1d09b22, set_not_null_fields_lb_stats
2014-12-31 07:25:49.793 | INFO  [alembic.migration] Running upgrade 1e5dd1d09b22 -> b65aa907aec, set_length_of_protocol_field
2014-12-31 07:25:49.860 | INFO  [alembic.migration] Running upgrade b65aa907aec -> 33c3db036fe4, set_length_of_description_field_metering
2014-12-31 07:25:49.918 | INFO  [alembic.migration] Running upgrade 33c3db036fe4 -> 4eca4a84f08a, Remove ML2 Cisco Credentials DB
2014-12-31 07:25:50.094 | INFO  [alembic.migration] Running upgrade 4eca4a84f08a -> d06e871c0d5, set_admin_state_up_not_null_ml2
2014-12-31 07:25:50.563 | INFO  [alembic.migration] Running upgrade d06e871c0d5 -> 6be312499f9, set_not_null_vlan_id_cisco
2014-12-31 07:25:50.622 | INFO  [alembic.migration] Running upgrade 6be312499f9 -> 1b837a7125a9, Cisco APIC Mechanism Driver

[Yahoo-eng-team] [Bug 1406722] [NEW] Creating endpoint using service without name causes getting token fail

2014-12-31 Thread Zhiyuan Cai
Public bug reported:

Steps to reproduce:

1. Create a service without a name using the v2 API (the Keystone client doesn't allow creating a service without a name, but it's OK via the REST API):
curl -i -X POST http://localhost:35357/v2.0/OS-KSADM/services -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{ "OS-KSADM:service": { "type": "test" } }'

2. Create an endpoint with this service using the v2 API:
curl -i -X POST http://localhost:35357/v2.0/endpoints -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" -d '{ "endpoint": { "adminurl": null, "service_id": "6d86286bbc744a32b9295c542fefb4ed", "region": "regionOne", "internalurl": null, "publicurl": "http://localhost:" } }'

3. Try to get a new token using the v2 API; it fails:
curl -i -H "Content-Type: application/json" -d '{ "auth": { "tenantName": "demo", "passwordCredentials": { "username": "admin", "password": "$PASSWORD" } } }' http://localhost:5000/v2.0/tokens

HTTP/1.1 500 Internal Server Error
Date: Wed, 31 Dec 2014 07:32:36 GMT
Server: Apache/2.4.7 (Ubuntu)
Vary: X-Auth-Token
Content-Length: 232
Connection: close
Content-Type: application/json

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request: 'Service' object has no attribute 'name'
(Disable debug mode to suppress these details.)", "code": 500, "title":
"Internal Server Error"}}

The problem is that the v2 get-token API also returns catalog
information. The endpoints are traversed to form the catalog dict, and
when the name attribute of the unnamed service is accessed, the error
occurs. The v3 API doesn't have this problem, since the name attribute
is checked for None before it is accessed.
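
A minimal sketch of the missing None-check (catalog_entry is an
illustrative helper, not Keystone's actual v2 catalog formatter):

    def catalog_entry(service, endpoint):
        """Build one v2-style catalog entry, tolerating unnamed services."""
        return {
            'type': service['type'],
            # The v2 path crashed on service.name for unnamed services;
            # defaulting mirrors the check the v3 formatter already does.
            'name': service.get('name') or '',
            'endpoints': [endpoint],
        }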

** Affects: keystone
 Importance: Undecided
 Assignee: Zhiyuan Cai (luckyvega-g)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Zhiyuan Cai (luckyvega-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1406722

Title:
  Creating endpoint using service without name causes getting token fail

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Steps to reproduce:

  1. Create a service without a name using the v2 API (the Keystone client doesn't allow creating a service without a name, but it's OK via the REST API):
  curl -i -X POST http://localhost:35357/v2.0/OS-KSADM/services -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{ "OS-KSADM:service": { "type": "test" } }'

  2. Create an endpoint with this service using the v2 API:
  curl -i -X POST http://localhost:35357/v2.0/endpoints -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" -d '{ "endpoint": { "adminurl": null, "service_id": "6d86286bbc744a32b9295c542fefb4ed", "region": "regionOne", "internalurl": null, "publicurl": "http://localhost:" } }'

  3. Try to get a new token using the v2 API; it fails:
  curl -i -H "Content-Type: application/json" -d '{ "auth": { "tenantName": "demo", "passwordCredentials": { "username": "admin", "password": "$PASSWORD" } } }' http://localhost:5000/v2.0/tokens

  HTTP/1.1 500 Internal Server Error
  Date: Wed, 31 Dec 2014 07:32:36 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  Content-Length: 232
  Connection: close
  Content-Type: application/json

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: 'Service' object has no attribute 'name'
  (Disable debug mode to suppress these details.)", "code": 500,
  "title": "Internal Server Error"}}

  The problem is that the v2 get-token API also returns catalog
  information. The endpoints are traversed to form the catalog dict,
  and when the name attribute of the unnamed service is accessed, the
  error occurs. The v3 API doesn't have this problem, since the name
  attribute is checked for None before it is accessed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1406722/+subscriptions



[Yahoo-eng-team] [Bug 1406721] [NEW] RoleNotFound exception not tested for grant APIs

2014-12-31 Thread Henry Nash
Public bug reported:

In general, our unit testing of granting assignments does not test that
we throw a RoleNotFound exception if the role_id supplied does not
exist. While it might be argued that the assignment APIs shouldn't
validate this (like we don't validate user/group IDs), currently the
spec (and the code) says that we do, so we should test for this case.
This test will be important to ensure functionality doesn't change as
we split role management into its own backend (and hence the checking
of a valid role_id is lifted up into the manager).
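
A sketch of the kind of test that is missing; the base class and
fixtures (test_v3.RestfulTestCase, self.user, self.project) follow
Keystone's test conventions but are assumed rather than shown in full:

    import uuid

    from keystone import exception
    from keystone.tests import test_v3

    class AssignmentGrantTests(test_v3.RestfulTestCase):
        def test_delete_grant_invalid_role_raises_role_not_found(self):
            self.assertRaises(exception.RoleNotFound,
                              self.assignment_api.delete_grant,
                              uuid.uuid4().hex,  # role_id that does not exist
                              user_id=self.user['id'],
                              project_id=self.project['id'])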

** Affects: keystone
 Importance: Wishlist
 Assignee: Henry Nash (henry-nash)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1406721

Title:
  RoleNotFound exception not tested for grant APIs

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In general, our unit testing of granting assignments does not test
  that we throw a RoleNotFound exception if the role_id supplied does
  not exist. While it might be argued that the assignment APIs
  shouldn't validate this (like we don't validate user/group IDs),
  currently the spec (and the code) says that we do, so we should test
  for this case. This test will be important to ensure functionality
  doesn't change as we split role management into its own backend (and
  hence the checking of a valid role_id is lifted up into the manager).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1406721/+subscriptions
