[Yahoo-eng-team] [Bug 1348046] Re: glance image created successfully but it is not displaying through glance image-list

2014-07-24 Thread Ankur Gupta
My apologies for my lack of knowledge.  
I got my answers.

If I execute the command glance image-list, by default it is executed with
the --is-public=TRUE option, and if I want to check for non-public images,
I should execute the command: glance image-list --is-public=FALSE.

necadmin@controller:~$ glance image-list
+--------------------------------------+------------+-------------+------------------+----------+--------+
| ID                                   | Name       | Disk Format | Container Format | Size     | Status |
+--------------------------------------+------------+-------------+------------------+----------+--------+
| 8f70f9f8-1b4b-41c3-ac51-5ebe223c00f2 | myimage3   | raw         | bare             | 13167616 | active |
| ef884e27-54f7-4ec3-b355-35beef53fedb | myimage6   | raw         | bare             | 13167616 | active |
| 4c5d2462-ead3-48f5-9fda-012697b40f67 | ubuntu.iso | iso         | bare             | 38797312 | active |
+--------------------------------------+------------+-------------+------------------+----------+--------+

necadmin@controller:~$ glance image-list --is-public=FALSE
+--------------------------------------+----------+-------------+------------------+----------+--------+
| ID                                   | Name     | Disk Format | Container Format | Size     | Status |
+--------------------------------------+----------+-------------+------------------+----------+--------+
| 55771b7b-08af-448d-8a7f-204759d5768d | myimage  | qcow2       | bare             | 13167616 | active |
| e0d1f097-2d1b-4ab8-a1a9-96dc70426682 | myimage1 | qcow2       | bare             | 13167616 | active |
| e9cc2150-7e7a-4a8f-92cc-f12267e13ceb | myimage4 | raw         | bare             | 13167616 | active |
| 3af7b4bc-8240-4b63-9880-e8b1b2606426 | myimage5 | raw         | bare             | 13167616 | active |
+--------------------------------------+----------+-------------+------------------+----------+--------+

necadmin@controller:~$ glance image-list --is-public=TRUE
+--------------------------------------+------------+-------------+------------------+----------+--------+
| ID                                   | Name       | Disk Format | Container Format | Size     | Status |
+--------------------------------------+------------+-------------+------------------+----------+--------+
| 8f70f9f8-1b4b-41c3-ac51-5ebe223c00f2 | myimage3   | raw         | bare             | 13167616 | active |
| ef884e27-54f7-4ec3-b355-35beef53fedb | myimage6   | raw         | bare             | 13167616 | active |
| 4c5d2462-ead3-48f5-9fda-012697b40f67 | ubuntu.iso | iso         | bare             | 38797312 | active |
+--------------------------------------+------------+-------------+------------------+----------+--------+

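The same check can be scripted; the following is a minimal sketch assuming the python-glanceclient v1 API, with placeholder endpoint and token values that are not taken from this report:

    # Illustrative only: list public and non-public images via the v1 client API.
    from glanceclient import Client

    glance = Client('1', endpoint='http://controller:9292', token='ADMIN_TOKEN')

    public_images = glance.images.list(filters={'is_public': True})
    private_images = glance.images.list(filters={'is_public': False})

    for image in private_images:
        print(image.id, image.name, image.status)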

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1348046

Title:
  glance image created successfully but it is not displaying through
  glance image-list

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  I created glance image through below CLI-

  glance image-create --name myimage1 --disk-format=raw --container-format=bare --location=http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | bare |
  | created_at   | 2014-07-24T05:58:56  |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | raw  |
  | id   | e0d1f097-2d1b-4ab8-a1a9-96dc70426682 |
  | is_public| False|
  | min_disk | 0|
  | min_ram  | 0|
  | name | myimage1 |
  | owner| None |
  | protected| False|
  | size | 13167616 |
  | status   | active   |
  | updated_at   | 2014-07-24T05:58:58  |
  | virtual_size | None |
  +--+--+


  Check image List through glance image-list  --
  necadmin@controller:~$ glance image-list
  
  +--------------------------------------+------------+-------------+------------------+----------+
  | ID                                   | Name       | Disk Format | Container Format | Size     |

[Yahoo-eng-team] [Bug 1348056] [NEW] Neutron network API throws error code 500 when an Invalid VLAN is provided (should throw 400)

2014-07-24 Thread Sudipta Biswas
Public bug reported:

The neutron network API currently throws an error code 500 for an invalid
input against the VLAN field.

The error can be reproduced by having the following JSON request body:

{
    "network": {
        "admin_state_up": false,
        "provider:segmentation_id": "abc",
        "name": "Network1",
        "provider:physical_network": "XYZ",
        "provider:network_type": "vlan"
    }
}

An error code 400 should be thrown instead, much like it is for the
other fields when they contain incorrect values.

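For illustration only (this is not Neutron's actual validation code, and the exception class below is a stand-in), a check along these lines at the API layer would turn the bad segmentation_id into a 400 instead of an unhandled 500:

    class InvalidInput(Exception):
        """Stand-in for the API-layer exception that maps to HTTP 400."""

    def validate_vlan_segmentation_id(value):
        # A VLAN segmentation_id must be an integer in the 802.1Q range 1-4094;
        # anything else is a client error and should be reported as 400.
        try:
            vlan_id = int(value)
        except (TypeError, ValueError):
            raise InvalidInput("provider:segmentation_id must be an integer")
        if not 1 <= vlan_id <= 4094:
            raise InvalidInput("provider:segmentation_id must be in the range 1-4094")
        return vlan_id
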
** Affects: neutron
 Importance: Undecided
 Assignee: Sudipta Biswas (sbiswas7)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sudipta Biswas (sbiswas7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348056

Title:
  Neutron network API throws error code 500 when an Invalid VLAN is
  provided (should throw 400)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The neutron network API currently throws an error code 500 for an
  invalid input against the VLAN field.

  The error can be reproduced by having the following JSON request body:

  {
      "network": {
          "admin_state_up": false,
          "provider:segmentation_id": "abc",
          "name": "Network1",
          "provider:physical_network": "XYZ",
          "provider:network_type": "vlan"
      }
  }

  An error code 400 should be thrown instead, much like it is for the
  other fields when they contain incorrect values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348060] [NEW] stack resource status text is over the stack status text

2014-07-24 Thread Amit Ugol
Public bug reported:

When hovering with the mouse cursor over a stack resource, the resource status text is printed under the stack status text.
If the status text is too long and wraps onto several rows, the resource status text is printed on top of it.

Check the screenshot at http://imgur.com/pFvQNQ8

Tested with devstack, last git commit:
  commit 1f879455f82f0f095548906e4a87e0b580350f8d
  Merge: 2005b5b 2f2312d 
  Author: Jenkins jenk...@review.openstack.org
  Date:   Sat Jul 19 08:16:50 2014 +

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- stack resource status test is over the stack status text
+ stack resource status text is over the stack status text

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348060

Title:
  stack resource status text is over the stack status text

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When hovering with the mouse cursor over a stack resource, the resource status text is printed under the stack status text.
  If the status text is too long and wraps onto several rows, the resource status text is printed on top of it.

  Check the screenshot at http://imgur.com/pFvQNQ8

  Tested with devstack, last git commit:
commit 1f879455f82f0f095548906e4a87e0b580350f8d
Merge: 2005b5b 2f2312d 
Author: Jenkins jenk...@review.openstack.org
Date:   Sat Jul 19 08:16:50 2014 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348063] [NEW] Testing of results of entity lists does not check that the command part of the url is in the 'self' link

2014-07-24 Thread Alexey Miroshkin
Public bug reported:

The methods assertValidListResponse and, consequently, assertValidListLinks of
the unit test class RestfulTestCase (test_v3.py) don't have any information
about the request, so the check of the 'self' URL in the links collection is
very general:

self.assertThat(links['self'], matchers.StartsWith('http://localhost'))

To implement a proper fix for bug 1195037 (Self link in v3
collections omits any url filters), we need to pass some request data, at
least the command part of the URL, for example:

def assertValidListLinks(self, links, command=None)
def assertValidListResponse(self, resp, key, entity_validator, ref=None,
expected_length=None, keys_to_check=None,
command=None)

As a result, a proper check would be possible:

if command:
self.assertThat(links['self'], matchers.EndsWith(command))
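
As a usage illustration (the test name and query string below are hypothetical), a list test could then pass the command part of the URL straight through to the validator:

    # Hypothetical test body showing how the proposed 'command' argument is used.
    def test_list_users_filtered(self):
        command = '/users?name=foo'
        resp = self.get(command)
        self.assertValidListResponse(resp, 'users', self.assertValidUser,
                                     command=command)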

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1348063

Title:
  Testing of results of entity lists does not check that the command
  part of the url is in the 'self' link

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The methods assertValidListResponse and, consequently, assertValidListLinks
  of the unit test class RestfulTestCase (test_v3.py) don't have any
  information about the request, so the check of the 'self' URL in the links
  collection is very general:

  self.assertThat(links['self'], matchers.StartsWith('http://localhost'))

  To implement a proper fix for bug 1195037 (Self link in v3
  collections omits any url filters), we need to pass some request data,
  at least the command part of the URL, for example:

  def assertValidListLinks(self, links, command=None)
  def assertValidListResponse(self, resp, key, entity_validator, ref=None,
  expected_length=None, keys_to_check=None,
  command=None)

  As a result, a proper check would be possible:

  if command:
  self.assertThat(links['self'], matchers.EndsWith(command))

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1348063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348075] [NEW] per-feature extension method in api/neutron.py should be removed

2014-07-24 Thread Akihiro Motoki
Public bug reported:

In api/neutron.py, is_x_extension_supported() is defined per feature, but
this style doesn't scale.
is_extension_supported() is memoized, and unless there is a special reason,
is_extension_supported() should be used directly.

is_quotas_extension_supported() is an exception because it has extra
logic.

We already have rough consensus on the direction.
http://eavesdrop.openstack.org/meetings/horizon/2014/horizon.2014-07-22-16.00.log.html#l-140
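
A minimal sketch of the preferred pattern (the view method and the 'security-group' alias here are just examples, not a specific patch):

    from openstack_dashboard.api import neutron

    def allowed(self, request):
        # Call the memoized generic helper with the extension alias directly,
        # instead of a per-feature wrapper such as is_security_group_extension_supported().
        try:
            return neutron.is_extension_supported(request, 'security-group')
        except Exception:
            return False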

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New

** Description changed:

  In api/neutron.py, is_x_exntension_supproted() is defined per feature, 
but this style doesn't scale.
  is_extension_supported() is memoized and if there is no special reason 
is_extension_supported() should be used directly.
  
  is_quotas_extension_supported() is an exception because it has extra
  logic.
+ 
+ We already have rough consensus on the direction.
+ 
http://eavesdrop.openstack.org/meetings/horizon/2014/horizon.2014-07-22-16.00.log.html#l-140

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348075

Title:
  per-feature extension method in api/neutron.py should be removed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In api/neutron.py, is_x_extension_supported() is defined per feature, but
  this style doesn't scale.
  is_extension_supported() is memoized, and unless there is a special reason,
  is_extension_supported() should be used directly.

  is_quotas_extension_supported() is an exception because it has extra
  logic.

  We already have rough consensus on the direction.
  
http://eavesdrop.openstack.org/meetings/horizon/2014/horizon.2014-07-22-16.00.log.html#l-140

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348097] [NEW] Metadata agent fails with RequestURITooLong

2014-07-24 Thread Ilya Shakhat
Public bug reported:

The issue is reproducible when a project (tenant) contains more than 170
networks and all of these networks are plugged into a single router.

Steps to reproduce on devstack:
 * disable quotas
 * create networks and subnets inside tenant, plug them into the router:
for i in $(seq 1 170);
do
  neutron net-create skynet_$i
  neutron subnet-create --name skysubnet_$i skynet_$i 13.0.$i.0/24
  neutron router-interface-add router1 skysubnet_$i
done
 * launch VM and plug into any network
 * from VM's VNC console do: curl http://169.254.169.254/latest/meta-data/instance-id

Observed behavior:
 * the request fails with HTTP 500 error
 * stacktrace in logs of neutron metadata agent (q-meta screen):
RequestURITooLong: An unknown exception occurred.

Stacktrace:
2014-07-24 08:54:58.465 ERROR neutron.agent.metadata.agent [-] Unexpected error.
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent Traceback (most recent call last):
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 128, in __call__
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     instance_id, tenant_id = self._get_instance_and_tenant_id(req)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 192, in _get_instance_and_tenant_id
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     ports = self._get_ports(remote_address, network_id, router_id)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 183, in _get_ports
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     return self._get_ports_for_remote_address(remote_address, networks)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/common/utils.py", line 99, in __call__
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     return self._get_from_cache(target_self, *args, **kwargs)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/common/utils.py", line 77, in _get_from_cache
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     item = self.func(target_self, *args, **kwargs)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 163, in _get_ports_for_remote_address
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     return qclient.list_ports(
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 101, in with_params
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     ret = self.function(instance, *args, **kwargs)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 308, in list_ports
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     **_params)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1329, in list
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     for r in self._pagination(collection, path, **params):
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1342, in _pagination
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     res = self.get(path, params=params)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1315, in get
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     headers=headers, params=params)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1300, in retry_request
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     headers=headers, params=params)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1228, in do_request
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     self._check_uri_length(action)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent   File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1217, in _check_uri_length
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent     excess=uri_len - self.MAX_URI_LEN)
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent RequestURITooLong: An unknown exception occurred.
2014-07-24 08:54:58.465 TRACE neutron.agent.metadata.agent

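A rough sketch of why the URI grows past the limit (the numbers and query layout are illustrative, not the agent's exact code): the agent filters ports by every network behind the router, so each network UUID becomes another repeated query parameter.

    import uuid

    # Each 'network_id=<uuid>&' pair is ~48 characters; with ~170 networks the
    # query string alone is ~8 KB, well past the client's MAX_URI_LEN check.
    networks = [str(uuid.uuid4()) for _ in range(170)]
    query = '&'.join('network_id=%s' % net for net in networks)
    uri = '/v2.0/ports.json?fixed_ips=ip_address%%3D10.0.0.5&%s' % query
    print(len(uri))
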
** Affects: neutron
 Importance: Undecided
 Assignee: Ilya Shakhat (shakhat)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ilya Shakhat (shakhat)

-- 
You received this bug notification because you are a member of Yahoo!

[Yahoo-eng-team] [Bug 1348103] [NEW] nova to neutron port notification fails in cells environment

2014-07-24 Thread Liam Young
Public bug reported:

When deploying OpenStack Icehouse on Ubuntu trusty  in a cells configuration 
the callback from neutron to nova that notifies nova
when a port for an instance is ready to be used seems to be lost. This causes 
the spawning instance to go into an ERROR state and 
the following in the nova-compute.log:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1714, in _spawn
    block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2266, in spawn
    block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3681, in _create_domain_and_network
    raise exception.VirtualInterfaceCreateException()
VirtualInterfaceCreateException: Virtual Interface creation failed


Adding vif_plugging_is_fatal = False and vif_plugging_timeout = 5 to the 
compute nodes stops the missing message from being fatal and guests can then be 
spawned normally and accessed over the network.

This issue doesn't present itself when deploying in a non-cell
configuration.

I'll attach logs from attempting to spawn a new guest (at about 07:52)
with:

nova boot --image precise --flavor m1.small --key_name test --nic net-id=b77ca278-6e00-4530-94fe-c946a6046acf server075238

where dc31c58f-e455-4a1a-b825-6777ccb8d3c1 is the resulting guest id

nova-cells 1:2014.1.1-0ubuntu1
nova-api-ec21:2014.1.1-0ubuntu1
nova-api-os-compute  1:2014.1.1-0ubuntu1
nova-cert  1:2014.1.1-0ubuntu1
nova-common   1:2014.1.1-0ubuntu1
nova-conductor   1:2014.1.1-0ubuntu1
nova-objectstore 1:2014.1.1-0ubuntu1
nova-scheduler 1:2014.1.1-0ubuntu1
neutron-common 1:2014.1.1-0ubuntu2
neutron-plugin-ml2 1:2014.1.1-0ubuntu2
neutron-server 1:2014.1.1-0ubuntu2
neutron-plugin-openvswitch-agent 1:2014.1.1-0ubuntu2
openvswitch-common  2.0.1+git20140120-0ubuntu2
openvswitch-switch  2.0.1+git20140120-0ubuntu2
neutron-plugin-ml2  1:2014.1.1-0ubuntu2

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348103

Title:
  nova to neutron port notification fails in cells environment

Status in OpenStack Compute (Nova):
  New

Bug description:
  When deploying OpenStack Icehouse on Ubuntu trusty  in a cells configuration 
the callback from neutron to nova that notifies nova
  when a port for an instance is ready to be used seems to be lost. This causes 
the spawning instance to go into an ERROR state and 
  the following in the nova-compute.log:

  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1714, in _spawn
      block_device_info)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2266, in spawn
      block_device_info)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3681, in _create_domain_and_network
      raise exception.VirtualInterfaceCreateException()
  VirtualInterfaceCreateException: Virtual Interface creation failed

  
  Adding vif_plugging_is_fatal = False and vif_plugging_timeout = 5 to the 
compute nodes stops the missing message from being fatal and guests can then be 
spawned normally and accessed over the network.

  This issue doesn't present itself when deploying in a non-cell
  configuration.

  I'll attach logs from attempting to spawn a new guest (at about
  07:52) with:

  nova boot --image precise --flavor m1.small --key_name test --nic net-id=b77ca278-6e00-4530-94fe-c946a6046acf server075238

  where dc31c58f-e455-4a1a-b825-6777ccb8d3c1 is the resulting guest id

  nova-cells 1:2014.1.1-0ubuntu1
  nova-api-ec21:2014.1.1-0ubuntu1
  nova-api-os-compute  1:2014.1.1-0ubuntu1
  nova-cert  1:2014.1.1-0ubuntu1
  nova-common   1:2014.1.1-0ubuntu1
  nova-conductor   1:2014.1.1-0ubuntu1
  nova-objectstore 1:2014.1.1-0ubuntu1
  nova-scheduler 1:2014.1.1-0ubuntu1
  neutron-common 1:2014.1.1-0ubuntu2
  neutron-plugin-ml2 1:2014.1.1-0ubuntu2
  neutron-server 1:2014.1.1-0ubuntu2
  

[Yahoo-eng-team] [Bug 1172691] Re: LXC termination fails using LVM for root

2014-07-24 Thread Ray Chen
*** This bug is a duplicate of bug 1333827 ***
https://bugs.launchpad.net/bugs/1333827

** This bug has been marked a duplicate of bug 1333827
   Libvirt-LXC can leave image mounted to host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1172691

Title:
  LXC termination fails using LVM for root

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Hello,

  Terminating an instance running on LXC with LVM as instance root
  storage fails with an exception.

  The LVM volume is wiped using dd, but it is not disconnected from
  qemu-nbd before the LVM volume gets removed.

  The instance itself on the compute node is stopped but is still marked as
  Active, Deleting in nova. qemu-nbd is still running.
  After the first failure, disconnecting the qemu-nbd process manually and
  relaunching the instance termination works.

  Log file in debug attached

  Environment:
  Ubuntu 12.04
  OpenStack grizzly

  Configuration for LVM:
  libvirt_images_type=lvm
  libvirt_images_volume_group=nova_local

  Error: (The full stack can be found in the attached log)
  2013-04-25 12:02:59.119 10796 TRACE nova.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.
  2013-04-25 12:02:59.119 10796 TRACE nova.openstack.common.rpc.amqp Command: sudo nova-rootwrap /etc/nova/rootwrap.conf lvremove -f /dev/nova_local/instance-0004_disk
  2013-04-25 12:02:59.119 10796 TRACE nova.openstack.common.rpc.amqp Exit code: 5
  2013-04-25 12:02:59.119 10796 TRACE nova.openstack.common.rpc.amqp Stdout: ''
  2013-04-25 12:02:59.119 10796 TRACE nova.openstack.common.rpc.amqp Stderr: '  Can\'t remove open logical volume instance-0004_disk\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1172691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348115] [NEW] Cannot publish errors to ceilometer

2014-07-24 Thread Li Ma
Public bug reported:

I tried to test sending 'notification-errors' to ceilometer to report
ERROR logs in neutron, but it failed with a RuntimeError:

RuntimeError: maximum recursion depth exceeded

It seems that the ERROR message causes an infinite loop when notifying
ceilometer.

The traceback is linked at: http://paste.openstack.org/show/87907/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348115

Title:
  Cannot publish errors to ceilometer

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I tried to test sending 'notification-errors' to ceilometer to report
  ERROR logs in neutron, but it failed with a RuntimeError:

  RuntimeError: maximum recursion depth exceeded

  It seems that the ERROR message causes an infinite loop when notifying
  ceilometer.

  The traceback is linked at: http://paste.openstack.org/show/87907/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2014-07-24 Thread Bogdan Dobrelya
** Summary changed:

- [mos] Openstack services should support SIGHUP signal
+ Openstack services should support SIGHUP signal

** No longer affects: fuel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in Orchestration API (Heat):
  Confirmed
Status in OpenStack Identity (Keystone):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  1) In order to more effectively manage the unlinked but still open (lsof +L1)
  log file descriptors w/o restarting the services, the SIGHUP signal
  should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files were rotated. The only option we have for now is to
  force a service restart, quite a poor option from the point of view of
  continuous service accessibility.

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
     ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.
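
  As a minimal illustration of the behaviour being requested (not any project's actual implementation), a service only needs to install a handler that re-reads its configuration and reopens its log files instead of exiting:

      import logging
      import signal

      LOG = logging.getLogger(__name__)

      def _handle_sighup(signum, frame):
          # Placeholder actions: a real service would re-read its config files
          # and reopen/rotate its log file descriptors here instead of exiting.
          LOG.info("Caught SIGHUP, reloading configuration and reopening logs")

      if hasattr(signal, 'SIGHUP'):
          signal.signal(signal.SIGHUP, _handle_sighup)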

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  The following issue also exists for some of the services of OpenStack
  projects with synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone:  looks like the synced 
code is never being executed, thus SIGHUP is not supported for them. Here is a 
simple test scenario:
  2.1) modify <python-path>/site-packages/<foo-service-name>/openstack/common/service.py:
  def _sighup_supported():
  +    LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
       return hasattr(signal, 'SIGHUP')
  2.2) restart the service <foo-service-name> and check the logs for "SIGHUP is supported"; if the service really supports it, the appropriate messages would be present in the logs.
  2.3) issue kill -HUP <foo-service-pid> and check the logs for "SIGHUP is supported" and "Caught SIGHUP"; if the service really supports it, the appropriate messages would be present in the logs. Besides that, the service should remain started and its main thread PID should not be changed.

  e.g.
  2.a) heat-engine supports HUPing:
  #service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  #service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single process server

  2.c) HUPing heat-engine is OK
  #pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo $pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: Connected to AMQP server on ...
  service openstack-heat-engine status
  openstack-heat-engine (pid  16512) is running...

  2.d) HUPed heat-api is dead now ;(
  #kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth,
  nova-scheduler - unlike case 2, after the kill -HUP <foo-service-pid> command
  was issued, there would be a "Caught SIGHUP" message in the logs, BUT the
  associated service would die anyway. Instead, the service should
  remain started and its main thread PID should not be changed (similar to the
  2.c case).

  So, it looks like there are still a lot of things that should be done to
  ensure POSIX standards compliance in OpenStack :-)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348128] [NEW] Zone Manager throws exception for undefined '_'

2014-07-24 Thread Jeegn Chen
Public bug reported:

Zone Manager throws an exception for the undefined '_' as follows
(zoning_mode=fabric is set in cinder.conf for FC).

2014-07-24 00:48:35.120 224449 ERROR oslo.messaging.rpc.dispatcher [req-d68e568f-d8af-466c-9754-a3217fbb912b 9892262ce133464f96a192b6c655bdfa 589ac6f777544115b7ede70619558b2e - - -] Exception during message handling: Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: global name '_' is not defined
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 815, in initialize_connection
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher     raise exception.VolumeBackendAPIException(data=err_msg)
2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: global name '_' is not defined


It seems that the change in I18N removed the implicit import of '_', and the
following statement needs to be added explicitly to cinder/zonemanager/utils.py:
from cinder.openstack.common.gettextutils import _

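For illustration, this is the kind of usage that breaks without the explicit import (the helper function below is made up):

    # Without the import, '_' is undefined at module level and the first message
    # that calls _() raises: NameError: global name '_' is not defined.
    from cinder.openstack.common.gettextutils import _

    def fetch_connection_info(initiator):
        # hypothetical helper, only to show where _() gets used
        raise ValueError(_("Unable to fetch connection information for %s") % initiator)
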
** Affects: nova
 Importance: Undecided
 Status: Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348128

Title:
  Zone Manager throws exception for undefined '_'

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Zone Manager throws an exception for the undefined '_' as follows
  (zoning_mode=fabric is set in cinder.conf for FC).

  2014-07-24 00:48:35.120 224449 ERROR oslo.messaging.rpc.dispatcher 
[req-d68e568f-d8af-466c-9754-a3217fbb912b 9892262ce133464f96a192b6c655bdfa 
589ac6f777544115b7ede70619558b2e - - -] Exception during message handling: Bad 
or unexpected response from the storage volume backend API: Unable to fetch 
connection information from backend: global name '_' is not defined
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/cinder/cinder/volume/manager.py, line 815, in initialize_connection
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher raise 
exception.VolumeBackendAPIException(data=err_msg)
  2014-07-24 00:48:35.120 224449 TRACE oslo.messaging.rpc.dispatcher 
VolumeBackendAPIException: Bad or unexpected response from the storage volume 
backend API: Unable to fetch connection information from backend: global name 
'_' is not defined

  
  It seems that the change in I18N removed the implicit import of '_', and the
  following statement needs to be added explicitly to cinder/zonemanager/utils.py:
  from cinder.openstack.common.gettextutils import _

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348128/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1348138] [NEW] Migration set_length_of_description_field_metering does not work for Postgres older than 9.1.13

2014-07-24 Thread Ann Kamyshnikova
Public bug reported:

Migration set_length_of_description_field_metering fails on Postgres if
its version is less than 9.1.13. Error log:
http://paste.openstack.org/show/87920/

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348138

Title:
  Migration set_length_of_description_field_metering does not work for
  Postgres older than 9.1.13

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Migration set_length_of_description_field_metering fails on Postgres
  if its version is less than 9.1.13. Error log:
  http://paste.openstack.org/show/87920/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348143] [NEW] error when create a new user with its role is _member_

2014-07-24 Thread guomin.lizte
Public bug reported:

In the dashboard panel, when I create a new user with its role assigned to
_member_, the dashboard displays "Error: Unable to add user to primary
project" at the top right corner. Actually, the user is created
successfully.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1348143

Title:
  error when create a new user with its role is _member_

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In the dashboard panel, when I create a new user with its role assigned to
  _member_, the dashboard displays "Error: Unable to add user to primary
  project" at the top right corner. Actually, the user is created
  successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1348143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340970] Re: Excessive logging due to defaults being unset in tests

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1340970

Title:
  Excessive logging due to defaults being unset in tests

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Keystone logs from tests tend to be excessively large due to the default log 
levels not being set.
  This can occasionally cause logs to exceed the 50MB limit (infra) on a 
gate/check job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1340970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335437] Re: LDAP attributes mapped to None can cause 500 errors

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1335437

Title:
  LDAP attributes mapped to None can cause 500 errors

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  When LDAP is being used as a backend, attributes that are mapped to
  'None' will trigger a 500 error if they are not also configured to be
  ignored.   This can be easily reproduced by modifying the default
  config as follows:

  -
  # List of attributes stripped off the user on update. (list
  # value)
  #user_attribute_ignore=default_project_id,tenants
  user_attribute_ignore=tenants

  # LDAP attribute mapped to default_project_id for users.
  # (string value)
  #user_default_project_id_attribute=None
  -

  If you then perform a 'keystone user-list', it will trigger a 500
  error:

  -
  [root@keystone ~(keystone_admin)]# keystone user-list
  Authorization Failed: An unexpected error prevented the server from 
fulfilling your request. (HTTP 500)
  -

  The end of the stacktrace in keystone.log clearly shows the problem:

  -
  2014-06-28 06:23:36.366 21931 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/ldap/core.py", line 502, in _ldap_res_to_model
  2014-06-28 06:23:36.366 21931 TRACE keystone.common.wsgi     v = lower_res[self.attribute_mapping.get(k, k).lower()]
  2014-06-28 06:23:36.366 21931 TRACE keystone.common.wsgi AttributeError: 'NoneType' object has no attribute 'lower'
  -
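
  A sketch of the kind of guard that avoids the 500 (not the exact fix that landed): skip attributes whose mapping is explicitly None instead of calling .lower() on them.

      def _map_ldap_attributes(attribute_mapping, lower_res, attrs):
          # Hypothetical rewrite of the failing loop: values mapped to None
          # (e.g. user_default_project_id_attribute=None) are skipped instead of
          # having .lower() called on them.
          result = {}
          for attr in attrs:
              mapped = attribute_mapping.get(attr, attr)
              if mapped is None:
                  continue
              if mapped.lower() in lower_res:
                  result[attr] = lower_res[mapped.lower()]
          return result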

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1335437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336910] Re: oauth1 response content type is incorrect

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1336910

Title:
  oauth1 response content type is incorrect

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Python client library for Keystone:
  New

Bug description:
  OAuth1 response type is incorrectly being labelled as json, when it
  should be urlencoded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1336910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334368] Re: HEAD and GET inconsistencies in Keystone

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334368

Title:
  HEAD and GET inconsistencies in Keystone

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Committed
Status in Tempest:
  Fix Released

Bug description:
  While trying to convert Keystone to gate/check under mod_wsgi, it was
  noticed that occasionally a few HEAD calls were returning HTTP 200
  where under eventlet they consistently return HTTP 204.

  This is an inconsistency within Keystone. Based upon the RFC, HEAD
  should be identical to GET except that there is no body returned.
  Apache + MOD_WSGI in some cases converts a HEAD request to a GET
  request to the back-end wsgi application to avoid issues where the
  headers cannot be built to be sent as part of the response (this can
  occur when no content is returned from the wsgi app).

  This situation shows that Keystone should likely never build specific
  HEAD request methods and should have HEAD simply call the controller GET
  handler; the wsgi layer should then simply remove the response body.

  This will help to simplify Keystone's code as well as make the API
  responses more consistent.

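  A minimal sketch of the suggested direction (not Keystone's actual patch; the handler names are assumptions): route HEAD through the GET logic and let the wsgi layer drop the body.

      def handle_request(self, request):
          # Respond to HEAD exactly like GET, then strip the body so only the
          # status and headers differ, per the RFC.
          if request.method == 'HEAD':
              response = self.get(request)   # reuse the GET controller logic
              response.body = b''            # HEAD must not return a body
              return response
          return self.dispatch(request)
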
  Example Error in Gate:

  2014-06-25 05:20:37.820 | 
tempest.api.identity.admin.v3.test_trusts.TrustsV3TestJSON.test_trust_expire[gate,smoke]
  2014-06-25 05:20:37.820 | 

  2014-06-25 05:20:37.820 | 
  2014-06-25 05:20:37.820 | Captured traceback:
  2014-06-25 05:20:37.820 | ~~~
  2014-06-25 05:20:37.820 | Traceback (most recent call last):
  2014-06-25 05:20:37.820 |   File 
tempest/api/identity/admin/v3/test_trusts.py, line 241, in test_trust_expire
  2014-06-25 05:20:37.820 | self.check_trust_roles()
  2014-06-25 05:20:37.820 |   File 
tempest/api/identity/admin/v3/test_trusts.py, line 173, in check_trust_roles
  2014-06-25 05:20:37.821 | self.assertEqual('204', resp['status'])
  2014-06-25 05:20:37.821 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 321, in 
assertEqual
  2014-06-25 05:20:37.821 | self.assertThat(observed, matcher, message)
  2014-06-25 05:20:37.821 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
  2014-06-25 05:20:37.821 | raise mismatch_error
  2014-06-25 05:20:37.821 | MismatchError: '204' != '200'

  
  This is likely going to require changes to Keystone, Keystoneclient, Tempest, 
and possibly services that consume data from keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335278] Re: compute_port in config options

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1335278

Title:
  compute_port in config options

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  In ancient times keystone replaced the port for the compute service
  based upon its local configuration file (templated catalog). This is
  silly and should not be done as it means you would need to configure
  the compute_port variable in keystone for it to reflect the catalog
  instead of updating the static data.

  The keystone config should have no bearing on the nova port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1335278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332831] Re: order of user list appears inconsistent

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1332831

Title:
  order of user list appears inconsistent

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  This appeared as a transient failure in a doc change. I suspect the
  test shouldn't bother asserting the order of the results, only that
  the expected values appear in the list.
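
  A sketch of the order-insensitive check being suggested (assuming the response dicts have the same {'users': [...]} shape shown in the failure below):

      # Compare membership rather than ordering, since the backend gives no
      # guarantee about the order of the returned users.
      self.assertEqual(sorted(orig_project_users['users'], key=lambda u: u['id']),
                       sorted(new_project_users['users'], key=lambda u: u['id']))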

  ==
  FAIL: 
keystone.tests.test_v2_controller.TenantTestCase.test_get_project_users_no_user
  tags: worker-1
  --
  Empty attachments:
    pythonlogging:''-1
    stderr
    stdout

  pythonlogging:'': {{{
  Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
  KVS region configuration for token-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
  Using default dogpile sha1_mangle_key as KVS region token-driver key_mangler
  It is recommended to only use the base key-value-store implementation for the 
token driver for testing purposes.  Please use 
keystone.token.backends.memcache.Token or keystone.token.backends.sql.Token 
instead.
  KVS region configuration for os-revoke-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
  Using default dogpile sha1_mangle_key as KVS region os-revoke-driver 
key_mangler
  Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed 
to event `identity.OS-TRUST:trust.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` 
subscribed to event `identity.OS-OAUTH1:consumer.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.disabled`.
  Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.disabled`.
  Callback: `keystone.contrib.revoke.core.Manager._domain_callback` subscribed 
to event `identity.domain.disabled`.
  found extension EntryPoint.parse('qpid = 
oslo.messaging._drivers.impl_qpid:QpidDriver')
  found extension EntryPoint.parse('zmq = 
oslo.messaging._drivers.impl_zmq:ZmqDriver')
  found extension EntryPoint.parse('kombu = 
oslo.messaging._drivers.impl_rabbit:RabbitDriver')
  found extension EntryPoint.parse('rabbit = 
oslo.messaging._drivers.impl_rabbit:RabbitDriver')
  found extension EntryPoint.parse('fake = 
oslo.messaging._drivers.impl_fake:FakeDriver')
  found extension EntryPoint.parse('log = 
oslo.messaging.notify._impl_log:LogDriver')
  found extension EntryPoint.parse('messagingv2 = 
oslo.messaging.notify._impl_messaging:MessagingV2Driver')
  found extension EntryPoint.parse('noop = 
oslo.messaging.notify._impl_noop:NoOpDriver')
  found extension EntryPoint.parse('routing = 
oslo.messaging.notify._impl_routing:RoutingDriver')
  found extension EntryPoint.parse('test = 
oslo.messaging.notify._impl_test:TestDriver')
  found extension EntryPoint.parse('messaging = 
oslo.messaging.notify._impl_messaging:MessagingDriver')
  User 70bb7abd662a42c4b906cfc16c907fcf in project bar doesn't exist.
  }}}

  Traceback (most recent call last):
    File keystone/tests/test_v2_controller.py, line 61, in 
test_get_project_users_no_user
  self.assertEqual(orig_project_users, new_project_users)
    File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 406, in assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'users': [{'email': 'f...@bar.com',
  'enabled': True,
  'id': 'd00764bbd27f478c8321af4fcd1428fb',
  'name': 'FOO',
  'username': 'FOO'},
     {'email': 's...@snl.coom',
  'enabled': True,
  'id': 'ee5f3d2c210e481198f68b0b53518838',
  'name': 'SNA',
  'username': 'SNA'}]}
  actual= {'users': [{'email': 

[Yahoo-eng-team] [Bug 1331912] Re: [OSSA 2014-022] V2 Trusts allow trustee to emulate trustor in other projects (CVE-2014-3520)

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1331912

Title:
  [OSSA 2014-022] V2 Trusts allow trustee to emulate trustor in other
  projects (CVE-2014-3520)

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When you consume a trust in a v2 token you must provide the project id
  as part of your auth. This is a bug and should be reported after this.

  If the trustee requests a trust scoped token to a project different to
  the one the trust is created for AND the trustor has the required
  roles in the other project then the token will be provided with those
  roles on the other project.

  Attaching a script to show the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1331912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334466] Re: Eventlet Log Spamming on Client Disconnect (Broken Pipe)

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334466

Title:
  Eventlet Log Spamming on Client Disconnect (Broken Pipe)

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  In Progress

Bug description:
  If a client makes a request to keystone, and then disconnects before
  keystone responds, it is possible to fill up the logs (INFO) with
  eventlet tracebacks due to broken pipe:

  2014-06-24 23:30:29.729 31440 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [24/Jun/2014 23:30:29] "POST /v3/auth/tokens HTTP/1.1" 201 0 100.313719
  2014-06-24 23:30:29.731 31440 INFO eventlet.wsgi.server [-] Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 399, in handle_one_response
      write(''.join(towrite))
    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 349, in write
      _writelines(towrite)
    File "/usr/lib/python2.7/socket.py", line 334, in writelines
      self.flush()
    File "/usr/lib/python2.7/socket.py", line 303, in flush
      self._sock.sendall(view[write_offset:write_offset+buffer_size])
    File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 307, in sendall
      tail = self.send(data, flags)
    File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 293, in send
      total_sent += fd.send(data[total_sent:], flags)
  error: [Errno 32] Broken pipe

  Example (900k line file) due to this [WARNING THIS LINK MIGHT KILL YOUR 
BROWSER]:
  
http://logs.openstack.org/66/99766/2/check/check-grenade-dsvm/9fd33e1/logs/old/screen-key.txt.gz?level=INFO

  We should override the required HTTPProtocol class and gracefully
  handle the traceback. If we would like to keep the information, a
  single log-line per incident would be sufficient instead of ~14.
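
  One possible shape for that override, purely as a sketch (the class and attribute names are assumptions about eventlet's wsgi module, not a tested patch):

      import errno
      import logging
      import socket

      import eventlet.wsgi

      LOG = logging.getLogger(__name__)

      class ClientDisconnectAwareProtocol(eventlet.wsgi.HttpProtocol):
          """Log client disconnects as a single line instead of a full traceback."""

          def finish(self):
              try:
                  eventlet.wsgi.HttpProtocol.finish(self)
              except socket.error as err:
                  if err.errno != errno.EPIPE:
                      raise
                  LOG.info("Client disconnected before the response was fully sent")

  If the installed eventlet version exposes the protocol argument on eventlet.wsgi.server(), a subclass like this could be passed in there.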

  This should be considered for a backport to Icehouse to help limit log
  spam there as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334779] Re: db_sync breaks in non-utf8 databases on region table

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334779

Title:
  db_sync breaks in non-utf8 databases on region table

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  In Progress

Bug description:
  The migration that creates the region table does not explicitly set
  utf8 so if the database default is not set, then db_sync fails with
  the following error:

  2014-06-26 17:00:48.231 965 CRITICAL keystone [-] ValueError: Tables "region" have non utf8 collation, please make sure all tables are CHARSET=utf8
  2014-06-26 17:00:48.231 965 TRACE keystone Traceback (most recent call last):
  2014-06-26 17:00:48.231 965 TRACE keystone File "/usr/bin/keystone-manage", line 51, in <module>
  2014-06-26 17:00:48.231 965 TRACE keystone cli.main(argv=sys.argv, config_files=config_files)
  2014-06-26 17:00:48.231 965 TRACE keystone File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 191, in main
  2014-06-26 17:00:48.231 965 TRACE keystone CONF.command.cmd_class.main()
  2014-06-26 17:00:48.231 965 TRACE keystone File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 67, in main
  2014-06-26 17:00:48.231 965 TRACE keystone migration_helpers.sync_database_to_version(extension, version)
  2014-06-26 17:00:48.231 965 TRACE keystone File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration_helpers.py", line 139, in sync_database_to_version
  2014-06-26 17:00:48.231 965 TRACE keystone migration.db_sync(sql.get_engine(), abs_path, version=version)
  2014-06-26 17:00:48.231 965 TRACE keystone File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 195, in db_sync
  2014-06-26 17:00:48.231 965 TRACE keystone _db_schema_sanity_check(engine)
  2014-06-26 17:00:48.231 965 TRACE keystone File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 228, in _db_schema_sanity_check
  2014-06-26 17:00:48.231 965 TRACE keystone ) % ','.join(table_names))
  2014-06-26 17:00:48.231 965 TRACE keystone ValueError: Tables "region" have non utf8 collation, please make sure all tables are CHARSET=utf8
  2014-06-26 17:00:48.231 965 TRACE keystone

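  A sketch of the kind of fix implied here (column list trimmed down; this is not the actual keystone migration): declare the charset explicitly when creating the table so the database default no longer matters.

      import sqlalchemy as sql

      def upgrade(migrate_engine):
          meta = sql.MetaData()
          meta.bind = migrate_engine
          region = sql.Table(
              'region', meta,
              sql.Column('id', sql.String(64), primary_key=True),
              sql.Column('description', sql.String(255), nullable=False),
              mysql_engine='InnoDB',
              mysql_charset='utf8')  # explicit, instead of relying on the DB default
          region.create(migrate_engine, checkfirst=True)
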
To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324592] Re: [OSSA 2014-018] Trust scope can be circumvented by chaining trusts (CVE-2014-3476)

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1324592

Title:
  [OSSA 2014-018] Trust scope can be circumvented by chaining trusts
  (CVE-2014-3476)

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  I've been experimenting with chaining keystone trusts, and I've
  encountered what I think is a privilege escalation flaw, where the
  scope enforced by the trust when initially delegating can be
  circumvented by creating another trust.

  I spoke about this briefly with ayoung on IRC and he seems to be in
  agreement that this is a bug.

  Details:

  1. User1 has roles admin and heat_stack_owner
  2. User1 delegates to User2 via a trust, delegating only 
heat_stack_owner, and enabling impersonation
  3. User2 gets a trust-scoped token, impersonating User1
  4. User2 creates a new trust, delegating both admin and heat_stack_owner to 
User3
  5. This works, and so when User3 gets a trust-scoped token, they can get 
elevated privileges, effectively defeating the point of role-limited 
delegation via the trust.

  I've attached a reproducer which demonstrates the problem.
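
  A minimal sketch of the same scenario (not the attached reproducer) using
  python-keystoneclient; the ids are placeholders and the exact keyword
  arguments should be treated as assumptions:

      from keystoneclient.v3 import client

      USER1_ID, USER2_ID, USER3_ID = 'user1-id', 'user2-id', 'user3-id'  # placeholders
      PROJECT_ID = 'project-id'                                          # placeholder

      # Step 2: User1 delegates only heat_stack_owner to User2.
      ks_user1 = client.Client(username='user1', password='secret',
                               project_name='demo',
                               auth_url='http://localhost:5000/v3')
      trust1 = ks_user1.trusts.create(trustor_user=USER1_ID,
                                      trustee_user=USER2_ID,
                                      project=PROJECT_ID,
                                      role_names=['heat_stack_owner'],
                                      impersonation=True)

      # Steps 3-4: User2 authenticates with the trust (impersonating User1)
      # and re-delegates *both* roles to User3.
      ks_user2 = client.Client(username='user2', password='secret',
                               trust_id=trust1.id,
                               auth_url='http://localhost:5000/v3')
      trust2 = ks_user2.trusts.create(trustor_user=USER1_ID,
                                      trustee_user=USER3_ID,
                                      project=PROJECT_ID,
                                      role_names=['admin', 'heat_stack_owner'],
                                      impersonation=True)

      # Before the fix, the second create succeeds, so a token scoped to
      # trust2 carries admin even though admin was never delegated to User2.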

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1324592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328201] Re: Cannot fetch Certs with Compressed token provider

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328201

Title:
  Cannot fetch Certs with Compressed token provider

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  The simple_cert extension has an explicit  check that the Token
  provider is the PKIToken provider, and returns nothing otherwise. That
  check prevents fetching certificates if the Token provider is not the
  PKI provider.  PKIZ tokens also do signing, but also compression, and
  need the certificates available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328201/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311142] Re: Cache records for get_*_by_name are not invalidated on entity rename

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1311142

Title:
  Cache records for get_*_by_name are not invalidated on entity rename

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  In Progress
Status in Keystone icehouse series:
  Fix Committed

Bug description:
  I have noticed in the keystone code that the update_domain and
  update_project methods in the assignment_api Manager invalidate the cache
  for get_*_by_name() using the new name, not the old one.

  For example, in update_domain(), if you are changing the domain name from
  'OldName' to 'NewName', get_domain_by_name.invalidate() is called with
  'NewName' as the argument. See:

  
https://github.com/openstack/keystone/blob/1e948043fe2456bd91b398317c71c665d69e9935/keystone/assignment/core.py#L320

  As a result, the old name can be used in some requests until the cache
  record expires. For example, if you rename a domain, the old name can
  still be used for authentication (note: caching must be enabled in the
  keystone configuration):

  1. Define domain by its name during login:
  curl -X POST -H 'Content-type: application/json' -d 
'{auth:{identity:{methods:[password], 
password:{user:{name:Alice,domain:{name: OldName}, 
password:A12345678}' -v http://192.168.56.101:5000/v3/auth/tokens

  2. Change domain name:
  curl -X PATCH -H 'Content-type: application/json' -H 'X-Auth-Token: 
indigitus' -d '{domain:{name:NewName}}' 
http://192.168.56.101:5000/v3/domains/7e0629d4e31b4c5591a4a10d0b8931df

  3. Login using old domain name (copy command from step 1).

  As a result, Alice will be logged in, even though the domain name she
  specified is no longer valid.
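
  A sketch of the fix direction (the method shape follows the linked
  assignment/core.py code, but treat the exact names and signatures as
  assumptions): fetch the existing ref before updating, and invalidate the
  cache entry keyed by the old name.

      def update_domain(self, domain_id, domain):
          original = self.driver.get_domain(domain_id)
          ref = self.driver.update_domain(domain_id, domain)
          # Invalidate using the *old* name; invalidating 'NewName' leaves
          # the stale 'OldName' -> domain cache record in place until it
          # expires.
          self.get_domain_by_name.invalidate(self, original['name'])
          self.get_domain.invalidate(self, domain_id)
          return ref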

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1311142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324260] Re: Always migrate the db for extensions instead of conditionally

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1324260

Title:
  Always migrate the db for extensions instead of conditionally

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Following the discussion here 
https://review.openstack.org/#/c/95778/6/lib/keystone 
  (Adding an env_var to enable keystone extensions in devstack)

  Morgan Fainberg proposed that we should _always_ migrate the db for
  extensions instead of conditionally. There is no reason not to have
  the DB structure in place (notably to ensure consistent schemas and a
  better deployer experience when enabling an extension).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1324260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283943] Re: Update keystone docs

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1283943

Title:
  Update keystone docs

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  I noticed a few spots in the docs that are out of date. I'm going to
  list them all here; some might not be necessary.

  1) (FIXED Mar 9, 2014) Mention that ADMIN_TOKEN does not carry any
  authorization (we say this in sample.conf)
  http://docs.openstack.org/developer/keystone/configuringservices.html
  #admin-token

  2) (DUPLICATE) Tenant is used:
  http://docs.openstack.org/developer/keystone/configuringservices.html
  #setting-up-tenants-users-and-roles

  3) (DUPLICATE) Tenant again:
  http://docs.openstack.org/developer/keystone/configuringservices.html
  #creating-service-users

  4) (FIXED Jun 20, 2014) Should we mention #openstack-keystone: http
  ://docs-draft.openstack.org/00/73900/4/check/gate-keystone-
  docs/98c168c/doc/build/html/community.html#openstack-on-freenode-irc-
  network

  5) (FIXED Oct 21, 2013) This seems like a good spot to mention
  extensions, since they are not really mentioned anywhere (except
  developing extensions):
  http://docs.openstack.org/developer/keystone/architecture.html

  6) (FIXED May 30, 2014) Can we condense Service API Examples Using
  Curl and Admin API Examples Using Curl, as they go to the same url:
  http://docs.openstack.org/developer/keystone/#developers-documentation

  7) (FIXED May 30, 2014) This contains a lot of V2 specific content: 
http://docs.openstack.org/developer/keystone/api_curl_examples.html (tenant is 
used, /tokens instead of /auth/tokens, token response contains 'access'.
  Not sure if we should keep two copies, or update it all to v3.

  8) (INVALID) No mention of disable events in:
  http://docs.openstack.org/developer/keystone/event_notifications.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1283943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275693] Re: Wrong oauth1_extension used instead of oauth_extension in documentation

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1275693

Title:
  Wrong oauth1_extension used instead of oauth_extension in
  documentation

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  The documentation in doc/source/extensions/oauth1-configuration.rst
  states that oauth1_extension should be used. However, in the paste
  configuration file and in the tests, oauth_extension is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1275693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313837] Re: unnecessary period in logs make searching/copy/paste annoying

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1313837

Title:
  unnecessary period in logs make searching/copy/paste annoying

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  I spent some time today trying to debug why neutron was seeing some
  token failures. The keystone logs unnecessarily add a period to the
  end of many of these messages, which makes copying and pasting the
  tokens to look them up more work than it should be.

  For example:

  2014-04-28 16:16:34.225 5037 WARNING keystone.common.wsgi [-] Could
  not find token, 377c4c9a571a4b5ca64d56fe0aaa29c3.

  When one double-clicks that token id and tries to paste it, the period
  is also picked up. When used in a script, some string manipulation is
  needed to remove it, unnecessary work for the debugger.

  On a grammatical note, as any English teacher would tell you, a
  sentence, which ends with a period, needs a subject and a verb. That
  statement above lacks a subject, although I suppose one is implied.
  Anyway, it's not a complete sentence and therefore the period is
  unnecessary and invalid.
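
  The fix is essentially one character per message: drop the trailing
  period so the id can be double-clicked and pasted cleanly, e.g. (sketch
  only):

      LOG.warning('Could not find token, %s', token_id)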

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1313837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291366] Re: documentation should advise against using pki_setup and ssl_setup

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291366

Title:
  documentation should advise against using pki_setup and ssl_setup

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Both of these tools generate self-signed CA certificates.  As such,
  they are only appropriate for development deployments, and should be
  treated as such.  While sites with mature PKI policies would recognize
  this, the majority of people new to OpenStack are not PKI experts,
  and are using the provided tools.  The
  http://docs.openstack.org/developer/keystone/configuration.html
  #certificates-for-pki page should state this clearly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1205506] Re: get_group_project_roles() asks same ldap query for all groups associated with user

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1205506

Title:
  get_group_project_roles() asks same ldap query for all groups
  associated with user

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  in assignment/core.py:_get_group_project_roles() iterates over all of my
  ldap user groups and calls self._get_metadata(group_id=x['id'],
  tenant_id=project_ref['id'])

  in assignment/backends/ldap.py:_get_metadata() has a parameter
  group_id but it is not used in the function.

  this effectively calls ldap for every group with the identical query: 
  2013-07-26 21:50:32,026 (keystone.common.ldap.core): DEBUG core search_s 
LDAP search: dn=cn=groups,dc=bogus,dc=com, scope=1, 
query=((cn=OS_TENANT_NAME)(objectClass=posixGroup)), attrs=['enabled', 'cn', 
'businessCategory', 'description']

  where OS_TENANT_NAME is a shell environment variable.
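
  In other words, the calling loop looks roughly like this (simplified;
  real signatures may differ), and every iteration repeats the identical
  search shown above:

      for group in user_group_refs:
          # group_id is accepted but never used by the LDAP backend, so
          # each call runs the same (cn=<tenant>)(objectClass=posixGroup)
          # search again.
          metadata_ref = self._get_metadata(group_id=group['id'],
                                            tenant_id=project_ref['id'])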

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1205506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226171] Re: When using per-domain-identity backend, user_ids could collide

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226171

Title:
  When using per-domain-identity backend, user_ids could collide

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When using the per-domain-identity backend, usernames could end up
  colliding when multiple LDAP backends are used, since we extract very
  limited information from the DN.

  Example

  cn=example user, dc=example1,dc=com
  cn=example user, dc=example2,dc=com

  Would net the same user_id of example user

  This can also affect groups in the same manner.
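
  One possible direction (a sketch, not necessarily keystone's eventual
  scheme): derive the public id from both the owning domain and the
  backend-local name, so identical CNs in different trees map to
  different ids.

      import hashlib

      def public_id(domain_id, local_id):
          return hashlib.sha256(
              ('%s:%s' % (domain_id, local_id)).encode('utf-8')).hexdigest()

      public_id('domain-for-example1', 'example user')  # differs from ...
      public_id('domain-for-example2', 'example user')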

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1226171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316657] Re: 500 error in case request body is a valid json object, but not a dict

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1316657

Title:
  500 error in case request body is a valid json object, but not a dict

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Any request to the Keystone API that contains a body causes a 500 error
  if the body is a valid JSON object but not a dictionary.

  For example, the next request:

  curl  -HX-Auth-Token:ADMIN -H Content-type: application/json
  http://localhost:5000/v3/users -X GET -d 42

  produces the next response:

  HTTP/1.1 500 Internal Server Error
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 185
  Date: Tue, 06 May 2014 15:16:29 GMT

  {error: {message: An unexpected error prevented the server
  from fulfilling your request. 'int' object has no attribute
  'iteritems', code: 500, title: Internal Server Error}}

  and causes the next error message in the log:

  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi Traceback (most recent 
call last):
  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/common/wsgi.py, line 387, in __call__
  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi response = 
self.process_request(request)
  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/middleware/core.py, line 135, in process_request
  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi for k, v in 
six.iteritems(params_parsed):
  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi   File 
/usr/local/lib/python2.7/dist-packages/six.py, line 498, in iteritems
  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi return iter(getattr(d, 
_iteritems)(**kw))
  2014-05-06 11:16:29.388 TRACE keystone.common.wsgi AttributeError: 'int' 
object has no attribute 'iteritems'
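
  A sketch of the missing guard in the middleware (the exception class and
  module paths follow keystone's style but are assumptions here):

      import six

      from keystone import exception
      from keystone.openstack.common import jsonutils

      def parse_body(body):
          params = jsonutils.loads(body)
          if not isinstance(params, dict):
              # Reject with a 400 instead of letting six.iteritems() blow
              # up on an int, list, string, etc.
              raise exception.ValidationError(attribute='valid JSON object',
                                              target='request body')
          return dict(six.iteritems(params))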

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1316657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1175904] Re: passlib trunc_password MAX_PASSWORD_LENGTH password truncation

2014-07-24 Thread Russell Bryant
** Changed in: keystone
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1175904

Title:
  passlib trunc_password MAX_PASSWORD_LENGTH password truncation

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Grant Murphy originally reported:

  * Insecure / bad practice

 The trunc_password function attempts to correct and truncate passwords 
 that are over the MAX_PASSWORD_LENGTH value (default 4096). As the 
 MAX_PASSWORD_LENGTH field is globally mutable it could be modified 
 to restrict all passwords to length = 1. This scenario might be unlikely 
 but generally speaking we should not try to 'fix' invalid input and 
 continue on processing as if nothing happened. 

  If this is exploitable it will need a CVE, if not we should still
  harden it so it can't be monkeyed with in the future.
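
  One way to harden it (a generic sketch, not keystone's exact code): take
  the limit from configuration rather than a mutable module global, and
  reject over-length input instead of silently truncating it.

      def check_password_length(password, max_length=4096):
          if password is not None and len(password) > max_length:
              raise ValueError('password exceeds the maximum allowed '
                               'length (%d)' % max_length)
          return password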

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1175904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327473] Re: Don't use mutables as default args

2014-07-24 Thread Russell Bryant
** Changed in: cinder
   Status: Fix Committed = Fix Released

** Changed in: cinder
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327473

Title:
  Don't use mutables as default args

Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for heat:
  In Progress

Bug description:
  
  Passing mutable objects as default args is a known Python pitfall.
  We'd better avoid this.

  This is an example showing the pitfall:
  http://docs.python-guide.org/en/latest/writing/gotchas/
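
  The pitfall in miniature: the default list is created once, when the
  function is defined, and is then shared by every call.

      def add_port(port, ports=[]):          # bad: mutable default
          ports.append(port)
          return ports

      add_port(80)    # [80]
      add_port(443)   # [80, 443] -- state leaked from the previous call

      def add_port_fixed(port, ports=None):  # good: create the list per call
          if ports is None:
              ports = []
          ports.append(port)
          return ports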

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1327473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277104] Re: wrong order of assertEquals args

2014-07-24 Thread Russell Bryant
** Changed in: ceilometer
   Status: Fix Committed = Fix Released

** Changed in: ceilometer
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1277104

Title:
  wrong order of assertEquals args

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Triaged
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Glance:
  In Progress
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Nova:
  Triaged
Status in OpenStack Command Line Client:
  In Progress
Status in Python client library for Swift:
  In Progress

Bug description:
  The args of the assertEquals method in ceilometer.tests are arranged in the 
wrong order. As a result, when a test fails it reports the expected and 
observed data incorrectly. This occurs more than 2000 times.
  The right order of arguments is (expected, actual).
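
  For example:

      import testtools

      class SampleCountTest(testtools.TestCase):
          def test_sample_count(self):
              samples = ['cpu', 'memory', 'disk']
              self.assertEqual(3, len(samples))    # right: (expected, actual)
              # self.assertEqual(len(samples), 3)  # wrong order: a failure
              #                                    # would mislabel the values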

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348178] [NEW] test_list_security_groups_list_all_tenants_filter

2014-07-24 Thread Derek Higgins
Public bug reported:

Error during grenade in the gate

http://logs.openstack.org/33/109033/1/gate/gate-grenade-dsvm-partial-
ncpu/00379f7/logs/grenade.sh.txt.gz

2014-07-24 02:47:56.532 | 
tempest.api.compute.security_groups.test_security_groups.SecurityGroupsTestJSON.test_server_security_groups[gate,network,smoke]
18.994
2014-07-24 02:47:56.532 | 
2014-07-24 02:47:56.532 | ==
2014-07-24 02:47:56.532 | Failed 1 tests - output below:
2014-07-24 02:47:56.532 | ==
2014-07-24 02:47:56.533 | 
2014-07-24 02:47:56.533 | 
tempest.api.compute.admin.test_security_groups.SecurityGroupsTestAdminXML.test_list_security_groups_list_all_tenants_filter[gate,network,smoke]
2014-07-24 02:47:56.533 | 
---
2014-07-24 02:47:56.533 | 
2014-07-24 02:47:56.533 | Captured traceback:
2014-07-24 02:47:56.533 | ~~~
2014-07-24 02:47:56.533 | Traceback (most recent call last):
2014-07-24 02:47:56.533 |   File tempest/test.py, line 128, in wrapper
2014-07-24 02:47:56.533 | return f(self, *func_args, **func_kwargs)
2014-07-24 02:47:56.533 |   File 
tempest/api/compute/admin/test_security_groups.py, line 69, in 
test_list_security_groups_list_all_tenants_filter
2014-07-24 02:47:56.533 | description))
2014-07-24 02:47:56.533 |   File 
tempest/services/compute/xml/security_groups_client.py, line 73, in 
create_security_group
2014-07-24 02:47:56.533 | str(xml_utils.Document(security_group)))
2014-07-24 02:47:56.533 |   File tempest/common/rest_client.py, line 218, 
in post
2014-07-24 02:47:56.533 | return self.request('POST', url, 
extra_headers, headers, body)
2014-07-24 02:47:56.533 |   File tempest/common/rest_client.py, line 430, 
in request
2014-07-24 02:47:56.533 | resp, resp_body)
2014-07-24 02:47:56.533 |   File tempest/common/rest_client.py, line 526, 
in _error_checker
2014-07-24 02:47:56.533 | raise exceptions.ServerFault(message)
2014-07-24 02:47:56.533 | ServerFault: Got server fault
2014-07-24 02:47:56.533 | Details: The server has either erred or is 
incapable of performing the requested operation.
2014-07-24 02:47:56.533 | 
2014-07-24 02:47:56.534 | 
2014-07-24 02:47:56.534 | Captured pythonlogging:
2014-07-24 02:47:56.534 | ~~~
2014-07-24 02:47:56.534 | 2014-07-24 02:41:09,715 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST http://127.0.0.1:5000/v2.0/tokens
2014-07-24 02:47:56.534 | 2014-07-24 02:41:09,791 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups 
0.075s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:09,916 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups 
0.124s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,163 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST http://127.0.0.1:5000/v2.0/tokens
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,268 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST 
http://127.0.0.1:8774/v2/6ef984821ac44a64b42a9f8a30e6a08b/os-security-groups 
0.104s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,348 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
500 POST 
http://127.0.0.1:8774/v2/6ef984821ac44a64b42a9f8a30e6a08b/os-security-groups 
0.079s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,441 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2/6ef984821ac44a64b42a9f8a30e6a08b/os-security-groups/6 
0.089s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,568 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups/4 
0.126s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,847 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups/3 
0.277s
2014-07-24 02:47:56.534 | 
2014-07-24 02:47:56.534 | 
2014-07-24 02:47:56.534 |

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this 

[Yahoo-eng-team] [Bug 1348199] [NEW] resizing always involves migration

2014-07-24 Thread xhzhf
Public bug reported:

Currently, resizing a vm leads to a migration. In fact, some hypervisor
vendors support resizing the original vm in place, so we should support
resizing a vm directly.

** Affects: nova
 Importance: Undecided
 Assignee: xhzhf (guoyongxhzhf)
 Status: Confirmed


** Tags: compute nova resize

** Changed in: nova
 Assignee: (unassigned) = xhzhf (guoyongxhzhf)

** Changed in: nova
   Status: New = Confirmed

** Tags added: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348199

Title:
  resizing always involves migration

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Currently, resizing a vm leads to a migration. In fact, some hypervisor
  vendors support resizing the original vm in place, so we should support
  resizing a vm directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348206] [NEW] synchronization of power_state stops vm incorrectly

2014-07-24 Thread xhzhf
Public bug reported:

When the hypervisor a vm runs on goes down, nova-compute modifies the vm's
power_state and vm_state. The administrator may later fix the hypervisor
problem, and the vm starts correctly again. But the power_state
synchronization will then stop the vm, because it finds the vm running
while its vm_state is stopped.

In my opinion, the synchronization is useful to keep a user from enjoying
the service when the user has not paid, but here it causes the vm to be
stopped incorrectly.

** Affects: nova
 Importance: Undecided
 Assignee: xhzhf (guoyongxhzhf)
 Status: Confirmed


** Tags: compute nova

** Changed in: nova
 Assignee: (unassigned) = xhzhf (guoyongxhzhf)

** Changed in: nova
   Status: New = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348206

Title:
  synchronization of power_state stops vm incorrectly

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When the hypervisor a vm runs on goes down, nova-compute modifies the
  vm's power_state and vm_state. The administrator may later fix the
  hypervisor problem, and the vm starts correctly again. But the
  power_state synchronization will then stop the vm, because it finds the
  vm running while its vm_state is stopped.

  In my opinion, the synchronization is useful to keep a user from enjoying
  the service when the user has not paid, but here it causes the vm to be
  stopped incorrectly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348232] [NEW] Integration tests - Pageobjects location should match horizon's structure

2014-07-24 Thread Daniel Korn
Public bug reported:

The directories structure of the Pageobjects, used for Horizon
integration tests, should match the dashboard's organization.

Example

openstack_dashboard/test/integration_tests/pages/

openstack_dashboard/test/integration_tests/pages/admin/
openstack_dashboard/test/integration_tests/pages/admin/identity/
openstack_dashboard/test/integration_tests/pages/admin/system/

openstack_dashboard/test/integration_tests/pages/project/
openstack_dashboard/test/integration_tests/pages/project/compute/
openstack_dashboard/test/integration_tests/pages/project/orchestration/

* also, there should be directories for pages containing subpages (i.e.
/access_and_security/ directory for keypairpage, securitygroupspage
and floatingippage)

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: integration-tests pageobjects-directories-structure test

** Description changed:

  The directories structure of the Pageobjects, used for Horizon
  integration tests, should match the dashboard's organization.
  
  Example
  
  openstack_dashboard/test/integration_tests/pages/
  
  openstack_dashboard/test/integration_tests/pages/admin/
  openstack_dashboard/test/integration_tests/pages/admin/identity/
  openstack_dashboard/test/integration_tests/pages/admin/system/
  
  openstack_dashboard/test/integration_tests/pages/project/
  openstack_dashboard/test/integration_tests/pages/project/compute/
  openstack_dashboard/test/integration_tests/pages/project/orchestration/
  
- 
- * also, there should be directories for pages containing subpage (i.e. 
/access_and_security/ directory for keypairpage, securitygroupspage and 
floatingippage)
+ * also, there should be directories for pages containing subpages (i.e.
+ /access_and_security/ directory for keypairpage, securitygroupspage
+ and floatingippage)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348232

Title:
  Integration tests - Pageobjects location should match horizon's
  structure

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The directories structure of the Pageobjects, used for Horizon
  integration tests, should match the dashboard's organization.

  Example
  
  openstack_dashboard/test/integration_tests/pages/

  openstack_dashboard/test/integration_tests/pages/admin/
  openstack_dashboard/test/integration_tests/pages/admin/identity/
  openstack_dashboard/test/integration_tests/pages/admin/system/

  openstack_dashboard/test/integration_tests/pages/project/
  openstack_dashboard/test/integration_tests/pages/project/compute/
  openstack_dashboard/test/integration_tests/pages/project/orchestration/

  * also, there should be directories for pages containing subpages
  (i.e. /access_and_security/ directory for keypairpage,
  securitygroupspage and floatingippage)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348244] [NEW] debug log messages need to be unicode

2014-07-24 Thread James Carey
Public bug reported:

Debug logs should be:
  
LOG.debug("message")  should be LOG.debug(u"message")

Before the translation of debug log messages was removed, the
translation was returning unicode.   Now that they are no longer
translated they need to be explicitly marked as unicode.

This was confirmed by discussion with dhellman.   See
2014-07-23T13:48:23 in this log http://eavesdrop.openstack.org/irclogs
/%23openstack-oslo/%23openstack-oslo.2014-07-23.log

The problem was discovered when an exception was used as replacement
text in a debug log message:

   LOG.debug(Failed to mount image %(ex)s), {'ex': e})

In particular it was discovered as part of enabling lazy translation,
where the exception message is replaced with an object that does not
support str().   Note that this would also fail without lazy enabled, if
a translation for the exception message was provided that was unicode.
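
  Concretely (quotes restored; this is the pattern, not the exact nova
  line), making the format string unicode means the interpolation goes
  through Message's unicode path instead of str():

      LOG.debug("Failed to mount image %(ex)s", {'ex': e})   # str % Message: raises
      LOG.debug(u"Failed to mount image %(ex)s", {'ex': e})  # unicode % Message: fine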


Example trace: 

 Traceback (most recent call last):
  File nova/tests/virt/disk/test_api.py, line 78, in 
test_can_resize_need_fs_type_specified
self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True))
  File nova/virt/disk/api.py, line 208, in is_image_partitionless
fs.setup()
  File nova/virt/disk/vfs/localfs.py, line 80, in setup
LOG.debug(Failed to mount image %(ex)s), {'ex': e})
  File /usr/lib/python2.7/logging/__init__.py, line 1412, in debug
self.logger.debug(msg, *args, **kwargs)
  File /usr/lib/python2.7/logging/__init__.py, line 1128, in debug
self._log(DEBUG, msg, args, **kwargs)
  File /usr/lib/python2.7/logging/__init__.py, line 1258, in _log
self.handle(record)
  File /usr/lib/python2.7/logging/__init__.py, line 1268, in handle
self.callHandlers(record)
  File /usr/lib/python2.7/logging/__init__.py, line 1308, in callHandlers
hdlr.handle(record)
  File nova/test.py, line 212, in handle
self.format(record)
  File /usr/lib/python2.7/logging/__init__.py, line 723, in format
return fmt.format(record)
  File /usr/lib/python2.7/logging/__init__.py, line 464, in format
record.message = record.getMessage()
  File /usr/lib/python2.7/logging/__init__.py, line 328, in getMessage
msg = msg % self.args
  File 
/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo/i18n/_message.py,
 line 167, in __str__
raise UnicodeError(msg)
UnicodeError: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead.
==
FAIL: nova.tests.virt.disk.test_api.APITestCase.test_resize2fs_e2fsck_fails
tags: worker-3

** Affects: nova
 Importance: Undecided
 Assignee: James Carey (jecarey)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = James Carey (jecarey)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348244

Title:
  debug log messages need to be unicode

Status in OpenStack Compute (Nova):
  New

Bug description:
  Debug logs should be:

  LOG.debug("message")  should be LOG.debug(u"message")

  Before the translation of debug log messages was removed, the
  translation was returning unicode.   Now that they are no longer
  translated they need to be explicitly marked as unicode.

  This was confirmed by discussion with dhellman.   See
  2014-07-23T13:48:23 in this log http://eavesdrop.openstack.org/irclogs
  /%23openstack-oslo/%23openstack-oslo.2014-07-23.log

  The problem was discovered when an exception was used as replacement
  text in a debug log message:

 LOG.debug(Failed to mount image %(ex)s), {'ex': e})

  In particular it was discovered as part of enabling lazy translation,
  where the exception message is replaced with an object that does not
  support str().   Note that this would also fail without lazy enabled,
  if a translation for the exception message was provided that was
  unicode.

  
  Example trace: 

   Traceback (most recent call last):
File nova/tests/virt/disk/test_api.py, line 78, in 
test_can_resize_need_fs_type_specified
  self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True))
File nova/virt/disk/api.py, line 208, in is_image_partitionless
  fs.setup()
File nova/virt/disk/vfs/localfs.py, line 80, in setup
  LOG.debug(Failed to mount image %(ex)s), {'ex': e})
File /usr/lib/python2.7/logging/__init__.py, line 1412, in debug
  self.logger.debug(msg, *args, **kwargs)
File /usr/lib/python2.7/logging/__init__.py, line 1128, in debug
  self._log(DEBUG, msg, args, **kwargs)
File /usr/lib/python2.7/logging/__init__.py, line 1258, in _log
  self.handle(record)
File /usr/lib/python2.7/logging/__init__.py, line 1268, in handle
  self.callHandlers(record)
File /usr/lib/python2.7/logging/__init__.py, line 1308, in callHandlers
  hdlr.handle(record)
File 

[Yahoo-eng-team] [Bug 1348244] Re: debug log messages need to be unicode

2014-07-24 Thread Jay Bryant
A hacking check will also need to be created to go with this to make
sure this issue doesn't creep in with future commits.

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: New = Confirmed

** Changed in: cinder
   Importance: Undecided = High

** Changed in: cinder
 Assignee: (unassigned) = Jay Bryant (jsbryant)

** Changed in: cinder
Milestone: None = juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348244

Title:
  debug log messages need to be unicode

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  Debug logs should be:

  LOG.debug("message")  should be LOG.debug(u"message")

  Before the translation of debug log messages was removed, the
  translation was returning unicode.   Now that they are no longer
  translated they need to be explicitly marked as unicode.

  This was confirmed by discussion with dhellman.   See
  2014-07-23T13:48:23 in this log http://eavesdrop.openstack.org/irclogs
  /%23openstack-oslo/%23openstack-oslo.2014-07-23.log

  The problem was discovered when an exception was used as replacement
  text in a debug log message:

 LOG.debug(Failed to mount image %(ex)s), {'ex': e})

  In particular it was discovered as part of enabling lazy translation,
  where the exception message is replaced with an object that does not
  support str().   Note that this would also fail without lazy enabled,
  if a translation for the exception message was provided that was
  unicode.

  
  Example trace: 

   Traceback (most recent call last):
File nova/tests/virt/disk/test_api.py, line 78, in 
test_can_resize_need_fs_type_specified
  self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True))
File nova/virt/disk/api.py, line 208, in is_image_partitionless
  fs.setup()
File nova/virt/disk/vfs/localfs.py, line 80, in setup
  LOG.debug(Failed to mount image %(ex)s), {'ex': e})
File /usr/lib/python2.7/logging/__init__.py, line 1412, in debug
  self.logger.debug(msg, *args, **kwargs)
File /usr/lib/python2.7/logging/__init__.py, line 1128, in debug
  self._log(DEBUG, msg, args, **kwargs)
File /usr/lib/python2.7/logging/__init__.py, line 1258, in _log
  self.handle(record)
File /usr/lib/python2.7/logging/__init__.py, line 1268, in handle
  self.callHandlers(record)
File /usr/lib/python2.7/logging/__init__.py, line 1308, in callHandlers
  hdlr.handle(record)
File nova/test.py, line 212, in handle
  self.format(record)
File /usr/lib/python2.7/logging/__init__.py, line 723, in format
  return fmt.format(record)
File /usr/lib/python2.7/logging/__init__.py, line 464, in format
  record.message = record.getMessage()
File /usr/lib/python2.7/logging/__init__.py, line 328, in getMessage
  msg = msg % self.args
File 
/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo/i18n/_message.py,
 line 167, in __str__
  raise UnicodeError(msg)
  UnicodeError: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead.
  ==
  FAIL: nova.tests.virt.disk.test_api.APITestCase.test_resize2fs_e2fsck_fails
  tags: worker-3

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348262] [NEW] PKI and PKIZ tokens contain unnecessary whitespace

2014-07-24 Thread Dolph Mathews
Public bug reported:

In the race to produce smaller PKI tokens, we've overlooked that we can
produce smaller JSON bodies by removing all whitespace between
structural characters. For example, the following JSON blobs are all
equally valid:

  { "key" : "value" }

... as compared to what we're producing today:

  {"key": "value"}

... as compared to all unnecessary whitespace removed:

  {"key":"value"}

This optimization would save us a few bytes in both PKI and PKIZ tokens.
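
In standard-library terms the saving comes from passing compact separators
to json.dumps (illustrative):

    import json

    json.dumps({'key': 'value'})                         # '{"key": "value"}'
    json.dumps({'key': 'value'}, separators=(',', ':'))  # '{"key":"value"}'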

** Affects: keystone
 Importance: Wishlist
 Assignee: Dolph Mathews (dolph)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1348262

Title:
  PKI and PKIZ tokens contain unnecessary whitespace

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  In the race to produce smaller PKI tokens, we've overlooked that we
  can produce smaller JSON bodies by removing all whitespace between
  structural characters. For example, the following JSON blobs are all
  equally valid:

    { "key" : "value" }

  ... as compared to what we're producing today:

    {"key": "value"}

  ... as compared to all unnecessary whitespace removed:

    {"key":"value"}

  This optimization would save us a few bytes in both PKI and PKIZ
  tokens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1348262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346424] [NEW] Baremetal node id not supplied to driver

2014-07-24 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

A random overcloud baremetal node fails to boot during check-tripleo-
overcloud-f20. Occurs intermittently.

Full logs:

http://logs.openstack.org/26/105326/4/check-tripleo/check-tripleo-overcloud-f20/9292247/
http://logs.openstack.org/81/106381/2/check-tripleo/check-tripleo-overcloud-f20/ca8a59b/
http://logs.openstack.org/08/106908/2/check-tripleo/check-tripleo-overcloud-f20/e9894ca/


Seed's nova-compute log shows this exception:

Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 ERROR oslo.messaging.rpc.dispatcher 
[req-9f090bea-a974-4f3c-ab06-ebd2b7a5c9e6 ] Exception during message handling: 
Baremetal node id not supplied to driver for 
'e13f2660-b72d-4a97-afac-64ff0eecc448'
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 133, in _dispatch_and_reply
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher incoming.message))
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 176, in _dispatch
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, 
method, ctxt, args)
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 122, in _do_dispatch
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, 
method)(ctxt, **new_args)
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py, line 88, 
in wrapped
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher payload)
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, 
self.value, self.tb)
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py, line 71, 
in wrapped
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, 
**kw)
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py, 
line 291, in decorated_function
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher pass
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, 
self.value, self.tb)
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py, 
line 277, in decorated_function
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher return function(self, context, 
*args, **kwargs)
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py, 
line 341, in decorated_function
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE oslo.messaging.rpc.dispatcher return function(self, context, 
*args, **kwargs)
Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 13:46:07.981 
3608 TRACE 

[Yahoo-eng-team] [Bug 1348204] Re: test_encrypted_cinder_volumes_cryptsetup times out waiting for volume to be available

2014-07-24 Thread Matt Riedemann
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348204

Title:
  test_encrypted_cinder_volumes_cryptsetup times out waiting for volume
  to be available

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/15/109115/1/check/check-tempest-dsvm-
  full/168a5dd/console.html#_2014-07-24_01_07_09_115

  2014-07-24 01:07:09.116 | 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup[compute,image,volume]
  2014-07-24 01:07:09.116 | 

  2014-07-24 01:07:09.116 | 
  2014-07-24 01:07:09.116 | Captured traceback:
  2014-07-24 01:07:09.117 | ~~~
  2014-07-24 01:07:09.117 | Traceback (most recent call last):
  2014-07-24 01:07:09.117 |   File tempest/test.py, line 128, in wrapper
  2014-07-24 01:07:09.117 | return f(self, *func_args, **func_kwargs)
  2014-07-24 01:07:09.117 |   File 
tempest/scenario/test_encrypted_cinder_volumes.py, line 63, in 
test_encrypted_cinder_volumes_cryptsetup
  2014-07-24 01:07:09.117 | self.attach_detach_volume()
  2014-07-24 01:07:09.117 |   File 
tempest/scenario/test_encrypted_cinder_volumes.py, line 49, in 
attach_detach_volume
  2014-07-24 01:07:09.117 | self.nova_volume_detach()
  2014-07-24 01:07:09.117 |   File tempest/scenario/manager.py, line 757, 
in nova_volume_detach
  2014-07-24 01:07:09.117 | self._wait_for_volume_status('available')
  2014-07-24 01:07:09.117 |   File tempest/scenario/manager.py, line 710, 
in _wait_for_volume_status
  2014-07-24 01:07:09.117 | self.volume_client.volumes, self.volume.id, 
status)
  2014-07-24 01:07:09.118 |   File tempest/scenario/manager.py, line 230, 
in status_timeout
  2014-07-24 01:07:09.118 | not_found_exception=not_found_exception)
  2014-07-24 01:07:09.118 |   File tempest/scenario/manager.py, line 296, 
in _status_timeout
  2014-07-24 01:07:09.118 | raise exceptions.TimeoutException(message)
  2014-07-24 01:07:09.118 | TimeoutException: Request timed out
  2014-07-24 01:07:09.118 | Details: Timed out waiting for thing 
4ef6a14a-3fce-417f-aa13-5aab1789436e to become available

  I've actually been seeing this out of tree in our internal CI also, but I
  thought it was just us or our slow VMs; this is the first time I've seen
  it upstream.

  From the traceback in the console log, it looks like the volume does
  get to available status because it doesn't get out of that state when
  tempest is trying to delete the volume on tear down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346424] Re: Baremetal node id not supplied to driver

2014-07-24 Thread Dan Prince
** Project changed: tripleo = nova

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
 Assignee: (unassigned) = Dan Prince (dan-prince)

** Changed in: tripleo
   Importance: Undecided = Critical

** Changed in: tripleo
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346424

Title:
  Baremetal node id not supplied to driver

Status in OpenStack Compute (Nova):
  Triaged
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  A random overcloud baremetal node fails to boot during check-tripleo-
  overcloud-f20. Occurs intermittently.

  Full logs:

  
http://logs.openstack.org/26/105326/4/check-tripleo/check-tripleo-overcloud-f20/9292247/
  
http://logs.openstack.org/81/106381/2/check-tripleo/check-tripleo-overcloud-f20/ca8a59b/
  
http://logs.openstack.org/08/106908/2/check-tripleo/check-tripleo-overcloud-f20/e9894ca/

  
  Seed's nova-compute log shows this exception:

  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 ERROR oslo.messaging.rpc.dispatcher 
[req-9f090bea-a974-4f3c-ab06-ebd2b7a5c9e6 ] Exception during message handling: 
Baremetal node id not supplied to driver for 
'e13f2660-b72d-4a97-afac-64ff0eecc448'
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent 
call last):
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 133, in _dispatch_and_reply
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher incoming.message))
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 176, in _dispatch
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 122, in _do_dispatch
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py, line 88, 
in wrapped
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher payload)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py, line 71, 
in wrapped
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py, 
line 291, in decorated_function
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher pass
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py, 
line 277, in decorated_function
  Jul 21 13:46:08 

[Yahoo-eng-team] [Bug 1348288] [NEW] Resource tracker should report virt driver stats

2014-07-24 Thread Nicholas Randon
Public bug reported:

sha1 Nova at: 106fb458c7ac3cc17bb42d1b83ec3f4fa8284e71
sha1 ironic at: 036c79e38f994121022a69a0bc76917e0048fd63

The ironic driver passes stats to nova's resource tracker in
get_available_resources(). Sometimes these appear to get through to the
database without modification, sometimes they seem to be replaced
entirely by other stats generated by the resource tracker. The correct
behaviour should be to combine the two.
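
A minimal sketch of the intended behaviour (hypothetical variable names, not
the actual resource tracker code): the driver-supplied stats and the stats
computed by the resource tracker should be merged rather than one set
replacing the other.

    # Stats reported by the virt driver (values taken from the query below).
    driver_stats = {
        'cpu_arch': 'amd64',
        'ironic_driver': 'ironic.nova.virt.ironic.driver.IronicDriver',
    }

    # Stats the resource tracker computes itself.
    tracker_stats = {
        'num_instances': 1,
        'num_vcpus_used': 24,
        'io_workload': 0,
    }

    # Combine the two; in this sketch tracker-computed values win on
    # key collisions.
    combined = dict(driver_stats)
    combined.update(tracker_stats)
    print(combined)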

As an example, the following query on the compute_nodes table in nova's
database shows the contents for a tripleo system (all nodes are ironic):

mysql select hypervisor_hostname, stats from compute_nodes;
+--+-+
| hypervisor_hostname  | stats  

 |
+--+-+
| 4e014e26-2f90-4a91-a6f0-c1978df88369 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| fadb50bf-26ec-420c-a13f-f182e38569d6 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| ffe5a5bf-7151-468c-b9bb-980477e5f736 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| 752966ea-17f8-4d6d-87a4-03c91cb65354 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| f2f0ecb1-6234-4975-808f-a17534c9ae6c | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| 9adf4551-24f0-43a7-9267-a20cfa309137 | {cpu_arch: amd64, ironic_driver: 
ironic.nova.virt.ironic.driver.IronicDriver}  
 |
| 1bd13fc5-4938-4781-9680-ad1e0ccec77c | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| 88a39f5d-6174-47c9-9817-13d08bf2e079 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| ec6b5dc6-de38-4e23-a967-b87c10da37e3 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| ac52fd79-e0b9-4749-b794-590d5c181b4a | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| a1b81342-ed57-4310-8d5b-a2aa48718f1f | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| 0588e463-748a-4248-9110-6e18988cfa4e | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| 8f73d8dc-5d8c-47b0-a866-b829edc3667f | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| bac38b1d-f7f9-4770-9195-ff204a0c05c3 | {cpu_arch: amd64, ironic_driver: 
ironic.nova.virt.ironic.driver.IronicDriver}  
 |
| 62cc33f7-701b-47f6-8f50-3f7c1ca0f0a3 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| af7f79bf-b2c1-405b-9bc7-5370b93b08cf | {cpu_arch: amd64, ironic_driver: 
ironic.nova.virt.ironic.driver.IronicDriver}  
 |
| 4615c72a-9ea0-433e-8c52-308163112f89 | {cpu_arch: amd64, ironic_driver: 
ironic.nova.virt.ironic.driver.IronicDriver}  
 |
| 680e6aa7-9a84-41de-94ba-b761d48b4087 | {num_task_None: 1, io_workload: 0, 
num_instances: 1, num_vm_active: 1, num_vcpus_used: 24, 
num_os_type_None: 1, num_proj_505908300744403496b2e64b06606529: 1} |
| 

[Yahoo-eng-team] [Bug 1348309] [NEW] Migration of legacy router to distributed router not working

2014-07-24 Thread Carl Baldwin
Public bug reported:

This was a known backlog item when the DVR code merged.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348309

Title:
  Migration of legacy router to distributed router not working

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This was a known backlog item when the DVR code merged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348226] Re: openvswitch does not support RPC version 1.3

2014-07-24 Thread Armando Migliaccio
The openvswitch plugin is being removed from the tree at the end of Juno;
not sure if this is still relevant.

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348226

Title:
  openvswitch does not support RPC version 1.3

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When running devstack on master with Q_PLUGIN=openvswitch I try to
  boot an instance, but boot fails with q-svc and q-agt reporting that
  the endpoint does not support RPC version 1.3.
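
  A minimal sketch of where that version comes from (illustrative class,
  not the actual plugin code): each RPC endpoint advertises the highest
  version it implements via its oslo.messaging target, and calls pinned to
  a newer version are rejected with UnsupportedVersion.

      from oslo import messaging

      class PluginRpcCallbacks(object):
          # Requests made with a version newer than this (e.g. 1.3 against
          # a 1.2 endpoint) fail with UnsupportedVersion.
          target = messaging.Target(version='1.2')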

  q-svc logs:

  2014-07-24 13:38:41.450 ERROR oslo.messaging.rpc.dispatcher [^[[00;36m-] 
^[[01;35mException during message handling: Endpoint does not support RPC 
version 1.3^[[00m
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher 
^[[01;35m^[[00mTraceback (most recent call last):
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m  
File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 134, in _dispatch_and_reply
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m   
 incoming.message))
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m  
File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 186, in _dispatch
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m   
 raise UnsupportedVersion(version)
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher 
^[[01;35m^[[00mUnsupportedVersion: Endpoint does not support RPC version 1.3
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m
  2014-07-24 13:38:41.452 ERROR oslo.messaging._drivers.common [^[[00;36m-] 
^[[01;35mReturning exception Endpoint does not support RPC version 1.3 to 
caller^[[00m
  2014-07-24 13:38:41.452 ERROR oslo.messaging._drivers.common [^[[00;36m-] 
^[[01;35m['Traceback (most recent call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
186, in _dispatch\nraise UnsupportedVersion(version)\n', 
'UnsupportedVersion: Endpoint does not support RPC version 1.3\n']^[[00m

  q-agt logs:

  2014-07-24 13:38:53.738 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[^[[01;36mreq-6bff6e0e-d381-4ebb-a7e3-e9feef0165f3 ^[[00;36mNone None] 
^[[01;35mprocess_ancillary_network_ports - iteration:146 - failure while 
retrieving port details from server^[[00m
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00mTraceback 
(most recent call last):
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m  File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, 
line 1314, in process_ancillary_network_ports
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m
self.treat_ancillary_devices_added(port_info['added'])
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m  File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, 
line 1202, in treat_ancillary_devices_added
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00mraise 
DeviceListRetrievalError(devices=devices, error=e)
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
^[[01;35m^[[00mDeviceListRetrievalError: Unable to retrieve port details for 
devices: set([u'1ec476d2-f565-493e-82d2-8d9da89962fb']) because of error: 
Remote error: UnsupportedVersion Endpoint does not support RPC version 1.3
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m[u'Traceback 
(most recent call last):\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
186, in _dispatch\nraise UnsupportedVersion(version)\n', 
u'UnsupportedVersion: Endpoint does not support RPC version 1.3\n'].

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306559] Re: Fix python26 compatibility for RFCSysLogHandler

2014-07-24 Thread Jeff Peeler
For heat, this was merged in
https://git.openstack.org/cgit/openstack/heat/commit/?id=ea911b0210c4b4317de6bd371c25f5cb9c255655
and has already been released in j1.

** Changed in: heat
   Status: Confirmed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1306559

Title:
  Fix python26 compatibility for RFCSysLogHandler

Status in OpenStack Telemetry (Ceilometer):
  Triaged
Status in Cinder:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  Confirmed
Status in Murano:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Committed

Bug description:
  The currently used pattern in
  https://review.openstack.org/#/c/63094/15/openstack/common/log.py
  (lines 471-479) will fail on Python 2.6.x.
  To fix the broken Python 2.6.x compatibility, old-style explicit
  superclass method calls should be used instead.

  Here is an example of how to check this for Python v2.7 and v2.6: 
  import logging.handlers
  print type(logging.handlers.SysLogHandler)
  print type(logging.Handler)

  Results would be:
  Python 2.7: <type 'type'>, so super() may be used for
  RFCSysLogHandler(logging.handlers.SysLogHandler)
  Python 2.6: <type 'classobj'>, so super() may *NOT* be used for
  RFCSysLogHandler(logging.handlers.SysLogHandler)
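
  A minimal sketch of the suggested fix (the class name follows the bug
  report; the body is illustrative only): call the superclass explicitly so
  the handler works whether SysLogHandler is an old-style class (2.6) or a
  new-style class (2.7).

      import logging
      import logging.handlers

      class RFCSysLogHandler(logging.handlers.SysLogHandler):
          def __init__(self, *args, **kwargs):
              # On Python 2.6 SysLogHandler is an old-style (classic) class,
              # so super(RFCSysLogHandler, self).__init__(...) raises
              # TypeError. The explicit call works on both 2.6 and 2.7.
              logging.handlers.SysLogHandler.__init__(self, *args, **kwargs)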

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1306559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1094134] Re: When using LVM backed images the audit 'Total Disk' reports for system drive

2014-07-24 Thread Dan Genin
This has been fixed in

commit 02ea0f9f9e5c7f022b465a96ba3a4f089c633bee
Merge: cd2008c 9d3f524
Author: Jenkins jenk...@review.openstack.org
Date:   Fri Jan 11 23:35:10 2013 +

 Merge Correct the calculating of disk size when using lvm disk
backend.


** Changed in: nova
   Status: Confirmed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1094134

Title:
  When using LVM backed images the audit 'Total Disk' reports for system
  drive

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When running nova with LVM backed instances, the 'Total Disk' usage
  audit doesn't report the usage of the instance VG

  In the case below it should report 4.5Tb not 629Gb.
  This could cause oversubscription of the backing VG.

  2012-12-27 16:47:04 nova.compute.claims: AUDIT [req-97261ff9-d8ab-
  46b0-a2c8-14fff60fe90a cbe2adf3fccb415d941c9d4092cbd840
  29eab673891a46f8b44e78830243d2b9] Total Disk: 629 GB, used: 0 GB

  
   LV  VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
lv_nova vgsystem -wi-ao 639.75g  
lvhome  vgsystem -wi-ao   8.00g  
lvopt   vgsystem -wi-ao   4.00g  
lvroot  vgsystem -wi-ao   4.00g  
lvtmp   vgsystem -wi-ao   4.00g  
lvusr   vgsystem -wi-ao   5.97g  
lvvar   vgsystem -wi-ao 136.00g   

VG #PV #LV #SN Attr   VSize   VFree
nova-instances   1   1   0 wz--n-   4.54t 4.52t
vgsystem 1   7   0 wz--n- 801.72g0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1094134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1198831] Re: Compute service wrong report disk usage when libvirt_images_type is LVM

2014-07-24 Thread Dan Genin
This appears to be a duplicate of
https://bugs.launchpad.net/nova/+bug/1094134, which was fixed in

commit 02ea0f9f9e5c7f022b465a96ba3a4f089c633bee
Merge: cd2008c 9d3f524
Author: Jenkins jenk...@review.openstack.org
Date: Fri Jan 11 23:35:10 2013 +

Merge Correct the calculating of disk size when using lvm disk
backend.


** Changed in: nova
   Status: Triaged = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1198831

Title:
  Compute service wrong report disk usage when  libvirt_images_type is
  LVM

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Hello,

  I observed wrong compute node disk usage when I set libvirt_images_type
  to lvm. The compute service still reports filesystem usage
  (instances_path) instead of volume group usage
  (libvirt_images_volume_group).

  For example:
    instances_path ~= 50GB
    libvirt_images_volume_group = foo_vg, where foo_vg ~= 1000GB
    create a few instances (6GB root volume + 300GB ephemeral),
    create a few volumes in the same VG (via cinder or nova-volumes, 580GB in size)

  The status shows:
  -
  local_gb_used = OK
  local_gb = Wrong
  free_disk_gb = Wrong
  disk_available_least = Wrong

  select s.availability_zone, c.local_gb, c.local_gb_used, c.free_disk_gb, c.disk_available_least
  from compute_nodes as c, services as s
  where s.id = c.service_id and s.availability_zone = 'foo_av_zone';

  +-------------------+----------+---------------+--------------+----------------------+
  | availability_zone | local_gb | local_gb_used | free_disk_gb | disk_available_least |
  +-------------------+----------+---------------+--------------+----------------------+
  | some_av_zone      |       49 |           306 |         -257 |                   49 |
  +-------------------+----------+---------------+--------------+----------------------+

  Instead of:
  
  local_gb_used = OK (ephemeral + root volume)
  local_gb = OK (whole space)
  free_disk_gb =  OK (free = local_gb - ephemeral - root volume)
  disk_available_least = OK (free space in vg disk_available_least = 
local_gb - ephemeral - root volume - volume)

  select s.availability_zone, c.local_gb, c.local_gb_used, c.free_disk_gb, c.disk_available_least
  from compute_nodes as c, services as s
  where s.id = c.service_id and s.availability_zone = 'foo_av_zone';

  +-------------------+----------+---------------+--------------+----------------------+
  | availability_zone | local_gb | local_gb_used | free_disk_gb | disk_available_least |
  +-------------------+----------+---------------+--------------+----------------------+
  | foo_av_zone       |     1003 |           306 |          697 |                  117 |
  +-------------------+----------+---------------+--------------+----------------------+

  I attached a patch which fixes the reporting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1198831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348368] [NEW] ERROR(s) and WARNING(s) during tox -e docs

2014-07-24 Thread Davanum Srinivas (DIMS)
Public bug reported:

ERROR(s):

dims@dims-mac:~/openstack/nova$ grep ERROR ~/junk/docs.log | sort | uniq -c
   2 /Users/dims/openstack/nova/nova/compute/manager.py:docstring of 
nova.compute.manager.ComputeVirtAPI.wait_for_instance_event:24: ERROR: 
Unexpected indentation.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:100: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:110: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:135: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:138: ERROR: Unknown interpreted text 
role paramref.
   4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:143: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:156: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:190: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:228: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:233: ERROR: Unknown interpreted text 
role paramref.
   4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:255: ERROR: Unknown interpreted text 
role paramref.
   6 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:265: ERROR: Unknown interpreted text 
role paramref.
   4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:282: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:293: ERROR: Unknown interpreted text 
role paramref.
   4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:299: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:307: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:318: ERROR: Unknown interpreted text 
role paramref.
   4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:322: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:360: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:389: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:432: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:446: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:452: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:513: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:517: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:546: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:559: ERROR: Unknown interpreted text 
role paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:55: ERROR: Unknown interpreted text role 
paramref.
   2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:572: ERROR: Unknown interpreted text 
role paramref.
   4 

[Yahoo-eng-team] [Bug 1004114] Re: Password logging

2014-07-24 Thread Nathan Kinder
We should write an OSSN for this so people are aware of the fact that
passwords for users will be logged in Horizon if debug logging is
enabled.  Now that a keystoneclient patch has been merged, we will soon
have a release that doesn't log passwords anymore.  We should recommend
using the newer keystoneclient as soon as it's available.

** Also affects: ossn
   Importance: Undecided
   Status: New

** Changed in: ossn
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1004114

Title:
  Password logging

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Security Notes:
  New
Status in Python client library for Keystone:
  Fix Committed

Bug description:
  When the log level is set to DEBUG, keystoneclient's full-request
  logging mechanism kicks in, exposing plaintext passwords, etc.

  This bug is mostly out of the scope of Horizon, however Horizon can
  also be more secure in this regard. We should make sure that wherever
  we *are* handling sensitive data we use Django's error report
  filtering mechanisms so they don't appear in tracebacks, etc.
  (https://docs.djangoproject.com/en/dev/howto/error-reporting
  /#filtering-error-reports)
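
  For example, Django's debug decorators can keep such values out of error
  reports (a minimal sketch; the view and helper names are illustrative,
  not Horizon code):

      from django.http import HttpResponse
      from django.views.decorators.debug import (sensitive_post_parameters,
                                                 sensitive_variables)

      def fake_authenticate(password):
          # Stand-in for a real keystone call (illustrative only).
          return 'token' if password else None

      @sensitive_post_parameters('password')
      @sensitive_variables('password', 'auth_token')
      def login_view(request):
          # If this view raises, Django's error report replaces the
          # 'password' POST parameter and these local variables with
          # asterisks instead of their plaintext values.
          password = request.POST.get('password')
          auth_token = fake_authenticate(password)
          return HttpResponse(status=204)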

  Keystone may also want to look at respecting such annotations in their
  logging mechanism, i.e. if Django were properly annotating these data
  objects, keystoneclient could check for those annotations and properly
  sanitize the log output.

  If not this exact mechanism, then something similar would be wise.

  For the time being, it's also worth documenting in both projects that
  a log level of DEBUG will log passwords in plain text.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1004114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348226] Re: openvswitch does not support RPC version 1.3

2014-07-24 Thread Robbie Harwood
This happens on master with Q_PLUGIN=linuxbridge as well.  I can provide
logs, but they're almost identical to my eye.

** Changed in: neutron
   Status: Invalid = New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348226

Title:
  openvswitch does not support RPC version 1.3

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When running devstack on master with Q_PLUGIN=openvswitch I try to
  boot an instance, but boot fails with q-svc and q-agt reporting that
  the endpoint does not support RPC version 1.3.

  q-svc logs:

  2014-07-24 13:38:41.450 ERROR oslo.messaging.rpc.dispatcher [^[[00;36m-] 
^[[01;35mException during message handling: Endpoint does not support RPC 
version 1.3^[[00m
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher 
^[[01;35m^[[00mTraceback (most recent call last):
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m  
File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 134, in _dispatch_and_reply
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m   
 incoming.message))
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m  
File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 186, in _dispatch
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m   
 raise UnsupportedVersion(version)
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher 
^[[01;35m^[[00mUnsupportedVersion: Endpoint does not support RPC version 1.3
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m
  2014-07-24 13:38:41.452 ERROR oslo.messaging._drivers.common [^[[00;36m-] 
^[[01;35mReturning exception Endpoint does not support RPC version 1.3 to 
caller^[[00m
  2014-07-24 13:38:41.452 ERROR oslo.messaging._drivers.common [^[[00;36m-] 
^[[01;35m['Traceback (most recent call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
186, in _dispatch\nraise UnsupportedVersion(version)\n', 
'UnsupportedVersion: Endpoint does not support RPC version 1.3\n']^[[00m

  q-agt logs:

  2014-07-24 13:38:53.738 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[^[[01;36mreq-6bff6e0e-d381-4ebb-a7e3-e9feef0165f3 ^[[00;36mNone None] 
^[[01;35mprocess_ancillary_network_ports - iteration:146 - failure while 
retrieving port details from server^[[00m
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00mTraceback 
(most recent call last):
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m  File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, 
line 1314, in process_ancillary_network_ports
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m
self.treat_ancillary_devices_added(port_info['added'])
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m  File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, 
line 1202, in treat_ancillary_devices_added
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00mraise 
DeviceListRetrievalError(devices=devices, error=e)
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
^[[01;35m^[[00mDeviceListRetrievalError: Unable to retrieve port details for 
devices: set([u'1ec476d2-f565-493e-82d2-8d9da89962fb']) because of error: 
Remote error: UnsupportedVersion Endpoint does not support RPC version 1.3
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m[u'Traceback 
(most recent call last):\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
186, in _dispatch\nraise UnsupportedVersion(version)\n', 
u'UnsupportedVersion: Endpoint does not support RPC version 1.3\n'].

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348402] [NEW] DB Instances are in Error State

2014-07-24 Thread Amogh
Public bug reported:

Steps to Reproduce the Issue:


1. Log in to DevStack with an admin account.
2. Create a new DB instance on the Databases page.
3. Observe that the database instance goes into the Error state.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Databses_Instance_Error
   
https://bugs.launchpad.net/bugs/1348402/+attachment/4162281/+files/Databses_Instance_Error.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348402

Title:
  DB Instances are in Error State

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to Reproduce the Issue:

  
  1. Log in to DevStack with an admin account.
  2. Create a new DB instance on the Databases page.
  3. Observe that the database instance goes into the Error state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348409] [NEW] Object Name may not be auto populated

2014-07-24 Thread Mohan Seri
Public bug reported:

After choosing a file in the Container > Upload Object pop-up, the Object
Name field is auto-populated with the file name.

If the user wants to keep the file name as the Object Name (which is
mandatory), they cannot upload the file because the Upload Object button is
disabled.

Since the Object Name is mandatory and the user does not want to enter one
(it is already populated), the user cannot upload the file while Upload
Object stays disabled.

For the user to upload the object, they have to make at least one keystroke
in the text box so that the Upload Object button becomes enabled. This is
confusing.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Upload Object Issue.PNG
   
https://bugs.launchpad.net/bugs/1348409/+attachment/4162353/+files/Upload%20Object%20Issue.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348409

Title:
  Object Name may not be auto populated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After choosing a file in the Container > Upload Object pop-up, the Object
  Name field is auto-populated with the file name.

  If the user wants to keep the file name as the Object Name (which is
  mandatory), they cannot upload the file because the Upload Object button
  is disabled.

  Since the Object Name is mandatory and the user does not want to enter
  one (it is already populated), the user cannot upload the file while
  Upload Object stays disabled.

  For the user to upload the object, they have to make at least one
  keystroke in the text box so that the Upload Object button becomes
  enabled. This is confusing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348404] [NEW] Object Name may not be auto populated

2014-07-24 Thread Mohan Seri
Public bug reported:

After choosing a file in the Container > Upload Object pop-up, the Object
Name field is auto-populated with the file name.

If the user wants to keep the file name as the Object Name (which is
mandatory), they cannot upload the file because the Upload Object button is
disabled.

Since the Object Name is mandatory and the user does not want to enter one
(it is already populated), the user cannot upload the file while Upload
Object stays disabled.

For the user to upload the object, they have to make at least one keystroke
in the text box so that the Upload Object button becomes enabled. This is
confusing.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Upload Object Issue.PNG
   
https://bugs.launchpad.net/bugs/1348404/+attachment/4162338/+files/Upload%20Object%20Issue.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348404

Title:
  Object Name may not be auto populated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After choosing a file in the Container > Upload Object pop-up, the Object
  Name field is auto-populated with the file name.

  If the user wants to keep the file name as the Object Name (which is
  mandatory), they cannot upload the file because the Upload Object button
  is disabled.

  Since the Object Name is mandatory and the user does not want to enter
  one (it is already populated), the user cannot upload the file while
  Upload Object stays disabled.

  For the user to upload the object, they have to make at least one
  keystroke in the text box so that the Upload Object button becomes
  enabled. This is confusing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348411] [NEW] DB Instance created in computeInstance Page

2014-07-24 Thread Amogh
Public bug reported:

Steps to reproduce the Issue:

1. Log in to DevStack with an admin account.
2. Go to the DB Instances page and create an instance.
3. Observe that the DB instance is created both on the DB Instances page
and on the Compute > Instances page.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: DB_Instance_in Compute  Instance Page
   
https://bugs.launchpad.net/bugs/1348411/+attachment/4162354/+files/DB_Instance%20in%20Instance%20Page.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348411

Title:
  DB Instance created in computeInstance Page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce the Issue:

  1. Log in to DevStack with an admin account.
  2. Go to the DB Instances page and create an instance.
  3. Observe that the DB instance is created both on the DB Instances page
  and on the Compute > Instances page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348415] [NEW] Duplicate mysql Images create after enabling Trove Service

2014-07-24 Thread Amogh
Public bug reported:

Steps to Reproduce the Issue:

1. Log in to DevStack using an admin account.
2. Navigate to the Images page.
3. Observe that duplicate images are created for MySQL (ubunutu_mysql).

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Duplicate Images after enabling Trove service.
   
https://bugs.launchpad.net/bugs/1348415/+attachment/4162361/+files/Duplicate_Images.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348415

Title:
  Duplicate mysql Images create after enabling Trove Service

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to Reproduce the Issue:

  1. Log in to DevStack using an admin account.
  2. Navigate to the Images page.
  3. Observe that duplicate images are created for MySQL (ubunutu_mysql).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348421] [NEW] VM Can't get DHCP IP

2014-07-24 Thread Ramy Allam
Public bug reported:

Hello,

I'm running OpenStack RDO with the following setup.

- 1x Controller Node ( Neutron - Nova -   glance - Horizon - Keystone - GRE +   
FlatDHCPManager)
-  10x Compute Nodes (   Nova Compute  - KVM ) 
- OS : CentOS 6 - 64bit

Suddenly all VMs on the compute nodes can't get a new IP address from the
controller node. It was working properly hours ago without any
modifications from me. I tried to restart neutron-dhcp-agent, qpidd and
dnsmasq with no luck.

This log from the controller node - **many errors**  
 http://paste.openstack.org/show/88016/

And this from the compute node
 http://paste.openstack.org/show/88017/

 Realtime compute node log after restarting vm network service
 http://paste.openstack.org/show/88039/

Installed packages on Controller Node :
# rpm -qa | grep -i openstack

openstack-selinux-0.1.3-2.el6ost.noarch
openstack-puppet-modules-2013.2-9.1.el6.noarch
openstack-ceilometer-api-2013.2.3-2.el6.noarch
openstack-packstack-2013.2.1-0.36.dev1013.el6.noarch
openstack-nova-scheduler-2013.2.3-1.el6.noarch
openstack-ceilometer-common-2013.2.3-2.el6.noarch
python-django-openstack-auth-1.1.2-1.el6.noarch
openstack-ceilometer-central-2013.2.3-2.el6.noarch
openstack-ceilometer-collector-2013.2.3-2.el6.noarch
openstack-neutron-openvswitch-2013.2.3-9.el6.noarch
openstack-nova-common-2013.2.3-1.el6.noarch
openstack-packstack-puppet-2013.2.1-0.36.dev1013.el6.noarch
openstack-glance-2013.2.3-2.el6.noarch
openstack-nova-conductor-2013.2.3-1.el6.noarch
openstack-nova-novncproxy-2013.2.3-1.el6.noarch
openstack-nova-cert-2013.2.3-1.el6.noarch
openstack-keystone-2013.2.3-3.el6.noarch
openstack-neutron-2013.2.3-9.el6.noarch
openstack-ceilometer-alarm-2013.2.3-2.el6.noarch
openstack-dashboard-2013.2.3-1.el6.noarch
openstack-nova-api-2013.2.3-1.el6.noarch
openstack-nova-console-2013.2.3-1.el6.noarch
openstack-utils-2013.2-2.el6.noarch

On Controller Node

# ps aux | grep -i dhcp
nobody   12639  0.0  0.0  12884   864 ?SJul19   0:00 dnsmasq 
--no-hosts --no-resolv --strict-order --bind-interfaces 
--interface=tap7139b265-41 --except-interface=lo 
--pid-file=/var/lib/neutron/dhcp/c19ca2ea-8278-4069-bfea-dadd92961cac/pid 
--dhcp-hostsfile=/var/lib/neutron/dhcp/c19ca2ea-8278-4069-bfea-dadd92961cac/host
 
--dhcp-optsfile=/var/lib/neutron/dhcp/c19ca2ea-8278-4069-bfea-dadd92961cac/opts 
--leasefile-ro --dhcp-range=tag0,10.0.0.0,static,86400s --dhcp-lease-max=256 
--conf-file= --domain=openstacklocal
neutron  24884  0.0  0.1 273748 32140 ?SJul24   0:00 
/usr/bin/python /usr/bin/neutron-dhcp-agent --log-file 
/var/log/neutron/dhcp-agent.log --config-file 
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/dhcp_agent.ini

# ip netns
qrouter-id
qdhcp-id

# ip netns exec qdhcp-network-id ip a
16: tap7139b265-41: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
link/ether fa:16:3e:2e:18:35 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global tap7139b265-41
inet6 fe80::f816:3eff:fe2e:1835/64 scope link 
   valid_lft forever preferred_lft forever
17: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever

# ip netns exec qdhcp-network-id ifconfig
lo        Link encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:2304 (2.2 KiB)  TX bytes:2304 (2.2 KiB)

tap7139b265-41 Link encap:Ethernet  HWaddr FA:16:3E:2E:18:35  
  inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe2e:1835/64 Scope:Link
  UP BROADCAST RUNNING  MTU:1500  Metric:1
  RX packets:91007 errors:0 dropped:0 overruns:0 frame:0
  TX packets:727 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:4633459 (4.4 MiB)  TX bytes:193767 (189.2 KiB)

Regards,

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348421

Title:
  VM Can't get DHCP IP

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hello,

  I'm running OpenStack RDO with the following setup.

  - 1x Controller Node ( Neutron - Nova -   glance - Horizon - Keystone - GRE 

[Yahoo-eng-team] [Bug 1323511] Re: notification will be emitted if deleting a non-exist floatingip

2014-07-24 Thread Liusheng
** Changed in: neutron
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323511

Title:
  notification will be emitted if deleting a non-exist floatingip

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  If  I try to delete a non-exist floatingip, neutron will generate a
  delete.start notification, unlike others nova's deleting API ,this
  is unreasonable, and the notification will be capture by ceilometer
  and effect the metric in ceilometer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348447] [NEW] Enable metadata when create server groups

2014-07-24 Thread Jay Lau
Public bug reported:

The instance_group object already supports instance group metadata, but
the API extension does not.

We should enable this by default.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348447

Title:
  Enable metadata when create server groups

Status in OpenStack Compute (Nova):
  New

Bug description:
  The instance_group object already supports instance group metadata, but
  the API extension does not.

  We should enable this by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323729] Re: Remove Open vSwitch and Linuxbridge plugins from the Neutron tree

2014-07-24 Thread Armando Migliaccio
We'll need to remove the options from DevStack too; we don't want people
to freak out if they still use an old localrc!

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New = Confirmed

** Changed in: devstack
 Assignee: (unassigned) = Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323729

Title:
  Remove Open vSwitch and Linuxbridge plugins from the Neutron tree

Status in devstack - openstack dev environments:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  This bug will track the removal of the Open vSwitch and Linuxbridge
  plugins from the Neutron source tree. These were deprecated in
  Icehouse and will be removed before Juno releases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1323729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348226] Re: openvswitch does not support RPC version 1.3

2014-07-24 Thread Armando Migliaccio
This issue was introduced in Juno. Since both plugins are marked for
removal (and I don't believe the decision is going to be reverted), I
don't think this is worth addressing. That said, if things change, we'll
definitely need to look into it.

** Changed in: neutron
   Status: New = Invalid

** Summary changed:

- openvswitch does not support RPC version 1.3
+ openvswitch/linuxbridge outdated RPC support

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348226

Title:
  openvswitch/linuxbridge outdated RPC support

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When running devstack on master with Q_PLUGIN=openvswitch I try to
  boot an instance, but boot fails with q-svc and q-agt reporting that
  the endpoint does not support RPC version 1.3.

  q-svc logs:

  2014-07-24 13:38:41.450 ERROR oslo.messaging.rpc.dispatcher [^[[00;36m-] 
^[[01;35mException during message handling: Endpoint does not support RPC 
version 1.3^[[00m
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher 
^[[01;35m^[[00mTraceback (most recent call last):
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m  
File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 134, in _dispatch_and_reply
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m   
 incoming.message))
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m  
File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 186, in _dispatch
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m   
 raise UnsupportedVersion(version)
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher 
^[[01;35m^[[00mUnsupportedVersion: Endpoint does not support RPC version 1.3
  2014-07-24 13:38:41.450 TRACE oslo.messaging.rpc.dispatcher ^[[01;35m^[[00m
  2014-07-24 13:38:41.452 ERROR oslo.messaging._drivers.common [^[[00;36m-] 
^[[01;35mReturning exception Endpoint does not support RPC version 1.3 to 
caller^[[00m
  2014-07-24 13:38:41.452 ERROR oslo.messaging._drivers.common [^[[00;36m-] 
^[[01;35m['Traceback (most recent call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
186, in _dispatch\nraise UnsupportedVersion(version)\n', 
'UnsupportedVersion: Endpoint does not support RPC version 1.3\n']^[[00m

  q-agt logs:

  2014-07-24 13:38:53.738 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[^[[01;36mreq-6bff6e0e-d381-4ebb-a7e3-e9feef0165f3 ^[[00;36mNone None] 
^[[01;35mprocess_ancillary_network_ports - iteration:146 - failure while 
retrieving port details from server^[[00m
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00mTraceback 
(most recent call last):
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m  File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, 
line 1314, in process_ancillary_network_ports
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m
self.treat_ancillary_devices_added(port_info['added'])
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m  File 
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, 
line 1202, in treat_ancillary_devices_added
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00mraise 
DeviceListRetrievalError(devices=devices, error=e)
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
^[[01;35m^[[00mDeviceListRetrievalError: Unable to retrieve port details for 
devices: set([u'1ec476d2-f565-493e-82d2-8d9da89962fb']) because of error: 
Remote error: UnsupportedVersion Endpoint does not support RPC version 1.3
  2014-07-24 13:38:53.738 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ^[[01;35m^[[00m[u'Traceback 
(most recent call last):\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
186, in _dispatch\nraise UnsupportedVersion(version)\n', 
u'UnsupportedVersion: Endpoint does not support RPC version 1.3\n'].

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : 

[Yahoo-eng-team] [Bug 1346820] Re: Middeware auth_token fails with scoped federated saml token

2014-07-24 Thread Morgan Fainberg
If anything this is a bug against the keystonemiddleware package not
keystone.

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1346820

Title:
  Middeware auth_token fails with scoped federated saml token

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Identity  (Keystone) Middleware:
  New

Bug description:
  Do the following steps:
  1) Set up keystone for federation.
  2) Generate an unscoped federated token.
  3) Generate a scoped token using the token from step 2.
  4) Set up nova/glance to use the keystone v3 API.
  5) Try an image list command using the following request:

  Request

  GET http://sp.machine:9292/v2/images
  Headers:
  Content-Type: application/json
  Accept: application/json
  X-Auth-Token: e92a49262a8d403db838d6494e4f9991

  6) This will break the auth_token middleware (middleware/auth_token.py)
  with a KeyError at the following place:

  user = token['user']
  user_domain_id = user['domain']['id']
  user_domain_name = user['domain']['name']
  in the function _build_user_headers.

  This is because the token does not contain any domain id or name under
  the user info, since federated tokens have no information about the
  user

  This can be fixed simply by putting an if condition around the
  problematic code. I have tested this fix and was then able to get the
  image list and server list using the glance and nova REST APIs.

  Example
  vim /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py

  
   893 if 'domain' in user:
   894     user_domain_id = user['domain']['id']
   895     user_domain_name = user['domain']['name']
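
  A minimal sketch of the guarded lookup in context (hypothetical function,
  not the actual middleware code):

      def build_user_domain_headers(token):
          user = token['user']
          headers = {}
          # Federated tokens may carry no domain info for the user,
          # so only set the headers when the key is present.
          if 'domain' in user:
              headers['X-User-Domain-Id'] = user['domain']['id']
              headers['X-User-Domain-Name'] = user['domain']['name']
          return headers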

  
  Following is the token information; note that there is no domain under the user:

  {
    token: {
  methods: [
    saml2
  ],
  roles: [
    {
  id: aad3b40ebb3b442f8fe85e88b21f3b4c,
  name: admin
    }
  ],
  expires_at: 2014-07-22T10:15:05.367852Z,
  project: {
    domain: {
  id: default,
  name: Default
    },
    id: 6e99b7d923bc437381fd1b2b4d890339,
    name: admin
  },
  catalog: [
    {
  endpoints: [
    {
  url: https://127.0.0.1/keystone/main/v3;,
  interface: internal,
  region: regionOne,
  id: f5dad391109542cba959d2e27c5fe3a2
    },
    {
  url: https://172.20.15.103:8443/keystone/main/v3;,
  interface: public,
  region: regionOne,
  id: 4f76970e4ab5497d9149d56d455499ac
    },
    {
  url: https://172.20.15.103:8443/keystone/admin/v3;,
  interface: admin,
  region: regionOne,
  id: b85e76ca32f640c4a4d84068c71d3bf2
    },
    {
  url: https://172.20.15.103:8443/keystone/admin/v2.0;,
  interface: admin,
  region: regionOne,
  id: 1ae909491d754aeb8c8b8a5c5fa6ad47
    },
    {
  url: https://127.0.0.1/keystone/main/v2.0;,
  interface: internal,
  region: regionOne,
  id: daf4ce3876d04285a106d86e0fea9bd1
    },
    {
  url: https://172.20.15.103:8443/keystone/main/v2.0;,
  interface: public,
  region: regionOne,
  id: f763c80100954bc4805cf51b3dddb84b
    }
  ],
  type: identity,
  id: 0f79e21861a94fcd84b72cae3ebd79e5
    },
    {
  endpoints: [
    {
  url: http://172.20.15.103:9292;,
  interface: admin,
  region: RegionOne,
  id: 16ffa8cebadd4d239744ea168efcd109
    },
    {
  url: http://172.20.15.103:9292;,
  interface: internal,
  region: RegionOne,
  id: 944adaa070f44f21aa8a73fab15f07bb
    },
    {
  url: http://127.0.0.1:9292;,
  interface: public,
  region: RegionOne,
  id: cd945f6a5ee8410bbfe8d3572e23ee5d
    }
  ],
  type: image,
  id: fe5d67da897b4359810d95e2c591fe21
    },
    {
  endpoints: [
    {
  url: 
http://172.20.15.103:8776/v1/6e99b7d923bc437381fd1b2b4d890339;,
  interface: admin,
  region: RegionOne,
  id: 6d93d29279a6483783298eb67159b5c6
    },
    {
  url: 
http://172.20.15.103:8776/v1/6e99b7d923bc437381fd1b2b4d890339;,
  interface: internal,
  region: RegionOne,
  id: 9416222ad31a411294718b8fe4988daf
    },
  

[Yahoo-eng-team] [Bug 1348479] [NEW] _extend_extra_router_dict does not handle boolean correctly

2014-07-24 Thread Armando Migliaccio
Public bug reported:

Method:

https://github.com/openstack/neutron/blob/master/neutron/db/l3_attrs_db.py#L50

is used to add extension attributes to the router object during the
handling of the API response. When attributes are unspecified, the
router is extended with default values.

In the case of boolean attributes things don't work as they should,
because a default value of True takes over on the right-hand side of the
boolean expression on:

https://github.com/openstack/neutron/blob/master/neutron/db/l3_attrs_db.py#L56

The end user is thus led to believe that the server did not honor the
request, when in fact it did.
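
A minimal sketch of the failure mode (hypothetical function and attribute
names, not the actual l3_attrs_db code): a 'value or default' expression
cannot distinguish an explicit False from an unset attribute.

    def extend_bad(value, default=True):
        return value or default          # False or True -> True (bug)

    def extend_good(value, default=True):
        # Fall back to the default only when the attribute is unset.
        return default if value is None else value

    print(extend_bad(False))    # True  -- the user's False is lost
    print(extend_good(False))   # False -- the user's value is honored
    print(extend_good(None))    # True  -- default applied only when unset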

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348479

Title:
  _extend_extra_router_dict does not handle boolean correctly

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Method:

  https://github.com/openstack/neutron/blob/master/neutron/db/l3_attrs_db.py#L50

  is used to add extension attributes to the router object during the
  handling of the API response. When attributes are unspecified, the
  router is extended with default values.

  In the case of boolean attributes things don't work as they should,
  because a default value of True takes over on the right-hand side of the
  boolean expression on:

  https://github.com/openstack/neutron/blob/master/neutron/db/l3_attrs_db.py#L56

  The end user is thus led to believe that the server did not honor the
  request, when in fact it did.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp