[Yahoo-eng-team] [Bug 1440773] Re: Remove WritableLogger as eventlet has a real logger interface in 0.17.2

2015-06-09 Thread Thomas Bechtold
** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
 Assignee: (unassigned) => Thomas Bechtold (toabctl)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440773

Title:
  Remove WritableLogger as eventlet has a real logger interface in
  0.17.2

Status in Manila:
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Logging configuration library for OpenStack:
  Fix Committed

Bug description:
  Info from Sean on IRC:

  the patch to use a real logger interface in eventlet has been released
  in 0.17.2, which means we should be able to phase out
  https://github.com/openstack/oslo.log/blob/master/oslo_log/loggers.py

  Eventlet PR was:
  https://github.com/eventlet/eventlet/pull/75

  thanks,
  dims
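
  A minimal sketch of what this enables, assuming a trivial WSGI app (the
  app below is illustrative); WritableLogger is oslo.log's file-like shim
  around a real logger:

      import logging

      import eventlet
      from eventlet import wsgi

      LOG = logging.getLogger(__name__)

      def app(environ, start_response):
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return [b'ok']

      sock = eventlet.listen(('127.0.0.1', 8080))

      # Before eventlet 0.17.2 the logger had to be wrapped so eventlet
      # could call .write() on it:
      #   from oslo_log import loggers
      #   wsgi.server(sock, app, log=loggers.WritableLogger(LOG))
      # From 0.17.2 on, a real logger can be passed directly:
      wsgi.server(sock, app, log=LOG)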

To manage notifications about this bug go to:
https://bugs.launchpad.net/manila/+bug/1440773/+subscriptions



[Yahoo-eng-team] [Bug 1463300] [NEW] Delete device error when delete instance from nova firstly

2015-06-09 Thread changzhi
Public bug reported:

I create a device in Tacker using nova's driver. This device cannot be
deleted by Tacker's command if I delete the instance from nova directly.
It is necessary to catch the exception in nova's driver. I fix it in
https://review.openstack.org/#/c/189580/.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463300

Title:
  Delete device error when delete instance from nova firstly

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I create a device in Tacker using nova's driver. This device cannot be
  deleted by Tacker's command if I delete the instance from nova directly.
  It is necessary to catch the exception in nova's driver. I fix it in
  https://review.openstack.org/#/c/189580/.
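
  A minimal sketch of the fix pattern (the helper name is illustrative,
  not Tacker's actual code): treat an instance that is already gone as
  successfully deleted.

      from novaclient import exceptions as nova_exc

      def delete_device_instance(nova_client, server_id):
          """Delete the backing server, tolerating out-of-band deletion."""
          try:
              nova_client.servers.delete(server_id)
          except nova_exc.NotFound:
              # The instance was already removed directly from nova, so
              # proceed and let Tacker clean up its own device record.
              pass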

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463300/+subscriptions



[Yahoo-eng-team] [Bug 1463301] [NEW] Migration 589f9237ca0e failed during upgrade on PostgreSQL

2015-06-09 Thread Ann Kamyshnikova
Public bug reported:

PostgreSQL is stricter about types than MySQL: it creates a separate
type when an Enum is created. Migration 589f9237ca0e tries to create the
type profile_type, but a type with that name was already created in the
havana_initial migration.

Trace with exception: http://paste.openstack.org/show/276962/

Steps to reproduce:
 
1. neutron-db-manage ... upgrade juno
2. neutron-db-manage ... upgrade head

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463301

Title:
  Migration 589f9237ca0e failed during upgrade on PostgreSQL

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  PostgreSQL is stricter about types than MySQL: it creates a separate
  type when an Enum is created. Migration 589f9237ca0e tries to create the
  type profile_type, but a type with that name was already created in the
  havana_initial migration.

  Trace with exception: http://paste.openstack.org/show/276962/

  Steps to reproduce:
   
  1. neutron-db-manage ... upgrade juno
  2. neutron-db-manage ... upgrade head
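
  A sketch of the usual Alembic remedy, assuming the later migration only
  needs to reference the type (the enum values, table and column names
  here are illustrative, not neutron's actual schema): pass
  create_type=False so PostgreSQL does not CREATE TYPE a second time.

      import sqlalchemy as sa
      from alembic import op
      from sqlalchemy.dialects import postgresql

      def upgrade():
          # The profile_type type already exists from havana_initial,
          # so reference it instead of re-creating it.
          profile_type = postgresql.ENUM('network', 'policy',
                                         name='profile_type',
                                         create_type=False)
          op.add_column('network_profiles',
                        sa.Column('profile_type', profile_type,
                                  nullable=True))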

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463301/+subscriptions



[Yahoo-eng-team] [Bug 1336937] Re: Ironic nova driver does not set disk_available_least

2015-06-09 Thread Michael Davies
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336937

Title:
  Ironic nova driver does not set disk_available_least

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Looking at the libvirt driver for an example, it would appear that the
  ironic nova driver does not set a value for disk_available_least. This
  value is NULL in the nova.compute_nodes table.

  disk_available_least MUST be an integer value, since not setting this
  is causing this tempest test to fail JSON schema validation:

  
tempest.api.compute.admin.test_hypervisor.HypervisorAdminTestJSON.test_get_hypervisor_show_details
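
  A sketch of the expected shape, assuming a resource dict like the one
  the libvirt driver builds (the numbers are illustrative):

      # Inside a hypothetical get_available_resource() implementation:
      local_gb, local_gb_used = 100, 20
      resources = {
          'local_gb': local_gb,
          'local_gb_used': local_gb_used,
          # Must be an integer, never None, or the hypervisor JSON
          # schema validation above fails:
          'disk_available_least': local_gb - local_gb_used,
      }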

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1336937/+subscriptions



[Yahoo-eng-team] [Bug 1463310] [NEW] image share to tenant cause internal server error if you execute member-create, member-delete, and member-create again

2015-06-09 Thread xiao pei liu
Public bug reported:

1. Source with user: haozd, tenant Sol-Test.
2. Upload an image: glance image-create --disk-format qcow2 --container-format bare --property architecture=s390x --min-disk 3 --min-ram 1024 --is-public False --file rhel7_small_cloudinit.qcow2 --name rhel7_sharetest
3. Open horizon and create 1 new tenant "holly-share" with no user.
4. Create a new user "holly" and add it to tenant "holly-share". https://9.12.35.142/project/images/
5. Run cmd to add-share the image to tenant "holly-share": glance member-create --can-share 52d5d42c-a9bb-40eb-a143-2f0e92559285 9f96d45084f347b9bd2dd24cc02a8b5c
6. Log in with holly, check the image "rhel7_sharetest" is under the "shared with me" tab on the image-list page.
7. Run cmd to remove the shared image from the tenant: glance member-delete 52d5d42c-a9bb-40eb-a143-2f0e92559285 9f96d45084f347b9bd2dd24cc02a8b5c
8. Log in with holly, check the "shared with me" tab on the image-list page is empty.
9. Run cmd to add-share the image to tenant "holly-share" again. This causes an error:
[root@xoc02u28 ~]# glance member-create 52d5d42c-a9bb-40eb-a143-2f0e92559285 9f96d45084f347b9bd2dd24cc02a8b5c
HTTPInternalServerError (HTTP 500)
10. Check the log under /var/log/glance/api.log:
2015-06-02 03:45:53.941 6225 ERROR glance.registry.client.v1.client 
[req-4e7e378f-1980-4382-95f1-c8da30ec53c6 9c8dfcd7ce4c403d8d66f7fbb41c323f 
a0bfd24eb059416cad239e93aa0474a1 - - -] Registry client request PUT 
/images/52d5d42c-a9bb-40eb-a143-2f0e92559285/members/9f96d45084f347b9bd2dd24cc02a8b5c
 raised ServerError
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client Traceback 
(most recent call last):
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/registry/client/v1/client.py", line 
117, in do_request
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client 
**kwargs)
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 71, in wrapped
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client return 
func(self, *args, **kwargs)
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 376, in 
do_request
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client 
headers=copy.deepcopy(headers))
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 88, in wrapped
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client return 
func(self, method, url, body, headers)
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 534, in 
_do_request
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client raise 
exception.ServerError()
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client 
ServerError: The request returned 500 Internal Server Error.
2015-06-02 03:45:53.941 6225 TRACE glance.registry.client.v1.client

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1463310

Title:
  image share to tenant cause internal server error if you execute
  member-create, member-delete, and member-create again

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  1. Source with user: haozd, tenant Sol-Test.
  2. Upload an image: glance image-create --disk-format qcow2 --container-format bare --property architecture=s390x --min-disk 3 --min-ram 1024 --is-public False --file rhel7_small_cloudinit.qcow2 --name rhel7_sharetest
  3. Open horizon and create 1 new tenant "holly-share" with no user.
  4. Create a new user "holly" and add it to tenant "holly-share". https://9.12.35.142/project/images/
  5. Run cmd to add-share the image to tenant "holly-share": glance member-create --can-share 52d5d42c-a9bb-40eb-a143-2f0e92559285 9f96d45084f347b9bd2dd24cc02a8b5c
  6. Log in with holly, check the image "rhel7_sharetest" is under the "shared with me" tab on the image-list page.
  7. Run cmd to remove the shared image from the tenant: glance member-delete 52d5d42c-a9bb-40eb-a143-2f0e92559285 9f96d45084f347b9bd2dd24cc02a8b5c
  8. Log in with holly, check the "shared with me" tab on the image-list page is empty.
  9. Run cmd to add-share the image to tenant "holly-share" again. This causes an error:
  [root@xoc02u28 ~]# glance member-create 52d5d42c-a9bb-40eb-a143-2f0e92559285 9f96d45084f347b9bd2dd24cc02a8b5c
  HTTPInternalServerError (HTTP 500)
  10. Check the log under /var/log/glance/api.log:
  2015-06-02 03:45:53.941 6225 ERROR glance.registry.client.v1.client 
[req-4e7e378f-1980-4382-95f1-c8da30ec53c6 9c8dfcd7ce4c403d8d66f7fbb41c323f 

[Yahoo-eng-team] [Bug 1463312] [NEW] wrong url admin instance detail page

2015-06-09 Thread Masco Kaliyamoorthy
Public bug reported:

The admin instance detail page has the wrong URL for the image detail
page: it refers to the project image detail page.

steps to reproduce:

1. log in as admin user
2. go to any instance detail page.
3. click on the image detail link.

expected behavior:
it should go to image detail page in admin panel

actual behavior:
it is going to image detail in project panel

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463312

Title:
  wrong url admin instance detail page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The admin instance detail page has the wrong URL for the image detail
  page: it refers to the project image detail page.

  steps to reproduce:

  1. log in as admin user
  2. go to any instance detail page.
  3. click on the image detail link.

  expected behavior:
  it should go to image detail page in admin panel

  actual behavior:
  it is going to image detail in project panel
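
  A sketch of the fix shape in Django terms; the URL pattern names below
  are assumptions about Horizon's namespaces, not verified against the
  tree:

      from django.core.urlresolvers import reverse

      image_id = 'some-image-uuid'  # placeholder

      # Buggy: the admin instance detail page links into the project panel.
      url = reverse('horizon:project:images:images:detail', args=(image_id,))

      # Expected: link to the admin panel's image detail view instead.
      url = reverse('horizon:admin:images:detail', args=(image_id,))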

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463312/+subscriptions



[Yahoo-eng-team] [Bug 1463331] [NEW] ipset set can't be destroyed if related security group member is empty

2015-06-09 Thread shihanzhang
Public bug reported:

If security group A has a rule that allows security group B access and
the member list of security group B is empty, then when I delete the rule
that allows security group B access, I find that the ipset set on the
compute node is not destroyed.

reproduce steps:
1. create security groups A and B
2. create a rule for A that allows security group B access
3. create a VM in security group A
4. delete the rule that allows security group B access

The ipset set on the compute node is not destroyed.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463331

Title:
  ipset set can't be destroyed if  related security group member is
  empty

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If security group A has a rule that allows security group B access and
  the member list of security group B is empty, then when I delete the
  rule that allows security group B access, I find that the ipset set on
  the compute node is not destroyed.

  reproduce steps:
  1. create security groups A and B
  2. create a rule for A that allows security group B access
  3. create a VM in security group A
  4. delete the rule that allows security group B access

  The ipset set on the compute node is not destroyed.
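
  An illustrative Python model of the expected agent bookkeeping (not
  neutron's actual code): the set should be destroyed when the last rule
  referencing the remote group goes away, even if the member list is
  empty.

      class SgIpsets(object):
          def __init__(self):
              self.sets = {}  # sg_id -> set of member IPs (may be empty)

          def update_members(self, sg_id, ips):
              self.sets[sg_id] = set(ips)

          def rule_removed(self, sg_id, rules_still_using_sg):
              # Destroy the set whenever nothing references it any more;
              # an empty member list must not short-circuit this path
              # (this stands in for 'ipset destroy NIPv4<sg_id>').
              if not rules_still_using_sg:
                  self.sets.pop(sg_id, None)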

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463331/+subscriptions



[Yahoo-eng-team] [Bug 1463363] [NEW] NSX-mh: Decimal RXTX factor not honoured

2015-06-09 Thread Salvatore Orlando
Public bug reported:

A decimal RXTX factor, which is allowed by nova flavors, is not honoured
by the NSX-mh plugin, but simply truncated to integer.

To reproduce:

* Create a neutron queue
* Create a neutron net / subnet using the queue
* Create a new flavor which uses an RXTX factor other than an integer value
* Boot a VM on the net above using the flavor
* View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: vmware-nsx
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463363

Title:
  NSX-mh: Decimal RXTX factor not honoured

Status in OpenStack Neutron (virtual network service):
  New
Status in VMware NSX:
  New

Bug description:
  A decimal RXTX factor, which is allowed by nova flavors, is not
  honoured by the NSX-mh plugin, but simply truncated to integer.

  To reproduce:

  * Create a neutron queue
  * Create a neutron net / subnet using the queue
  * Create a new flavor which uses an RXTX factor other than an integer value
  * Boot a VM on the net above using the flavor
  * View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)
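
  A small illustration of this bug class in plain Python (not the
  plugin's actual code): truncating the factor before multiplying drops
  the fractional part, while truncating only the final product keeps it.

      base_kbps = 10000

      def buggy_scale(base, rxtx_factor):
          return base * int(rxtx_factor)   # 1.2 -> 1, 3.4 -> 3

      def fixed_scale(base, rxtx_factor):
          return int(base * rxtx_factor)   # truncate only the result

      assert buggy_scale(base_kbps, 1.2) == 10000  # factor silently ignored
      assert fixed_scale(base_kbps, 1.2) == 12000
      assert buggy_scale(base_kbps, 3.4) == 30000  # behaves like factor 3
      assert fixed_scale(base_kbps, 3.4) == 34000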

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463363/+subscriptions



[Yahoo-eng-team] [Bug 1463372] [NEW] nova secgroup-list-rules shows empty table

2015-06-09 Thread Alex Syafeyev
Public bug reported:

We see no security group rules with the nova command.
We should see the existing rules with the nova command as well, especially
since we see the rules in the GUI via the Compute tab.

1. see security groups with

neutron security-group-rule-list

2. see security groups with nova command

nova secgroup-list-rules GROUPID

nova secgroup-list-rules 54db0a3c-fc5d-4faf-8b1a
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
|             |           |         |          | default      |
|             |           |         |          | default      |
+-------------+-----------+---------+----------+--------------+


neutron security-group-rule-list 
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| id                                   | security_group | direction | ethertype | protocol/port | remote          |
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| 0e1cdfae-38d6-4d58-b624-011c2c05e165 | default        | ingress   | IPv6      | any           | default (group) |
| 13c64385-ac4c-4321-bd3f-ec3e0ca939e1 | default        | ingress   | IPv4      | any           | default (group) |
| 261ae2ec-686c-4e53-9578-1f55d92e280d | default        | egress    | IPv4      | any           | any             |
| 41071f04-db2c-4e36-b5f0-8da2331e0382 | sec_group      | egress    | IPv4      | icmp          | any             |
| 45639c5d-cf4d-4231-a462-b180b9e52eaf | default        | egress    | IPv6      | any           | any             |
| 5bab336e-410f-4323-865a-eeafee3fc3eb | sec_group      | ingress   | IPv4      | icmp          | any             |
| 5e0cb33f-0a3c-41f8-8562-a549163d655e | sec_group      | egress    | IPv6      | any           | any             |
| 67409c83-3b62-4ba5-9e0d-93b23a81722a | default        | egress    | IPv4      | any           | any             |
| 82676e25-f37c-4c57-9f7e-ffbe481501b5 | sec_group      | egress    | IPv4      | any           | any             |
| 89c232f4-ec90-46ba-989f-87d7348a9ea9 | default        | ingress   | IPv4      | any           | default (group) |
| ad50904e-3cd4-43e2-9ab4-c7cb5277cc4d | sec_group      | egress    | IPv4      | 1-65535/tcp   | any             |
| c3386b79-06a8-4609-8db7-2924e092e5e9 | default        | egress    | IPv6      | any           | any             |
| c37fe4d0-01b4-40f9-a069-15c8f3edffe4 | default        | egress    | IPv6      | any           | any             |
| c51371f1-d3ae-4223-a044-f7b9b2eeb8a1 | sec_group      | ingress   | IPv4      | 1-65535/udp   | any             |
| d3d6c1b3-bde5-45ce-a950-5bfd0fc7fc5c | default        | ingress   | IPv6      | any           | default (group) |
| d4888c02-0b56-412e-bf02-dfd27ce84580 | sec_group      | egress    | IPv4      | 1-65535/udp   | any             |
| d7e0aee8-eee4-4ca1-b67e-ec4864a71492 | default        | ingress   | IPv4      | any           | default (group) |
| df6504e5-0adb-411a-9313-4bad7074c42e | default        | ingress   | IPv6      | any           | default (group) |
| e0ef6e04-575b-43ed-8179-c221d1e4f962 | default        | egress    | IPv4      | any           | any             |
| e828f2ef-518f-4c67-a328-6dafc16431b9 | sec_group      | ingress   | IPv4      | 1-65535/tcp   | any             |
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+


Kilo+rhel7.1
python-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
python-neutronclient-2.4.0-1.el7ost.noarch
openstack-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
openstack-neutron-common-2015.1.0-1.el7ost.noarch
python-neutron-lbaas-2015.1.0-3.el7ost.noarch
python-neutron-fwaas-2015.1.0-3.el7ost.noarch


openstack-nova-common-2015.1.0-4.el7ost.noarch
openstack-nova-cert-2015.1.0-4.el7ost.noarch
openstack-nova-compute-2015.1.0-4.el7ost.noarch
openstack-nova-console-2015.1.0-4.el7ost.noarch
python-nova-2015.1.0-4.el7ost.noarch
openstack-nova-scheduler-2015.1.0-4.el7ost.noarch
python-novaclient-2.23.0-1.el7ost.noarch
openstack-nova-api-2015.1.0-4.el7ost.noarch
openstack-nova-novncproxy-2015.1.0-4.el7ost.noarch
openstack-nova-conductor-2015.1.0-4.el7ost.noarch

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463372

Title:
   nova secgroup-list-rules shows empty table

Status in OpenStack Compute (Nova):
  New

Bug description:
  We see no security group rules with the nova command.
  We should see the existing rules even with

[Yahoo-eng-team] [Bug 1463373] [NEW] cc_apt_configure does not work with python3

2015-06-09 Thread Nicholas Van Wiggeren
Public bug reported:

The way cc_apt_configure.py writes out the script to fetch GPG keys
breaks when using Python 3.

* There's a constant,
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/config/cc_apt_configure.py#L39
that contains the script.

* In getkeybyid, it's written to a temporary file:
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/config/cc_apt_configure.py#L113

Python 3 throws an exception: 'str' does not support the buffer
interface when doing the write.

A very simple example:

import tempfile

fh = tempfile.NamedTemporaryFile()
TEST_STR = """ HELLO WORLD """
fh.write(TEST_STR)

Will work with 2, but not 3.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1463373

Title:
  cc_apt_configure does not work with python3

Status in Init scripts for use on cloud images:
  New

Bug description:
  The way cc_apt_configure.py writes out the script to fetch GPG keys
  breaks when using Python 3.

  * There's a constant,
  http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/config/cc_apt_configure.py#L39
  that contains the script.

  * In getkeybyid, it's written to a temporary file:
  http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/config/cc_apt_configure.py#L113

  Python 3 throws an exception: 'str' does not support the buffer
  interface when doing the write.

  A very simple example:

  import tempfile

  fh = tempfile.NamedTemporaryFile()
  TEST_STR = """ HELLO WORLD """
  fh.write(TEST_STR)

  Will work with 2, but not 3.
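
  A minimal sketch of the usual Python 3 fix: encode the string before
  writing, or open the temporary file in text mode.

      import tempfile

      TEST_STR = """ HELLO WORLD """

      # NamedTemporaryFile defaults to binary mode, so Python 3 needs bytes:
      fh = tempfile.NamedTemporaryFile()
      fh.write(TEST_STR.encode('utf-8'))
      fh.flush()

      # Alternatively, request text mode explicitly:
      fh2 = tempfile.NamedTemporaryFile(mode='w+')
      fh2.write(TEST_STR)
      fh2.flush()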

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1463373/+subscriptions



[Yahoo-eng-team] [Bug 1463375] [NEW] Use fanout RPC message to notify the security group's change

2015-06-09 Thread shihanzhang
Public bug reported:

When a security group's members or rules change, the server just notifies
the l2 agents with 'security_groups_member_updated' or
'security_groups_rule_updated', and all related l2 agents then need to
fetch the security group details from neutron-server through RPC. When
the number of l2 agents is large, the load on neutron-server is heavy.
We can instead use a fanout RPC message carrying the changed security
group details to notify the l2 agents; the l2 agents which have related
devices then update the security group information in their memory and do
not need to fetch the details through RPC.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463375

Title:
  Use fanout RPC message to notify the security group's change

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a security group's members or rules change, the server just
  notifies the l2 agents with 'security_groups_member_updated' or
  'security_groups_rule_updated', and all related l2 agents then need to
  fetch the security group details from neutron-server through RPC. When
  the number of l2 agents is large, the load on neutron-server is heavy.
  We can instead use a fanout RPC message carrying the changed security
  group details to notify the l2 agents; the l2 agents which have related
  devices then update the security group information in their memory and
  do not need to fetch the details through RPC.
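
  A sketch of what the fanout side could look like with oslo.messaging;
  the topic, method name and payload are assumptions, not neutron's
  actual RPC API:

      import oslo_messaging

      def notify_sg_updated(transport, context, sg_id, sg_details):
          target = oslo_messaging.Target(topic='q-agent-notifier',
                                         fanout=True)
          client = oslo_messaging.RPCClient(transport, target)
          # One fanout cast delivers the full details to every l2 agent,
          # so agents no longer have to call back to fetch them.
          client.cast(context, 'security_groups_updated',
                      security_group_id=sg_id, details=sg_details)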

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463375/+subscriptions



[Yahoo-eng-team] [Bug 1463387] [NEW] quota-class-update api return 500 error if value is above mysql INT type

2015-06-09 Thread Pranali Deore
Public bug reported:

The nova quota-class-update API returns a 500 error if the update value is
greater than the maximum of the MySQL INT type.

steps to reproduce:

$ nova quota-class-update --instances 2147483648 default
ERROR: The server has either erred or is incapable of performing the requested 
operation. (HTTP 500)

or 
curl -g -i --cacert "/opt/stack/data/CA/int-ca/ca-chain.pem" -X PUT 
http://10.69.4.136:8774/v3/bd00959429ab477f812822ac32638bd7/os-quota-class-sets/default
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 4846614c5c3341f5a1828a987d692d5f" 
-d '{"quota_class_set": {"instances": 2147483648}}'

n-api error logs:

2015-06-09 04:32:40.215 ERROR oslo_db.sqlalchemy.exc_filters 
[req-dbe4c88d-0baa-4a8a-bf1a-3272412723a5 admin admin] DBAPIError exception 
wrapped from (DataE
rror) (1264, "Out of range value for column 'hard_limit' at row 8") 'UPDATE 
quota_classes SET updated_at=%s, hard_limit=%s WHERE quota_classes.deleted = %s
AND quota_classes.class_name = %s AND quota_classes.resource = %s' 
(datetime.datetime(2015, 6, 9, 11, 32, 40, 214756), 2147483648, 0, 'default', 
'instances'
)
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1063, 
in _execu
te_context
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters context)
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
442, in do_e
xecute
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandl
er
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters DataError: (1264, 
"Out of range value for column 'hard_limit' at row 8")
2015-06-09 04:32:40.215 TRACE oslo_db.sqlalchemy.exc_filters
2015-06-09 04:32:40.217 ERROR nova.api.openstack 
[req-dbe4c88d-0baa-4a8a-bf1a-3272412723a5 admin admin] Caught error: 
(DataError) (1264, "Out of range value
 for column 'hard_limit' at row 8") 'UPDATE quota_classes SET updated_at=%s, 
hard_limit=%s WHERE quota_classes.deleted = %s AND quota_classes.class_name = %
s AND quota_classes.resource = %s' (datetime.datetime(2015, 6, 9, 11, 32, 40, 
214756), 2147483648, 0, 'default', 'instances')
2015-06-09 04:32:40.217 TRACE nova.api.openstack Traceback (most recent call 
last):
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/__init__.py", line 126, in __call__
2015-06-09 04:32:40.217 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
2015-06-09 04:32:40.217 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
2015-06-09 04:32:40.217 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-06-09 04:32:40.217 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 639, in __c
all__
2015-06-09 04:32:40.217 TRACE nova.api.openstack return self._call_app(env, 
start_response)
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 559, in _ca
ll_app
2015-06-09 04:32:40.217 TRACE nova.api.openstack return self._app(env, 
_fake_start_response)

2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-06-09 04:32:40.217 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
2015-06-09 04:32:40.217 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2015-06-09 04:32:40.217 TRACE nova.api.openstack   File 
"/usr/local/lib/

[Yahoo-eng-team] [Bug 1463184] Re: Failing to launch VM if RX_TX*Network Qos is greater than Maximum value is 2147483

2015-06-09 Thread senthilmageswaran
I think the max bandwidth limit cannot exceed 0x7fffffff (2147483647).
Because your configured RX_TX factor on a 1 Gb/s network exceeds that
limit, the VMware server is throwing the error.

Also, in your log, 2147483 is a truncation of 2147483647.
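
For a rough check of the arithmetic, assuming the limit is INT_MAX in bps
expressed as kbps:

    ONE_GBPS_KBPS = 1000 * 1000          # 1 Gb/s in kb/s
    NSX_MAX_KBPS = 2147483647 // 1000    # == 2147483, the value in the error

    # An RX_TX factor of 5 on a 1 Gb/s queue blows through the cap:
    assert ONE_GBPS_KBPS * 5 > NSX_MAX_KBPS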


--


** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463184

Title:
  Failing to launch VM if RX_TX*Network Qos is greater than Maximum
  value is 2147483

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Unable to launch VMs when the net QoS rate, i.e. the flavor's RX_TX
  factor multiplied by the network QoS max bandwidth value, is greater
  than 2147483.

  ERROR Server Error Message: Invalid bandwidth settings. Maximum value
  is 2147483.

  
  If the default bandwidth is 1 Gb/s and the RX_TX factor for a flavor is
  set to 5, it does not work and fails while launching the instance.

  neutron server log :

  neutron.openstack.common.rpc.com ERROR Failed to publish message to
  topic 'notifications.info': [Errno 32] Broken pipe#012Traceback (most
  recent call last):#012  File "/usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py", line 579, in
  ensure#012return method(*args, **kwargs)#012  File
  "/usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py", line 690, in
  _publish#012publisher = cls(self.conf, self.channel, topic,
  **kwargs)#012  File "/usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py", line 392, in
  __init__#012super(NotifyPublisher, self).__init__(conf, channel,
  topic, **kwargs)#012  File "/usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py", line 368, in
  __init__#012**options)#012  File "/usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py", line 315, in
  __init__#012self.reconnect(channel)#012  File "/usr/lib/python2.7
  /dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 395,
  in reconnect#012super(NotifyPublisher,
  self).reconnect(channel)#012  File "/usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py", line 323, in
  reconnect#012routing_key=self.routing_key)#012  File
  "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 82, in
  __init__#012self.revive(self._channel)#012  File
  "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 216, in
  revive#012self.declare()#012  File "/usr/lib/python2.7/dist-
  packages/kombu/messaging.py", line 102, in declare#012
  self.exchange.declare()#012  File "/usr/lib/python2.7/dist-
  packages/kombu/entity.py", line 166, in declare#012nowait=nowait,
  passive=passive,#012  File "/usr/lib/python2.7/dist-
  packages/amqp/channel.py", line 604, in exchange_declare#012
  self._send_method((40, 10), args)#012  File "/usr/lib/python2.7/dist-
  packages/amqp/abstract_channel.py", line 62, in _send_method#012
  self.channel_id, method_sig, a

  neutron.plugins.vmware.api_client log :

  ERROR Server Error Message: Invalid bandwidth settings. Maximum value
  is 2147483.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463184/+subscriptions



[Yahoo-eng-team] [Bug 1463395] [NEW] Connection refused when Cloudstack get password

2015-06-09 Thread Tomcsi
Public bug reported:

Hi,

Get "Connection refused" when cloud-init get password from Cloudstack. I
have a workaround:

import time

from six.moves import http_client  # imports added; the original excerpt assumes them


class CloudStackPasswordServerClient(object):
    """
    Implements password fetching from the CloudStack password server.

    http://cloudstack-administration.readthedocs.org/en/latest/templates.html#adding-password-management-to-your-templates
    has documentation about the system.  This implementation is following that
    found at
    https://github.com/shankerbalan/cloudstack-scripts/blob/master/cloud-set-guest-password-debian

    The CloudStack password server is, essentially, a broken HTTP
    server. It requires us to provide a valid HTTP request (including a
    DomU_Request header, which is the meat of the request), but just
    writes the text of its response on to the socket, without a status
    line or any HTTP headers.  This makes HTTP libraries sad, which
    explains the screwiness of the implementation of this class.

    This should be fixed in CloudStack by commit
    a72f14ea9cb832faaac946b3cf9f56856b50142a in December 2014.
    """

    def __init__(self, virtual_router_address):
        self.virtual_router_address = virtual_router_address

    def _do_request(self, domu_request):
        # We have to provide a valid HTTP request, but a valid HTTP
        # response is not returned. This means that getresponse() chokes,
        # so we use the socket directly to read off the response.
        # Because we're reading off the socket directly, we can't re-use
        # the connection.
        conn = http_client.HTTPConnection(self.virtual_router_address, 8080)
        try:
            conn.request('GET', '', headers={'DomU_Request': domu_request})
            conn.sock.settimeout(30)
            output = conn.sock.recv(1024).decode('utf-8').strip()
        finally:
            conn.close()
        return output

    def get_password(self):
        password = self._do_request('send_my_password')
        if password in ['', 'saved_password']:
            return None
        if password == 'bad_request':
            raise RuntimeError('Error when attempting to fetch root password.')
        time.sleep(1)  # <-- with this sleep(1), the second http request succeeds
        self._do_request('saved_password')
        return password

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1463395

Title:
  Connection refused when Cloudstack get password

Status in Init scripts for use on cloud images:
  New

Bug description:
  Hi,

  Get "Connection refused" when cloud-init get password from Cloudstack.
  I have a workaround:

  class CloudStackPasswordServerClient(object):
      """
      Implements password fetching from the CloudStack password server.

      http://cloudstack-administration.readthedocs.org/en/latest/templates.html#adding-password-management-to-your-templates
      has documentation about the system.  This implementation is following that
      found at
      https://github.com/shankerbalan/cloudstack-scripts/blob/master/cloud-set-guest-password-debian

      The CloudStack password server is, essentially, a broken HTTP
      server. It requires us to provide a valid HTTP request (including a
      DomU_Request header, which is the meat of the request), but just
      writes the text of its response on to the socket, without a status
      line or any HTTP headers.  This makes HTTP libraries sad, which
      explains the screwiness of the implementation of this class.

      This should be fixed in CloudStack by commit
      a72f14ea9cb832faaac946b3cf9f56856b50142a in December 2014.
      """

      def __init__(self, virtual_router_address):
          self.virtual_router_address = virtual_router_address

      def _do_request(self, domu_request):
          # We have to provide a valid HTTP request, but a valid HTTP
          # response is not returned. This means that getresponse() chokes,
          # so we use the socket directly to read off the response.
          # Because we're reading off the socket directly, we can't re-use
          # the connection.
          conn = http_client.HTTPConnection(self.virtual_router_address, 8080)
          try:
              conn.request('GET', '', headers={'DomU_Request': domu_request})
              conn.sock.settimeout(30)
              output = conn.sock.recv(1024).decode('utf-8').strip()
          finally:
              conn.close()
          return output

      def get_password(self):
          password = self._do_request('send_my_password')
          if password in ['', 'saved_password']:
              return None
          if password == 'bad_request':
              raise RuntimeError('Error when attempting to fetch root '
                                 'password.')
          time.sleep(1)

[Yahoo-eng-team] [Bug 1463396] [NEW] quota_class_update with class_id longer than 255 returns 500 error

2015-06-09 Thread Pranali Deore
Public bug reported:

nova quota_class_update returns a 500 error if you pass a class parameter
with more than 255 characters.

steps to reproduce:

$ nova quota_class_update --instances 2
aa

The server has either erred or is incapable of performing the requested
operation. (HTTP 500) (Request-ID: req-
3f51a5c8-ccfc-4675-b224-a5cf94f0172d)
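
A sketch of request-schema validation that would reject this with a 400;
the schema fragment follows nova's JSON-Schema validation style but is
illustrative:

    import jsonschema

    quota_class_set_schema = {
        'type': 'object',
        'properties': {
            'class_name': {'type': 'string', 'maxLength': 255},
        },
        'required': ['class_name'],
    }

    try:
        jsonschema.validate({'class_name': 'a' * 256},
                            quota_class_set_schema)
    except jsonschema.ValidationError:
        print('400 Bad Request: class_name longer than 255 characters')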

n-api error logs:

2015-06-09 05:51:19.383 ERROR nova.api.openstack 
[req-3f51a5c8-ccfc-4675-b224-a5cf94f0172d admin admin] Caught error: 
(DataError) (1406, "Data too long for
column 'class_name' at row 1") 'INSERT INTO quota_classes (created_at, 
updated_at, deleted_at, deleted, class_name, resource, hard_limit) VALUES (%s, 
%s, %s
, %s, %s, %s, %s)' (datetime.datetime(2015, 6, 9, 12, 51, 19, 380807), None, 
None, 0, 'a

a', 'instances', 7)
2015-06-09 05:51:19.383 TRACE nova.api.openstack Traceback (most recent call 
last):
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/__init__.py", line 126, in __call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
2015-06-09 05:51:19.383 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
2015-06-09 05:51:19.383 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 639, in __c
all__
2015-06-09 05:51:19.383 TRACE nova.api.openstack return self._call_app(env, 
start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 559, in _ca
ll_app
2015-06-09 05:51:19.383 TRACE nova.api.openstack return self._app(env, 
_fake_start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack resp = self.call_func(req, 
*args, **self.kwargs)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2015-06-09 05:51:19.383 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 756, in __call__
2015-06-09 05:51:19.383 TRACE nova.api.openstack content_type, body, accept)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
2015-06-09 05:51:19.383 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 911, in dispatch
2015-06-09 05:51:19.383 TRACE nova.api.openstack return method(req=request, 
**action_args)
2015-06-09 05:51:19.383 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/

[Yahoo-eng-team] [Bug 1463405] [NEW] Keypair tests failing on Fedora 22

2015-06-09 Thread Artom Lifshitz
Public bug reported:

To use an example, test_keypairs_post is failing with:

NoMatch: Values do not match:
Template: ^([0-9a-f]{2}:){15}[0-9a-f]{2}$
Response: SHA256:4x+7c/6RKxVtSbQrFIRRH14GQzAgkRmbNd9QfsU/dXk

This happens on Fedora 22 because it introduced the following option to
ssh-keygen:

 -E fingerprint_hash
 Specifies the hash algorithm used when displaying key 
fingerprints.  Valid
 options are: “md5” and “sha256”.  The default is “sha256”.

I'm guessing that the default used to be MD5 with no possibility to
change it (which is what Nova expects), and now it's SHA256.

** Affects: nova
 Importance: Undecided
 Assignee: Artom Lifshitz (notartom)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Artom Lifshitz (notartom)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463405

Title:
  Keypair tests failing on Fedora 22

Status in OpenStack Compute (Nova):
  New

Bug description:
  To use an example, test_keypairs_post is failing with:

  NoMatch: Values do not match:
  Template: ^([0-9a-f]{2}:){15}[0-9a-f]{2}$
  Response: SHA256:4x+7c/6RKxVtSbQrFIRRH14GQzAgkRmbNd9QfsU/dXk

  This happens on Fedora 22 because it introduced the following option
  to ssh-keygen:

   -E fingerprint_hash
   Specifies the hash algorithm used when displaying key 
fingerprints.  Valid
   options are: “md5” and “sha256”.  The default is “sha256”.

  I'm guessing that the default used to be MD5 with no possibility to
  change it (which is what Nova expects), and now it's SHA256.
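
  For reference, ssh-keygen -l -E md5 -f key.pub restores the old output
  format, and here is a sketch of computing the colon-separated MD5
  fingerprint nova's template expects (the helper name is illustrative):

      import base64
      import hashlib

      def md5_fingerprint(pub_key_line):
          """MD5 fingerprint matching ^([0-9a-f]{2}:){15}[0-9a-f]{2}$."""
          blob = base64.b64decode(pub_key_line.strip().split()[1])
          digest = hashlib.md5(blob).hexdigest()
          return ':'.join(digest[i:i + 2] for i in range(0, 32, 2))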

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463405/+subscriptions



[Yahoo-eng-team] [Bug 1420125] Re: href variables for federation controller are inconsistent

2015-06-09 Thread Morgan Fainberg
This cannot be fixed with the API contract we have. This will change the
public interfaces. We cannot make changes to the public interfaces in
this regard.

Marking as "wont fix"

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420125

Title:
  href variables for federation controller are inconsistent

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  For the most part, the href variables seen in JSON home requests for
  federation resources are consistent,
  
https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/routers.py
  they are usually idp_id, sp_id, protocol_id and mapping_id.

  Except for the following block:

    path=self._construct_url('identity_providers/{identity_provider}/'
                             'protocols/{protocol}/auth'),

  Where 'identity_provider' and 'protocol' are used instead of 'idp_id'
  and 'protocol_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420125/+subscriptions



[Yahoo-eng-team] [Bug 1453068] Re: task: Image's locations empty after importing to store

2015-06-09 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
 Assignee: (unassigned) => Flavio Percoco (flaper87)

** Changed in: glance/kilo
   Importance: Undecided => High

** Changed in: glance/kilo
   Status: New => Incomplete

** Changed in: glance/kilo
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1453068

Title:
  task: Image's locations empty after importing to store

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  Confirmed

Bug description:
  The ImportToStore task is setting the image data correctly but not
  saving it after it has been imported. This causes the image's location
  to be lost, and therefore the image is completely useless because there
  are no locations associated with it.
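
  A minimal sketch of the fix shape (everything except save() is
  illustrative): persist the image through the repo once the import has
  set its data, so the new location is kept.

      def import_to_store(image_repo, image, data):
          image.set_data(data)    # fills in the image's locations
          image_repo.save(image)  # without this, the locations are lost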

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1453068/+subscriptions



[Yahoo-eng-team] [Bug 1452750] Re: dest_file in task convert is wrong

2015-06-09 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
   Importance: Undecided => High

** Changed in: glance/kilo
   Status: New => Confirmed

** Changed in: glance/kilo
 Assignee: (unassigned) => Flavio Percoco (flaper87)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1452750

Title:
  dest_file in task convert is wrong

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  Confirmed

Bug description:
  
https://github.com/openstack/glance/commit/1b144f4c12fd6da58d7c48348bf7bab5873388e9
  #diff-f02c20aafcce326b4d31c938376f6c2aR78 -> head -> desk

  The dest_path is not formatted correctly, which ends up converting
  the image to a path that is completely ignored by other tasks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1452750/+subscriptions



[Yahoo-eng-team] [Bug 1419823] Re: Nullable image description crashes v2 client

2015-06-09 Thread Flavio Percoco
** No longer affects: python-glanceclient

** Changed in: glance
 Assignee: Kamil Rykowski (kamil-rykowski) => Flavio Percoco (flaper87)

** Changed in: glance
Milestone: None => liberty-1

** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
   Importance: Undecided => Critical

** Changed in: glance/kilo
   Status: New => Confirmed

** Changed in: glance/kilo
 Assignee: (unassigned) => Flavio Percoco (flaper87)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419823

Title:
  Nullable image description crashes v2 client

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance kilo series:
  Confirmed

Bug description:
  When you somehow set the image description to None the glanceclient v2
  image-list crashes (as well as image-show, image-update for this
  particular image). The only way to show all images now is to use
  client v1, because it's more stable in this case.

  Steps to reproduce:

  1. Open Horizon and go to the edit page of any image.
  2. Set description to anything eg. "123" and save.
  3. Open image edit page again, remove description and save it.
  4. List all images using glanceclient v2: "glance --os-image-api-version 2 
image-list"
  5. Be sad, because of raised exception:

  None is not of type u'string'

  Failed validating u'type' in schema[u'additionalProperties']:
  {u'type': u'string'}

  On instance[u'description']:
  None

  While investigating the issue I found that the additionalProperties
  schema is set to accept only string values, so it should be expanded to
  allow null values as well.
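
  A sketch of the suggested schema change in JSON-Schema terms:

      import jsonschema

      image = {'description': None}

      strict = {'type': 'object',
                'additionalProperties': {'type': 'string'}}
      relaxed = {'type': 'object',
                 'additionalProperties': {'type': ['null', 'string']}}

      try:
          jsonschema.validate(image, strict)
      except jsonschema.ValidationError:
          print("None is not of type 'string'")  # the reported crash

      jsonschema.validate(image, relaxed)  # passes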

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1419823/+subscriptions



[Yahoo-eng-team] [Bug 1440773] Re: Remove WritableLogger as eventlet has a real logger interface in 0.17.2

2015-06-09 Thread Doug Hellmann
** Changed in: oslo.log
   Status: Fix Committed => Fix Released

** Changed in: oslo.log
Milestone: None => 1.4.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440773

Title:
  Remove WritableLogger as eventlet has a real logger interface in
  0.17.2

Status in Manila:
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Logging configuration library for OpenStack:
  Fix Released

Bug description:
  Info from Sean on IRC:

  the patch to use a real logger interface in eventlet has been released
  in 0.17.2, which means we should be able to phase out
  https://github.com/openstack/oslo.log/blob/master/oslo_log/loggers.py

  Eventlet PR was:
  https://github.com/eventlet/eventlet/pull/75

  thanks,
  dims

To manage notifications about this bug go to:
https://bugs.launchpad.net/manila/+bug/1440773/+subscriptions



[Yahoo-eng-team] [Bug 1404093] Re: Provide option for *OpportunisticTestCase to fail instead of skip on db error

2015-06-09 Thread Doug Hellmann
** Changed in: oslo.db
   Status: Fix Committed => Fix Released

** Changed in: oslo.db
Milestone: None => 1.11.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404093

Title:
  Provide option for *OpportunisticTestCase to fail instead of skip on
  db error

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Oslo Database library:
  Fix Released

Bug description:
  Tests using oslo.db.sqlalchemy.test_base.DbFixture will skip if the
  database cannot be provisioned. In the neutron functional job we do
  not want to skip tests. The tests should fail if the environment is
  not set up correctly for the tests.

  After https://review.openstack.org/126175 is merged we should see to
  it that the migrations tests do not skip.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404093/+subscriptions



[Yahoo-eng-team] [Bug 1459021] Re: nova vmware unit tests failing with oslo.vmware 0.13.0

2015-06-09 Thread Doug Hellmann
** Changed in: oslo.vmware
   Status: Fix Committed => Fix Released

** Changed in: oslo.vmware
Milestone: None => 0.14.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459021

Title:
  nova vmware unit tests failing with oslo.vmware 0.13.0

Status in OpenStack Compute (Nova):
  Fix Committed
Status in Oslo VMware library for OpenStack projects:
  Fix Released

Bug description:
  http://logs.openstack.org/68/184968/2/check/gate-nova-python27/e3dadf7/console.html#_2015-05-26_20_45_35_734

  2015-05-26 20:45:35.734 | {4} nova.tests.unit.virt.vmwareapi.test_vm_util.VMwareVMUtilTestCase.test_create_vm_invalid_guestid [0.058940s] ... FAILED
  2015-05-26 20:45:35.735 | 
  2015-05-26 20:45:35.735 | Captured traceback:
  2015-05-26 20:45:35.735 | ~~~
  2015-05-26 20:45:35.736 | Traceback (most recent call last):
  2015-05-26 20:45:35.736 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py", line 1201, in patched
  2015-05-26 20:45:35.736 |     return func(*args, **keywargs)
  2015-05-26 20:45:35.737 |   File "nova/tests/unit/virt/vmwareapi/test_vm_util.py", line 796, in test_create_vm_invalid_guestid
  2015-05-26 20:45:35.737 |     'folder', config_spec, 'res-pool')
  2015-05-26 20:45:35.737 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 422, in assertRaises
  2015-05-26 20:45:35.738 |     self.assertThat(our_callable, matcher)
  2015-05-26 20:45:35.738 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
  2015-05-26 20:45:35.738 |     mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  2015-05-26 20:45:35.738 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 483, in _matchHelper
  2015-05-26 20:45:35.739 |     mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.739 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
  2015-05-26 20:45:35.739 |     mismatch = self.exception_matcher.match(exc_info)
  2015-05-26 20:45:35.740 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
  2015-05-26 20:45:35.740 |     mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.740 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 414, in match
  2015-05-26 20:45:35.741 |     reraise(*matchee)
  2015-05-26 20:45:35.741 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
  2015-05-26 20:45:35.741 |     result = matchee()
  2015-05-26 20:45:35.742 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 969, in __call__
  2015-05-26 20:45:35.742 |     return self._callable_object(*self._args, **self._kwargs)
  2015-05-26 20:45:35.742 |   File "nova/virt/vmwareapi/vm_util.py", line 1280, in create_vm
  2015-05-26 20:45:35.742 |     task_info = session._wait_for_task(vm_create_task)
  2015-05-26 20:45:35.743 |   File "nova/virt/vmwareapi/driver.py", line 714, in _wait_for_task
  2015-05-26 20:45:35.743 |     return self.wait_for_task(task_ref)
  2015-05-26 20:45:35.743 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py", line 381, in wait_for_task
  2015-05-26 20:45:35.744 |     return evt.wait()
  2015-05-26 20:45:35.744 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
  2015-05-26 20:45:35.744 |     return hubs.get_hub().switch()
  2015-05-26 20:45:35.745 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
  2015-05-26 20:45:35.745 |     return self.greenlet.switch()
  2015-05-26 20:45:35.745 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
  2015-05-26 20:45:35.745 |     self.f(*self.args, **self.kw)
  2015-05-26 20:45:35.746 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py", line 423, in _poll_task
  2015-05-26 20:45:35.746 |     raise task_ex
  2015

[Yahoo-eng-team] [Bug 1463466] [NEW] Option use_user_token is created twice

2015-06-09 Thread Mike Fedosin
Public bug reported:

In glance we have two places where we register use_user_token option:
https://github.com/openstack/glance/blob/stable/kilo/glance/common/store_utils.py#L33
https://github.com/openstack/glance/blob/stable/kilo/glance/registry/client/__init__.py#L55

oslo.config considers them as one option, because they have the same name
and help string, but changing the help string in one of them leads to an
exception: DuplicateOptError: duplicate option: use_user_token

It seems that we should remove the option creation in store_utils and
leave only one declaration in the registry client.
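
For illustration, a minimal standalone sketch of the oslo.config behavior
described above (not glance code):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opt(cfg.BoolOpt('use_user_token', default=True,
                                  help='same help'))
    # registering an identical option again is tolerated (treated as one)
    conf.register_opt(cfg.BoolOpt('use_user_token', default=True,
                                  help='same help'))
    try:
        # but changing any attribute, e.g. the help string, is rejected
        conf.register_opt(cfg.BoolOpt('use_user_token', default=True,
                                      help='new help'))
    except cfg.DuplicateOptError as exc:
        print(exc)  # duplicate option: use_user_token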

** Affects: glance
 Importance: Undecided
 Assignee: Mike Fedosin (mfedosin)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1463466

Title:
  Option use_user_token is created twice

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In glance we have two places where we register use_user_token option:
  
https://github.com/openstack/glance/blob/stable/kilo/glance/common/store_utils.py#L33
  
https://github.com/openstack/glance/blob/stable/kilo/glance/registry/client/__init__.py#L55

  oslo.config considers them as one, because they have the same name and
  help string, but changing help string in one of them leads to an
  exception DuplicateOptError: duplicate option: use_user_token

  It seems that we should remove the option creation in store_utils and
  left only one declaration in registry client.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1463466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357476] Re: Timeout waiting for vif plugging callback for instance

2015-06-09 Thread jishnub
I have hit this bug repeatedly when creating a new instance on a compute node
on which 4 instances already exist.
Resource-wise, less than 10% of the hypervisor has been used by the existing
instances.
Any new instance creation fails at the same place as per the log trace. I have
attached the log for my instance named 'AGAIN'.

Below is the log snippet:
===

2015-06-09 08:03:47.153 19692 WARNING nova.virt.libvirt.driver [-] Timeout 
waiting for vif plugging callback for instance 
b8c3a1dc-8780-4495-8a7e-3f03f14c8475
2015-06-09 08:03:47.363 19692 DEBUG nova.virt.libvirt.vif [-] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=True,config_drive='',created_at=2015-06-09T14:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='AGAIN',display_name='AGAIN',ephemeral_gb=0,ephemeral_key_uuid=None,fault=,host='f3-compute-2.noiro.lab',hostname='again',id=77,image_ref='82cd7cee-ade0-43b2-94ac-b7cb8c0c941b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=None,launched_on='f3-compute-2.noiro.lab',locked=False,locked_by=None,memory_mb=4096,metadata={},node='f3-compute-2.noiro.lab',numa_topology=None,os_type=None,pci_devices=PciDeviceList,power_state=0,progress=0,project_id='26acdadf476e49e78172ef6f0595f9a1',ramdisk_id='',reservation_id='r-51nqu02g',root_d
 
evice_name='/dev/vda',root_gb=40,scheduled_at=None,security_groups=SecurityGroupList,shutdown_terminate=False,system_metadata={image_base_image_ref='82cd7cee-ade0-43b2-94ac-b7cb8c0c941b',image_container_format='bare',image_disk_format='qcow2',image_min_disk='40',image_min_ram='0',instance_type_ephemeral_gb='0',instance_type_flavorid='3',instance_type_id='1',instance_type_memory_mb='4096',instance_type_name='m1.medium',instance_type_root_gb='40',instance_type_rxtx_factor='1.0',instance_type_swap='0',instance_type_vcpu_weight=None,instance_type_vcpus='2',network_allocated='True'},task_state='spawning',terminated_at=None,updated_at=2015-06-09T14:58:44Z,user_data=None,user_id='3fd8184442334332be33cdfe5b57b1ae',uuid=b8c3a1dc-8780-4495-8a7e-3f03f14c8475,vcpus=2,vm_mode=None,vm_state='building')
 vif=VIF({'profile': {}, 'ovs_interfaceid': 
u'8d51f6bd-6a8b-43a0-9d86-b8384225eab5', 'network': Network({'bridge': 
'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 
'type': 'f
 ixed', 'floating_ips': [], 'address': u'5.5.5.7'})], 'version': 4, 'meta': 
{'dhcp_server': u'5.5.5.3'}, 'dns': [], 'routes': [Route({'interface': None, 
'cidr': u'5.5.5.0/24', 'meta': {}, 'gateway': IP({'meta': {}, 'version': 4, 
'type': 'gateway', 'address': u'5.5.5.1'})})], 'cidr': u'5.5.5.0/28', 
'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': 
u'5.5.5.1'})})], 'meta': {'injected': False, 'tenant_id': 
u'26acdadf476e49e78172ef6f0595f9a1'}, 'id': 
u'4659e62b-f4fd-43ac-b18a-5ef30fac028e', 'label': 
u'l2p_demo_same_ptg_l2p_l3p_bd'}), 'devname': u'tap8d51f6bd-6a', 'vnic_type': 
u'normal', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': False, 
u'ovs_hybrid_plug': False}, 'address': u'fa:16:3e:61:6a:e8', 'active': False, 
'type': u'ovs', 'id': u'8d51f6bd-6a8b-43a0-9d86-b8384225eab5', 'qbg_params': 
None}) unplug /usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py:688
2015-06-09 08:03:47.366 19692 DEBUG nova.virt.driver [-] Emitting event 
 
Stopped> emit_event /usr/lib/python2.7/site-packages/nova/virt/driver.py:1299
2015-06-09 08:03:47.366 19692 INFO nova.compute.manager [-] [instance: 
b8c3a1dc-8780-4495-8a7e-3f03f14c8475] VM Stopped (Lifecycle Event)
2015-06-09 08:03:47.401 19692 DEBUG nova.compute.manager [-] [instance: 
b8c3a1dc-8780-4495-8a7e-3f03f14c8475] Synchronizing instance power state after 
lifecycle event "Stopped"; current vm_state: building, current task_state: 
spawning, current DB power_state: 0, VM power_state: 4 handle_lifecycle_event 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:1150
2015-06-09 08:03:47.405 19692 DEBUG nova.objects.instance [-] Lazy-loading 
`system_metadata' on Instance uuid b8c3a1dc-8780-4495-8a7e-3f03f14c8475 
obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:579
2015-06-09 08:03:47.438 19692 INFO nova.compute.manager [-] [instance: 
b8c3a1dc-8780-4495-8a7e-3f03f14c8475] During sync_power_state the instance has 
a pending task (spawning). Skip.
2015-06-09 08:03:47.444 19692 DEBUG nova.openstack.common.processutils [-] 
Running cmd (subprocess): mv 
/var/lib/nova/instances/b8c3a1dc-8780-4495-8a7e-3f03f14c8475 
/var/lib/nova/instances/b8c3a1dc-8780-4495-8a7e-3f03f14c8475_del execute 
/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:161
2015-06-09 08:03:47.453 19692 DEBUG nova.openstack.common.processutils [-] 
Result was 0 execute 
/usr/lib/python2.7/site-packages/nova/openstac

[Yahoo-eng-team] [Bug 1463184] Re: Failing to launch VM if RX_TX*Network Qos is greater than Maximum value is 2147483

2015-06-09 Thread Salvatore Orlando
Triaging

** Changed in: neutron
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Changed in: vmware-nsx
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463184

Title:
  Failing to launch VM if RX_TX*Network Qos is greater than Maximum
  value is 2147483

Status in VMware NSX:
  Incomplete

Bug description:
  Unable to launch VMs when the net QoS rate, that is, the flavor's RX_TX
  factor multiplied by the network QoS max bandwidth value, is greater
  than 2147483.

  ERROR Server Error Message: Invalid bandwidth settings. Maximum value
  is 2147483.

  If the default bandwidth is 1 GB/s and RX_TX for a flavor is set to 5,
  it does not work and fails while launching the instance.
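
  As a side note (an inference from the number itself, not confirmed from
  the backend code), the limit looks like a signed 32-bit field scaled by
  1000:

    print((2**31 - 1) // 1000)  # 2147483, the reported maximum
    # assuming the 1 GB/s default is stored as 1048576 kbps:
    print(1024 * 1024 * 5)      # 5242880 > 2147483 -> rejected with RX_TX 5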

  neutron server log :

  neutron.openstack.common.rpc.com ERROR Failed to publish message to
  topic 'notifications.info': [Errno 32] Broken pipe
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 579, in ensure
      return method(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 690, in _publish
      publisher = cls(self.conf, self.channel, topic, **kwargs)
    File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 392, in __init__
      super(NotifyPublisher, self).__init__(conf, channel, topic, **kwargs)
    File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 368, in __init__
      **options)
    File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 315, in __init__
      self.reconnect(channel)
    File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 395, in reconnect
      super(NotifyPublisher, self).reconnect(channel)
    File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", line 323, in reconnect
      routing_key=self.routing_key)
    File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 82, in __init__
      self.revive(self._channel)
    File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 216, in revive
      self.declare()
    File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 102, in declare
      self.exchange.declare()
    File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
      nowait=nowait, passive=passive,
    File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 604, in exchange_declare
      self._send_method((40, 10), args)
    File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 62, in _send_method
      self.channel_id, method_sig, a

  neutron.plugins.vmware.api_client log :

  ERROR Server Error Message: Invalid bandwidth settings. Maximum value
  is 2147483.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1463184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463090] Re: Neutron QOS does not work with fraction RX_TX flavor value

2015-06-09 Thread Salvatore Orlando
*** This bug is a duplicate of bug 1463363 ***
https://bugs.launchpad.net/bugs/1463363

My apologies, I did not notice your bug report as it was targeting
neutron rather than vmware-nsx, and I filed another one as this was
discovered independently in internal testing.

** This bug has been marked a duplicate of bug 1463363
   NSX-mh: Decimal RXTX factor not honoured

** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463090

Title:
  Neutron QOS does not work with fraction RX_TX flavor value

Status in VMware NSX:
  New

Bug description:
  We have noticed this in Icehouse. The neutron QoS queue does not get
  created with a scaled max value when the RX_TX value of the nova flavor
  is a fraction, e.g. 0.5 or 1.5.

  It worked correctly when we tested with an RX_TX value of 5.0: there we
  saw the neutron queue got created with a max value of 5120, since the
  queue value attached to the network is 1024.

  However, for fractional flavor values the neutron queue max value
  stayed the same at 1024.  To reproduce:

  1. Create flavors with appropriate RX_TX values

  2. Create neutron queue with 1024 max value

  3. Create network with queue id for earlier neutron queue

  4. Spin VMs with appropriate flavor

  5. Run neutron queue-list on controller. Check for max value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1463090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463085] Re: Fraction RX_TX value not getting applied to Neutron Queue QOS

2015-06-09 Thread Salvatore Orlando
*** This bug is a duplicate of bug 1463363 ***
https://bugs.launchpad.net/bugs/1463363

** This bug is no longer a duplicate of bug 1463090
   Neutron QOS does not work with fraction RX_TX flavor value
** This bug has been marked a duplicate of bug 1463363
   NSX-mh: Decimal RXTX factor not honoured

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463085

Title:
  Fraction RX_TX value not getting applied to Neutron Queue QOS

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I noticed this in Icehouse using Neutron backed by NSX.

  Not sure if it is related to a similar issue that was fixed :
  https://review.openstack.org/#/c/188560/1

  Following are steps to reproduce it :

  1. Create flavors with RX_TX of 0.25 and 1.5

  2. Create a neutron qos queue with max bandwidth value of 1024

  3. Create a network with queue_id attached

  4. Spin a VM with flavor having RX_TX value of 0.25

  5. Execute neutron queue-list.

  +--------------------------------------+--------------+-----+------+-------------+------+---------+
  | id                                   | name         | min |  max | qos_marking | dscp | default |
  +--------------------------------------+--------------+-----+------+-------------+------+---------+
  | 1b67eeb5-8568-43da-a7de-4bf4248b874d | qostestqueue |   0 | 1024 | untrusted   |    0 | False   |
  | efe223ab-c8aa-46bc-bc62-504dba6f7960 | qostestqueue |   0 | 1024 | untrusted   |    0 | False   |
  +--------------------------------------+--------------+-----+------+-------------+------+---------+

  Expected max value was 256; instead we see 1024. A similar issue is
  seen when creating a flavor with RX_TX of 1.5.

  It works for non-fractional RX_TX values, e.g. for RX_TX of 5.0 we do
  see a neutron queue with max value of 5120.
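
  For reference, the expected scaling is plain multiplication of the
  queue max by the flavor's RX_TX factor; a quick sketch of the expected
  values:

    queue_max = 1024
    for rxtx_factor in (0.25, 1.5, 5.0):
        print(rxtx_factor, int(queue_max * rxtx_factor))
    # -> 256, 1536 and 5120; the report instead sees 1024 for the fractions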

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463501] [NEW] optimize use of $(this) in horizon.tables.js

2015-06-09 Thread Doug Fish
Public bug reported:

As noted in this code review:
https://review.openstack.org/#/c/189453/2/horizon/static/horizon/js/horizon.tables.js
calling $(this) multiple times is not optimal and should be corrected, for
example by caching the result in a local variable.

It wasn't fixed in that code review since it was on a stable branch.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463501

Title:
  optimize use of $(this) in horizon.tables.js

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As noted in this code review:
  https://review.openstack.org/#/c/189453/2/horizon/static/horizon/js/horizon.tables.js
  calling $(this) multiple times is not optimal and should be corrected,
  for example by caching the result in a local variable.

  It wasn't fixed in that code review since it was on a stable branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419823] Re: Nullable image description crashes v2 client

2015-06-09 Thread Jorge Niedbalski
** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419823

Title:
  Nullable image description crashes v2 client

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  Confirmed
Status in glance package in Ubuntu:
  New

Bug description:
  When you somehow set the image description to None the glanceclient v2
  image-list crashes (as well as image-show, image-update for this
  particular image). The only way to show all images now is to use
  client v1, because it's more stable in this case.

  Steps to reproduce:

  1. Open Horizon and go to the edit page of any image.
  2. Set description to anything eg. "123" and save.
  3. Open image edit page again, remove description and save it.
  4. List all images using glanceclient v2: "glance --os-image-api-version 2 
image-list"
  5. Be sad, because of raised exception:

  None is not of type u'string'

  Failed validating u'type' in schema[u'additionalProperties']:
  {u'type': u'string'}

  On instance[u'description']:
  None

  While investigating the issue I found that the
  additionalProperties schema is set to accept only string values, so it
  should be expanded to allow null values as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1419823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443798] Re: Restrict netmask of CIDR to avoid DHCP resync

2015-06-09 Thread Tristan Cacqueray
Thanks Salvatore for the detail.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443798

Title:
  Restrict netmask of CIDR to avoid DHCP resync

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Incomplete
Status in neutron juno series:
  Incomplete
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  If any tenant creates a subnet with a netmask of /31 or /32 in IPv4,
  the network's IP addresses will fail to be generated, and that
  will cause constant resyncs and neutron-dhcp-agent malfunction.
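
  A minimal sketch (using netaddr, and assuming neutron's usual rule of
  excluding the network and broadcast addresses from the default pool) of
  why a /31 leaves nothing to allocate:

    import netaddr

    subnet = netaddr.IPNetwork('192.168.0.0/31')
    start = netaddr.IPAddress(subnet.first + 1)  # 192.168.0.1
    end = netaddr.IPAddress(subnet.last - 1)     # 192.168.0.0
    print(start > end)  # True -> the default allocation pool is empty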

  [Example operation 1]
   - Create subnet from CLI, with CIDR /31 (CIDR /32 has the same result).

  $ neutron subnet-create net 192.168.0.0/31 --name sub
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  |  |
  | cidr  | 192.168.0.0/31   |
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 192.168.0.1  |
  | host_routes   |  |
  | id| 42a91f59-1c2d-4e33-9033-4691069c5e4b |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | sub  |
  | network_id| 65cc6b46-17ec-41a8-9fe4-5bf93fc25d1e |
  | subnetpool_id |  |
  | tenant_id | 4ffb89e718d346b48fdce2ac61537bce |
  +---+--+

  [Example operation 2]
   - Create subnet from API, with cidr /32 (CIDR /31 has the same result).

  $ curl -i -X POST -H "content-type:application/json" -d '{"subnet": { "name": "badsub", "cidr" : "192.168.0.0/32", "ip_version": 4, "network_id": "88143cda-5fe7-45b6-9245-b1e8b75d28d8"}}' -H "x-auth-token:$TOKEN" http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-4e7e74c0-0190-4a69-a9eb-93d545e8aeef
  Date: Thu, 16 Apr 2015 19:21:20 GMT

  {"subnet": {"name": "badsub", "enable_dhcp": true, "network_id":
  "88143cda-5fe7-45b6-9245-b1e8b75d28d8", "tenant_id":
  "4ffb89e718d346b48fdce2ac61537bce", "dns_nameservers": [],
  "gateway_ip": "192.168.0.1", "ipv6_ra_mode": null, "allocation_pools":
  [], "host_routes": [], "ip_version": 4, "ipv6_address_mode": null,
  "cidr": "192.168.0.0/32", "id": "d210d5fd-8b3b-4c0e-b5ad-
  41798bd47d97", "subnetpool_id": null}}

  [Example operation 3]
   - Create subnet from API, with empty allocation_pools.

  $ curl -i -X POST -H "content-type:application/json" -d '{"subnet": { "name": "badsub", "cidr" : "192.168.0.0/24", "allocation_pools": [], "ip_version": 4, "network_id": "88143cda-5fe7-45b6-9245-b1e8b75d28d8"}}' -H "x-auth-token:$TOKEN" http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-54ce81db-b586-4887-b60b-8776a2ebdb4e
  Date: Thu, 16 Apr 2015 19:18:21 GMT

  {"subnet": {"name": "badsub", "enable_dhcp": true, "network_id":
  "88143cda-5fe7-45b6-9245-b1e8b75d28d8", "tenant_id":
  "4ffb89e718d346b48fdce2ac61537bce", "dns_nameservers": [],
  "gateway_ip": "192.168.0.1", "ipv6_ra_mode": null, "allocation_pools":
  [], "host_routes": [], "ip_version": 4, "ipv6_address_mode": null,
  "cidr": "192.168.0.0/24", "id": "abc2dca4-bf8b-46f5-af1a-
  0a1049309854", "subnetpool_id": null}}

  [Trace log]
  2015-04-17 04:23:27.907 16641 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is e0a6a81a005d4aa0b40130506afa0267. _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-17 04:23:27.979 16641 ERROR neutron.agent.dhcp.agent [-] Unable to enable dhcp for 88143cda-5fe7-45b6-9245-b1e8b75d28d8.
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 112, in call_driver
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 201, in enable
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agen

[Yahoo-eng-team] [Bug 1463523] [NEW] More gracefully address IPv6 arping issues

2015-06-09 Thread Matt Kassawara
Public bug reported:

Augment the following patch to more gracefully address IPv6 arping
issues (perhaps fix callers):

https://review.openstack.org/#/c/160799

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463523

Title:
  More gracefully address IPv6 arping issues

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Augment the following patch to more gracefully address IPv6 arping
  issues (perhaps fix callers):

  https://review.openstack.org/#/c/160799

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463525] [NEW] There is no volume encryption metadata for rbd-backed volumes

2015-06-09 Thread Matt Riedemann
Public bug reported:

This came up as a discussion point in the nova IRC channel today because
someone was talking about adding encryption support to Ceph in Nova and
I pointed out that there is already a ceph job that runs the tempest
luks/cryptsetup encrypted volume tests successfully, so why aren't those
failing if it's not supported today?

We got looking at the code and logs and found that when nova tries to
get volume encryption metadata from cinder for rbd-backed instances,
nothing comes back, so nova isn't doing anything with volume encryption
using its providers (luks / cryptsetup).

Change https://review.openstack.org/#/c/189799/ in nova adds logging to
see this:

Confirmed that for LVM backed Cinder we get something back:

http://logs.openstack.org/99/189799/2/check/check-tempest-dsvm-full/c3ee602/logs/screen-n-cpu.txt.gz#_2015-06-09_18_18_18_078

For Ceph we don't:

http://logs.openstack.org/99/189799/2/check/check-tempest-dsvm-full-ceph/353db23/logs/screen-n-cpu.txt.gz#_2015-06-09_18_21_16_723

This might be working as designed, I'm not sure, but I'm opening the bug
to track the effort, since if you think you have encrypted volumes when
using ceph and nova, you probably don't, so there is a false sense of
security here, which is a bug.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ceph volumes

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463525

Title:
  There is no volume encryption metadata for rbd-backed volumes

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  This came up as a discussion point in the nova IRC channel today
  because someone was talking about adding encryption support to Ceph in
  Nova and I pointed out that there is already a ceph job that runs the
  tempest luks/cryptsetup encrypted volume tests successfully, so why
  aren't those failing if it's not supported today?

  We got looking at the code and logs and found that when nova tries to
  get volume encryption metadata from cinder for rbd-backed instances,
  nothing comes back, so nova isn't doing anything with volume encryption
  using its providers (luks / cryptsetup).

  Change https://review.openstack.org/#/c/189799/ in nova adds logging
  to see this:

  Confirmed that for LVM backed Cinder we get something back:

  http://logs.openstack.org/99/189799/2/check/check-tempest-dsvm-full/c3ee602/logs/screen-n-cpu.txt.gz#_2015-06-09_18_18_18_078

  For Ceph we don't:

  http://logs.openstack.org/99/189799/2/check/check-tempest-dsvm-full-ceph/353db23/logs/screen-n-cpu.txt.gz#_2015-06-09_18_21_16_723

  This might be working as designed, I'm not sure, but I'm opening the
  bug to track the effort, since if you think you have encrypted volumes
  when using ceph and nova, you probably don't, so there is a false
  sense of security here, which is a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1463525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463533] [NEW] generate keypair no cache directive

2015-06-09 Thread Ryan Peters
Public bug reported:

There is no cache-control directive when generating a keypair, which
could cause some browsers to cache the private key.

Example:
HTTP Request
GET /project/access_and_security/keypairs/testkey2/generate/ HTTP/1.1

HTTP Response:
HTTP/1.1 200 OK
Date: Mon, 20 Apr 2015 19:07:27 GMT
Server: Apache/2.4.10 (Debian)
Content-Disposition: attachment; filename=testkey2.pem
Content-Language: en
Vary: Accept-Language,Cookie
X-Frame-Options: SAMEORIGIN
Set-Cookie: sessionid="session"
Content-Length: 1675
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: application/binary

The following cache directives should be added to all responses containing
sensitive information:
Cache-control: no-store
Pragma: no-cache
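
A minimal sketch of the proposed fix (hypothetical helper name; Horizon's
real view code differs), using plain Django response headers:

    from django.http import HttpResponse

    def keypair_download_response(private_key, keypair_name):
        response = HttpResponse(private_key,
                                content_type='application/binary')
        response['Content-Disposition'] = ('attachment; filename=%s.pem'
                                           % keypair_name)
        # forbid caching of the private key anywhere along the path
        response['Cache-Control'] = 'no-store'
        response['Pragma'] = 'no-cache'
        return response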

** Affects: horizon
 Importance: Undecided
 Assignee: Ryan Peters (rjpeter2)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ryan Peters (rjpeter2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463533

Title:
  generate keypair no cache directive

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is no cache-control directive when generating a keypair,
  which could cause some browsers to cache the private key.

  Example:
  HTTP Request
  GET /project/access_and_security/keypairs/testkey2/generate/ HTTP/1.1
  
  HTTP Response:
  HTTP/1.1 200 OK
  Date: Mon, 20 Apr 2015 19:07:27 GMT
  Server: Apache/2.4.10 (Debian)
  Content-Disposition: attachment; filename=testkey2.pem
  Content-Language: en
  Vary: Accept-Language,Cookie
  X-Frame-Options: SAMEORIGIN
  Set-Cookie: sessionid="session"
  Content-Length: 1675
  Keep-Alive: timeout=5, max=100
  Connection: Keep-Alive
  Content-Type: application/binary

  The following cache directives should be added to all responses
  containing sensitive information:
  Cache-control: no-store
  Pragma: no-cache

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463589] [NEW] rules referencing security group members expose VMs in overlapping IP scenarios

2015-06-09 Thread Kevin Benton
Public bug reported:

create SG1 and SG2 that only allow traffic to members of their own group
create two networks with same 10.0.0.0/24 CIDR
create port1 in SG1 on net1 with IP 10.0.0.1
create port2 in SG1 on net2 with IP 10.0.0.2
create port3 in SG2 on net1 with IP 10.0.0.2

port1 can communicate with port3 because of the allow rule for port2's
IP

This violates the constraints of the configured security groups.
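
A toy model of the leak (illustrative only, not neutron code): the member
allow-list is effectively keyed on IP address alone, so the network an
address lives on is lost:

    sg1_members = {('net1', '10.0.0.1'), ('net2', '10.0.0.2')}
    allowed_ips = {ip for _net, ip in sg1_members}  # network dropped here

    port3 = ('net1', '10.0.0.2')  # member of SG2, not SG1
    print(port3[1] in allowed_ips)  # True -> port1 may reach port3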

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463589

Title:
  rules referencing security group members expose VMs in overlapping IP
  scenarios

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  create SG1 and SG2 that only allow traffic to members of their own group
  create two networks with same 10.0.0.0/24 CIDR
  create port1 in SG1 on net1 with IP 10.0.0.1
  create port2 in SG1 on net2 with IP 10.0.0.2
  create port3 in SG2 on net1 with IP 10.0.0.2

  port1 can communicate with port3 because of the allow rule for port2's
  IP

  This violates the constraints of the configured security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463594] [NEW] LBaaS drivers can be queried to determine whether they support a feature the API exposes

2015-06-09 Thread Brandon Logan
Public bug reported:

If the neutron lbaas plugin could query the loaded drivers and determine
whether a certain feature that the API exposes is supported, then certain
workarounds could be avoided.  This would allow the API to fail fast and
return a meaningful error to the user explaining the issue, instead of
the current options: fail slowly by throwing the load balancer into ERROR
state, or have the driver do something different than what the user
intended.

Examples:
1) Currently the API allows for a PING health check, but HAProxy does not
support this; instead of throwing an error, it just uses a TCP health check
underneath.

2) This would allow a driver to be able to support a workflow in which
it is responsible for allocating the VIP in whatever way it wants.
Currently a neutron port is created for the VIP in the plugin and passed
down to the driver.  A driver may want to do something different than
that to create a VIP.

One cause for concern is that, with this implemented, the neutron lbaas
API behavior would not be the same across all deployments.  However, with
the introduction of flavors causing the same thing, this shouldn't be a
problem if flavors are not a problem.
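
A hypothetical sketch of what such a capability query could look like (all
names invented for illustration; this is not an existing neutron-lbaas API):

    class DriverBase(object):
        SUPPORTED_MONITOR_TYPES = set()

        @classmethod
        def supports_monitor(cls, monitor_type):
            return monitor_type in cls.SUPPORTED_MONITOR_TYPES

    class HaproxyDriver(DriverBase):
        SUPPORTED_MONITOR_TYPES = {'TCP', 'HTTP', 'HTTPS'}  # no 'PING'

    def create_health_monitor(driver, monitor_type):
        if not driver.supports_monitor(monitor_type):
            # fail fast with a meaningful error instead of silently
            # substituting another check or ending up in ERROR state
            raise ValueError('%s health monitors are not supported by '
                             'this driver' % monitor_type)

    create_health_monitor(HaproxyDriver, 'PING')  # raises ValueError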

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas rfe

** Description changed:

  If the neutron lbaas plugin could query the loaded drivers and determine
  if a certain feature is supported that the API exposes, then certain
  workarounds could be avoided.  This would allow the API to fail fast and
  return back a meaningful error to the user explaining the issue instead
  of the current options, fail slowly by throwing load balancer in ERROR
  state or the driver does somethign different than what the user
  intended.
  
  Examples:
  1) Currently the API allows for a PING health check, but HAProxy does not 
support this, but instead of throwing an error, it just uses a TCP health check 
underneath.
  
  2) This would allow a driver to be able to support a workflow in which
  it is responsible for allocating the VIP in whatever way it wants.
  Currently a neutron port is created for the VIP in the plugin and passed
  down to the driver.  A driver may want to do something different than
  that to create a VIP.
+ 
+ One cause for concern is with this implemented, the neutron lbaas API
+ behavior would not be the same across all deployments.  However, with
+ the introduction of flavors and it causing the same thing, this
+ shouldn't be a problem if flavors is not a problem.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463594

Title:
  LBaaS drivers can be queried to determine whether they support a
  feature the API exposes

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If the neutron lbaas plugin could query the loaded drivers and
  determine whether a certain feature that the API exposes is supported,
  then certain workarounds could be avoided.  This would allow the API to
  fail fast and return a meaningful error to the user explaining the
  issue, instead of the current options: fail slowly by throwing the load
  balancer into ERROR state, or have the driver do something different
  than what the user intended.

  Examples:
  1) Currently the API allows for a PING health check, but HAProxy does not
  support this; instead of throwing an error, it just uses a TCP health
  check underneath.

  2) This would allow a driver to be able to support a workflow in which
  it is responsible for allocating the VIP in whatever way it wants.
  Currently a neutron port is created for the VIP in the plugin and
  passed down to the driver.  A driver may want to do something
  different than that to create a VIP.

  One cause for concern is that, with this implemented, the neutron lbaas
  API behavior would not be the same across all deployments.  However,
  with the introduction of flavors causing the same thing, this shouldn't
  be a problem if flavors are not a problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423165] Re: https: client can cause nova/cinder to leak sockets for 'get' 'show' 'delete' 'update'

2015-06-09 Thread John Griffith
Going to close it for Cinder as well, as I don't know of a way to fix a
broken glanceclient from the consumer end.

If you're interested, however, I did throw together a patched version of
0.14.2 here:
https://github.com/j-griffith/python-glanceclient/tree/stable/icehouse

Maybe you or somebody else could test it out, and we could convince the
glance folks to push a branch for it; or people that need it can maybe
just use it.

Thanks

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423165

Title:
  https: client can cause nova/cinder to leak sockets for 'get' 'show'
  'delete' 'update'

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Glance:
  Fix Released

Bug description:
  
  Other OpenStack services which instantiate a 'https' glanceclient using
  ssl_compression=False and insecure=False (eg Nova, Cinder) are leaking
  sockets due to glanceclient not closing the connection to the Glance
  server.
  
  This could happen for a sub-set of calls, eg 'show', 'delete', 'update'.
  
  netstat -nopd would show the sockets would hang around forever:
  
  ... 127.0.0.1:9292  ESTABLISHED 9552/python  off (0.00/0/0)
  
  urllib's ConnectionPool relies on the garbage collector to tear down
  sockets which are no longer in use. The 'verify_callback' function used to
  validate SSL certs was holding a reference to the VerifiedHTTPSConnection
  instance which prevented the sockets being torn down.

  
  --

  to reproduce, set up devstack with nova talking to glance over https (must be 
performing full cert verification) and
  perform a nova operation such as:

  
   $ nova image-meta 53854ea3-23ed-4682-abf7-8415f2d6b7d9 set foo=bar

  you will see connections from nova to glance which have no timeout
  (off):

   $ netstat -nopd | grep 9292

   tcp0  0 127.0.0.1:34204 127.0.0.1:9292
  ESTABLISHED 9552/python  off (0.00/0/0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1423165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463651] [NEW] test_get_guest_config_os_command_empty fails with the new ImageMetaProps object

2015-06-09 Thread Vladik Romanovsky
Public bug reported:

test_get_guest_config_os_command_empty is incompatible with the new
ImageMetaProps object, as empty values for the os_command_line field
are not allowed.

Traceback (most recent call last):
  File "nova/tests/unit/virt/libvirt/test_driver.py", line 3809, in test_get_guest_config_os_command_empty
    image_meta, disk_info)
  File "nova/virt/libvirt/driver.py", line 4172, in _get_guest_config
    flavor, virt_type)
  File "nova/virt/libvirt/vif.py", line 378, in get_config
    inst_type, virt_type)
  File "nova/virt/libvirt/vif.py", line 162, in get_config_bridge
    inst_type, virt_type)
  File "nova/virt/libvirt/vif.py", line 111, in get_base_config
    use_osinfo).network_model
  File "nova/virt/osinfo.py", line 119, in __init__
    image_meta = objects.ImageMeta.from_dict(image_meta)
  File "nova/objects/image_meta.py", line 80, in from_dict
    image_meta.get("properties", {}))
  File "nova/objects/image_meta.py", line 382, in from_dict
    obj._set_attr_from_current_names(image_props)
  File "nova/objects/image_meta.py", line 362, in _set_attr_from_current_names
    setattr(self, key, image_props[key])
  File "nova/objects/base.py", line 74, in setter
    field_value = field.coerce(self, name, value)
  File "nova/objects/fields.py", line 201, in coerce
    return self._null(obj, attr)
  File "nova/objects/fields.py", line 179, in _null
    raise ValueError(_("Field `%s' cannot be None") % attr)
ValueError: Field `os_command_line' cannot be None
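
A minimal sketch of the coercion rule using oslo.versionedobjects directly
(nova's object fields mirror this behavior; the Demo class is illustrative):

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    @base.VersionedObjectRegistry.register
    class Demo(base.VersionedObject):
        fields = {'os_command_line': fields.StringField(nullable=False)}

    obj = Demo()
    obj.os_command_line = 'console=ttyS0'  # a string is fine
    obj.os_command_line = None  # ValueError: Field `os_command_line' cannot be None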

** Affects: nova
 Importance: Undecided
 Assignee: Vladik Romanovsky (vladik-romanovsky)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Vladik Romanovsky (vladik-romanovsky)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463651

Title:
  test_get_guest_config_os_command_empty fails with the new
  ImageMetaProps object

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  test_get_guest_config_os_command_empty is incompatible with the new
  ImageMetaProps object, as empty values for the os_command_line field
  are not allowed.

  Traceback (most recent call last):
    File "nova/tests/unit/virt/libvirt/test_driver.py", line 3809, in test_get_guest_config_os_command_empty
      image_meta, disk_info)
    File "nova/virt/libvirt/driver.py", line 4172, in _get_guest_config
      flavor, virt_type)
    File "nova/virt/libvirt/vif.py", line 378, in get_config
      inst_type, virt_type)
    File "nova/virt/libvirt/vif.py", line 162, in get_config_bridge
      inst_type, virt_type)
    File "nova/virt/libvirt/vif.py", line 111, in get_base_config
      use_osinfo).network_model
    File "nova/virt/osinfo.py", line 119, in __init__
      image_meta = objects.ImageMeta.from_dict(image_meta)
    File "nova/objects/image_meta.py", line 80, in from_dict
      image_meta.get("properties", {}))
    File "nova/objects/image_meta.py", line 382, in from_dict
      obj._set_attr_from_current_names(image_props)
    File "nova/objects/image_meta.py", line 362, in _set_attr_from_current_names
      setattr(self, key, image_props[key])
    File "nova/objects/base.py", line 74, in setter
      field_value = field.coerce(self, name, value)
    File "nova/objects/fields.py", line 201, in coerce
      return self._null(obj, attr)
    File "nova/objects/fields.py", line 179, in _null
      raise ValueError(_("Field `%s' cannot be None") % attr)
  ValueError: Field `os_command_line' cannot be None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463651/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463655] [NEW] test_model_kvm_bogus fails with ImageMetaProps object

2015-06-09 Thread Vladik Romanovsky
Public bug reported:

test_model_kvm_bogus tests against an invalid network model value.
When trying to use the ImageMetaProps object with the image_meta from the
test, it fails.

2015-06-08 21:58:49.682 | Traceback (most recent call last):
2015-06-08 21:58:49.682 |   File "nova/tests/unit/virt/libvirt/test_vif.py", line 519, in test_model_kvm_bogus
2015-06-08 21:58:49.682 |     image_meta)
..
..
..
2015-06-08 21:58:49.684 |   File "nova/virt/osinfo.py", line 119, in __init__
2015-06-08 21:58:49.684 |     image_meta = objects.ImageMeta.from_dict(image_meta)
2015-06-08 21:58:49.684 |   File "nova/objects/image_meta.py", line 80, in from_dict
2015-06-08 21:58:49.684 |     image_meta.get("properties", {}))
2015-06-08 21:58:49.684 |   File "nova/objects/image_meta.py", line 382, in from_dict
2015-06-08 21:58:49.685 |     obj._set_attr_from_current_names(image_props)
2015-06-08 21:58:49.685 |   File "nova/objects/image_meta.py", line 362, in _set_attr_from_current_names
2015-06-08 21:58:49.685 |     setattr(self, key, image_props[key])
2015-06-08 21:58:49.685 |   File "nova/objects/base.py", line 74, in setter
2015-06-08 21:58:49.685 |     field_value = field.coerce(self, name, value)
2015-06-08 21:58:49.685 |   File "nova/objects/fields.py", line 203, in coerce
2015-06-08 21:58:49.685 |     return self._type.coerce(obj, attr, value)
2015-06-08 21:58:49.685 |   File "nova/objects/fields.py", line 480, in coerce
2015-06-08 21:58:49.685 |     return super(VIFModel, self).coerce(obj, attr, value)
2015-06-08 21:58:49.685 |   File "nova/objects/fields.py", line 284, in coerce
2015-06-08 21:58:49.685 |     raise ValueError(msg)
2015-06-08 21:58:49.686 | ValueError: Field value acme is invalid

** Affects: nova
 Importance: Undecided
 Assignee: Vladik Romanovsky (vladik-romanovsky)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Vladik Romanovsky (vladik-romanovsky)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463655

Title:
  test_model_kvm_bogus fails with ImageMetaProps object

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  test_model_kvm_bogus tests against an invalid network model value.
  When trying to use the ImageMetaProps object with the image_meta from
  the test, it fails.

  2015-06-08 21:58:49.682 | Traceback (most recent call last):
  2015-06-08 21:58:49.682 |   File "nova/tests/unit/virt/libvirt/test_vif.py", line 519, in test_model_kvm_bogus
  2015-06-08 21:58:49.682 |     image_meta)
  ..
  ..
  ..
  2015-06-08 21:58:49.684 |   File "nova/virt/osinfo.py", line 119, in __init__
  2015-06-08 21:58:49.684 |     image_meta = objects.ImageMeta.from_dict(image_meta)
  2015-06-08 21:58:49.684 |   File "nova/objects/image_meta.py", line 80, in from_dict
  2015-06-08 21:58:49.684 |     image_meta.get("properties", {}))
  2015-06-08 21:58:49.684 |   File "nova/objects/image_meta.py", line 382, in from_dict
  2015-06-08 21:58:49.685 |     obj._set_attr_from_current_names(image_props)
  2015-06-08 21:58:49.685 |   File "nova/objects/image_meta.py", line 362, in _set_attr_from_current_names
  2015-06-08 21:58:49.685 |     setattr(self, key, image_props[key])
  2015-06-08 21:58:49.685 |   File "nova/objects/base.py", line 74, in setter
  2015-06-08 21:58:49.685 |     field_value = field.coerce(self, name, value)
  2015-06-08 21:58:49.685 |   File "nova/objects/fields.py", line 203, in coerce
  2015-06-08 21:58:49.685 |     return self._type.coerce(obj, attr, value)
  2015-06-08 21:58:49.685 |   File "nova/objects/fields.py", line 480, in coerce
  2015-06-08 21:58:49.685 |     return super(VIFModel, self).coerce(obj, attr, value)
  2015-06-08 21:58:49.685 |   File "nova/objects/fields.py", line 284, in coerce
  2015-06-08 21:58:49.685 |     raise ValueError(msg)
  2015-06-08 21:58:49.686 | ValueError: Field value acme is invalid

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463656] [NEW] update_port_status always looks up network

2015-06-09 Thread Kevin Benton
Public bug reported:

update_port_status calls get_network to construct the PortContext object
for the port. However, there are cases where the caller has already
looked up the network (e.g. get_device_details) so this double-lookup is
a waste of resources.
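
A toy sketch of the optimization (illustrative names, not the real ML2
signatures): let callers that already hold the network pass it in:

    class Plugin(object):
        def __init__(self, networks):
            self.networks = networks
            self.lookups = 0

        def get_network(self, network_id):
            self.lookups += 1  # stands in for a DB round trip
            return self.networks[network_id]

        def update_port_status(self, port, status, network=None):
            if network is None:  # reuse the caller's copy when available
                network = self.get_network(port['network_id'])
            port['status'] = status

    plugin = Plugin({'net-1': {'id': 'net-1'}})
    port = {'network_id': 'net-1'}
    net = plugin.get_network('net-1')  # e.g. the get_device_details path
    plugin.update_port_status(port, 'ACTIVE', network=net)
    print(plugin.lookups)  # 1 -> the second lookup was avoided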

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463656

Title:
  update_port_status always looks up network

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  update_port_status calls get_network to construct the PortContext
  object for the port. However, there are cases where the caller has
  already looked up the network (e.g. get_device_details) so this
  double-lookup is a waste of resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463658] [NEW] Firewall Rules table missing display of the "Shared" attribute and its value

2015-06-09 Thread vishwanath jayaraman
Public bug reported:

When a firewall rule is created from the Horizon UI, it provides a
checkbox to enable 'Sharing' the rule. However, the Firewall Rules table
does not have a column for displaying the 'Shared' attribute and its
value, as can be seen in the attached screenshot.

** Affects: horizon
 Importance: Undecided
 Assignee: vishwanath jayaraman (vishwanathj)
 Status: New


** Tags: fwaas

** Attachment added: "SharedAttributeAndValueMissing.png"
   https://bugs.launchpad.net/bugs/1463658/+attachment/4412360/+files/SharedAttributeAndValueMissing.png

** Project changed: neutron => horizon

** Changed in: horizon
 Assignee: (unassigned) => vishwanath jayaraman (vishwanathj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463658

Title:
  Firewall Rules table missing display of the "Shared" attribute and its
  value

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a firewall rule is created from the Horizon UI, it provides a
  checkbox to enable 'Sharing' the rule. However, the Firewall Rules
  table does not have a column for displaying the 'Shared' attribute and
  its value, as can be seen in the attached screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463665] [NEW] Missing requirement for PLUMgrid Neutron Plugin

2015-06-09 Thread Fawad Khaliq
Public bug reported:

The networking-plumgrid package is missing from the requirements for the
PLUMgrid Neutron plugin.

** Affects: neutron
 Importance: Undecided
 Assignee: Fawad Khaliq (fawadkhaliq)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Fawad Khaliq (fawadkhaliq)

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463665

Title:
  Missing requirement for PLUMgrid Neutron Plugin

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The networking-plumgrid package is missing from the requirements for
  the PLUMgrid Neutron plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp