[Yahoo-eng-team] [Bug 1567246] [NEW] Proxy server returning response code 404 instead of 503

2016-04-07 Thread rohita joshi
Public bug reported:

When executing the create-object-metadata use case with the following curl
command, in an environment where the container server is down,

curl -i // -X POST -H "X-Auth-Token: " -H "X-Object-Meta-: "

HTTP response code 404 (resource not found) is returned, while the logs show
the container server refusing connections with response code 503 (service
unavailable). The proxy should return HTTP 503, since the container services
are stopped and therefore unavailable. Response code 404 misleads the user,
as it implies that the container server is working but the container for
which the metadata is being created does not exist.

Cause: After getting 503 from the container server multiple times, the proxy
server sends the POST request to the object server, which responds with error
code 404 (requested object not found). This response is returned to the user,
which is not correct. The proxy server should return error code 503, as the
actual error is caused by the unavailability of the container server.
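
For illustration only (not part of the original report), a minimal Python
sketch of the same metadata POST; the endpoint, token and metadata header
below are hypothetical placeholders:

    import requests

    url = "http://proxy.example.com:8080/v1/AUTH_test/mycontainer/myobject"  # hypothetical
    headers = {
        "X-Auth-Token": "TOKEN",            # hypothetical token
        "X-Object-Meta-Color": "blue",      # hypothetical metadata key/value
    }
    resp = requests.post(url, headers=headers)
    # With the container server down, the reporter observed status code 404,
    # while 503 (Service Unavailable) would be the expected result.
    print(resp.status_code)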

** Affects: keystone
 Importance: Undecided
 Assignee: rohita joshi (rjoshi16)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => rohita joshi (rjoshi16)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1567246

Title:
  Proxy server returning response code 404 instead of 503

Status in OpenStack Identity (keystone):
  New

Bug description:
  When executing the create-object-metadata use case with the following curl
  command, in an environment where the container server is down,

  curl -i // -X POST -H "X-Auth-Token: " -H "X-Object-Meta-: "

  HTTP response code 404 (resource not found) is returned, while the logs show
  the container server refusing connections with response code 503 (service
  unavailable). The proxy should return HTTP 503, since the container services
  are stopped and therefore unavailable. Response code 404 misleads the user,
  as it implies that the container server is working but the container for
  which the metadata is being created does not exist.

  Cause: After getting 503 from the container server multiple times, the proxy
  server sends the POST request to the object server, which responds with
  error code 404 (requested object not found). This response is returned to
  the user, which is not correct. The proxy server should return error code
  503, as the actual error is caused by the unavailability of the container
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1567246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567253] [NEW] Should be space in between two words

2016-04-07 Thread Pankaj Mishra
Public bug reported:

There is no space between the two words.

pankaj@pankaj-VirtualBox:~/DevStack/devstack$ glance md-tag-create --name ab new-ns
404 Not Found: Metadata definition namespace=new-nswas not found. (HTTP 404).

Expected result: there should be a space between the two words.
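
For illustration only (this is not the actual Glance code), the missing space
can arise from adjacent string literals in the message template being joined
without a separating space; the template text below is an assumption:

    namespace = "new-ns"

    # the observed, broken rendering
    broken = ("Metadata definition namespace=%s"
              "was not found." % namespace)
    print(broken)   # Metadata definition namespace=new-nswas not found.

    # the expected rendering, with the space restored in the template
    fixed = ("Metadata definition namespace=%s "
             "was not found." % namespace)
    print(fixed)    # Metadata definition namespace=new-ns was not found.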

** Affects: glance
 Importance: Undecided
 Assignee: Pankaj Mishra (pankaj-mishra)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Pankaj Mishra (pankaj-mishra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1567253

Title:
  Should be space in between two words

Status in Glance:
  In Progress

Bug description:
  There is no space between the two words.

  pankaj@pankaj-VirtualBox:~/DevStack/devstack$ glance md-tag-create --name ab new-ns
  404 Not Found: Metadata definition namespace=new-nswas not found. (HTTP 404).

  Expected result: there should be a space between the two words.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1567253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564820] Re: DSCP rules won't get updated on ports

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300635
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6b6c0421bb6658b475bddb68f766b395945e1b46
Submitter: Jenkins
Branch: master

commit 6b6c0421bb6658b475bddb68f766b395945e1b46
Author: Nate Johnston 
Date:   Fri Apr 1 15:53:20 2016 -0400

QoS DSCP use mod_flow instead of mod_flows

In the implementation of the DSCP QoS rule, the QosOVSAgentDriver uses the
wrong method to modify br-int flows: it uses br_int.mod_flows() whilst it
should use br_int.mod_flow().

This patch fixes that and also adds verification of DSCP updates, as we
already have for bandwidth, to exercise that code path and avoid
regressions.

Change-Id: I685ac373701ff8407fd7fbf649e17a2f7dfc0008
Closes-Bug: #1564820


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564820

Title:
  DSCP rules won't get updated on ports

Status in neutron:
  Fix Released

Bug description:
  When updating DSCP rules, the OVS agent will fail.


  Reason
  ==
  In the implementation of the DSCP QoS rule, the QosOVSAgentDriver uses the
  wrong method to modify br-int flows: it uses br_int.mod_flows() whilst it
  should use br_int.mod_flow():

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L89
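
  A minimal, self-contained sketch (not neutron code) of why the misspelled
  method name breaks flow updates - the bridge object only defines mod_flow(),
  so calling mod_flows() raises AttributeError and the agent's update fails:

      class FakeBridge(object):
          def mod_flow(self, **kwargs):
              print("flow modified:", kwargs)

      br_int = FakeBridge()
      br_int.mod_flow(in_port=1, actions="mod_nw_tos:32")       # works
      try:
          br_int.mod_flows(in_port=1, actions="mod_nw_tos:32")  # what the buggy driver did
      except AttributeError as exc:
          print("agent failure:", exc)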

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1564820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563883] Re: test_net_ip_availability_after_subnet_and_ports failed in _assert_total_and_used_ips

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/299647
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=27634bb2ba2637d694c7d0aa5758173d12ef579a
Submitter: Jenkins
Branch: master

commit 27634bb2ba2637d694c7d0aa5758173d12ef579a
Author: Armando Migliaccio 
Date:   Wed Mar 30 14:24:58 2016 -0700

Fix race conditions in IP availability API tests

DHCP port creation is asynchronous with subnet creation.
Therefore there is a time window where, depending on how fast
the DHCP agent honors the request, the DHCP port IP allocation
may or may not be accounted for in the total number of used IPs
for the network. To kill the race, do not run dhcp on the
created subnets at all.

Closes-bug: 1563883

Change-Id: Idda25e65d04852d68a3c160cc9deefdb4ee82dcd


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563883

Title:
  test_net_ip_availability_after_subnet_and_ports failed in
  _assert_total_and_used_ips

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/43/299243/3/check/gate-neutron-dsvm-
  api/ab12586/testr_results.html.gz

  Traceback (most recent call last):
    File "neutron/tests/api/test_network_ip_availability.py", line 138, in test_net_ip_availability_after_subnet_and_ports
      network, net_availability)
    File "neutron/tests/api/test_network_ip_availability.py", line 79, in _assert_total_and_used_ips
      self.assertEqual(expected_used, availability['used_ips'])
    File "/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/testtools/testcase.py", line 362, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/testtools/testcase.py", line 447, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: 2 != 3
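
  As context for the fix referenced above, a hedged sketch (the field values
  are hypothetical, not the actual tempest/neutron test code) of the subnet
  create request body the test would send so that the asynchronous DHCP port
  allocation can never race with the used-IP accounting:

      network_id = "NETWORK-UUID"             # hypothetical placeholder
      subnet_body = {
          "subnet": {
              "network_id": network_id,
              "cidr": "10.0.5.0/24",
              "ip_version": 4,
              "enable_dhcp": False,           # the key point of the fix
          }
      }
      # with DHCP disabled, only ports created by the test itself are counted
      # in the network's 'used_ips'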

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564834] Re: With Ceph horizon containers page raises error "Unable to get the Swift service info"

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301110
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=80e52c1ae5e52abcc83c8efbb469a0d0ce020815
Submitter: Jenkins
Branch: master

commit 80e52c1ae5e52abcc83c8efbb469a0d0ce020815
Author: Timur Sufiev 
Date:   Mon Apr 4 16:05:35 2016 +0300

Fix new Swift UI to work with Ceph backend

First, tolerate a missing '/info' API endpoint, which Ceph doesn't
support yet. Second, the `content_type` attribute on objects may not be
set, so don't rely heavily on it.

Change-Id: I101338aa9c96a6551bfbf2dd9c460a4801b4e7b6
Closes-Bug: #1564834


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1564834

Title:
  With Ceph horizon containers page raises error "Unable to get the
  Swift service info"

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps:
    - Deploy a cluster with Ceph as the storage backend.
    - Log in to Horizon as admin
    - Go to Project -> Containers
      -> There is an alert "Error: Unable to get the Swift service info."
    - Create a container and enter it
    - Upload a file
      -> The file is uploaded and the object count is incremented, but the file isn't shown
    - Create a folder
      -> The folder is created and the object count is incremented, but the folder isn't shown

  Problems:
    - Page alert "Error: Unable to get the Swift service info."
    - Not possible to view and manipulate the uploaded files and folders
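
  A hedged sketch (not the actual Horizon code) of the two tolerances the fix
  above describes - swallowing a missing '/info' capabilities endpoint and
  defaulting a missing content_type:

      def get_swift_info(swift_client):
          try:
              return swift_client.get_capabilities()
          except Exception:
              # Ceph RadosGW may not expose '/info' at all; degrade gracefully
              return {}

      def object_content_type(obj):
          # objects returned by a Ceph backend may lack content_type
          return getattr(obj, 'content_type', None) or 'application/octet-stream'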

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1564834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567302] [NEW] Need Code refactor in glance

2016-04-07 Thread Deepak Jon
Public bug reported:

There is one extra new line (line break); the message should start on the
same line, without the line break.
For more details see the link below:
https://github.com/openstack/glance/blob/master/glance/common/exception.py#L530-L531
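
A hypothetical illustration (not the actual glance exception.py code) of an
unintended line break inside a message template and the corrected form:

    broken = ("Unable to create the image."
              "\n%(reason)s")                      # renders with a stray newline
    fixed = "Unable to create the image. %(reason)s"
    print(broken % {"reason": "Invalid checksum."})
    print(fixed % {"reason": "Invalid checksum."})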

** Affects: glance
 Importance: Undecided
 Assignee: Deepak Jon (deepak-kumar-9)
 Status: Confirmed

** Changed in: glance
 Assignee: (unassigned) => Deepak Jon (deepak-kumar-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1567302

Title:
  Need Code refactor in glance

Status in Glance:
  Confirmed

Bug description:
  There is one extra new line (line break); the message should start on the
  same line, without the line break.
  For more details see the link below:
  https://github.com/openstack/glance/blob/master/glance/common/exception.py#L530-L531

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1567302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567246] Re: Proxy server returning response code 404 instead of 503

2016-04-07 Thread rohita joshi
** Project changed: keystone => swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1567246

Title:
  Proxy server returning response code 404 instead of 503

Status in OpenStack Object Storage (swift):
  New

Bug description:
  When executing the create-object-metadata use case with the following curl
  command, in an environment where the container server is down,

  curl -i // -X POST -H "X-Auth-Token: " -H "X-Object-Meta-: "

  HTTP response code 404 (resource not found) is returned, while the logs show
  the container server refusing connections with response code 503 (service
  unavailable). The proxy should return HTTP 503, since the container services
  are stopped and therefore unavailable. Response code 404 misleads the user,
  as it implies that the container server is working but the container for
  which the metadata is being created does not exist.

  Cause: After getting 503 from the container server multiple times, the proxy
  server sends the POST request to the object server, which responds with
  error code 404 (requested object not found). This response is returned to
  the user, which is not correct. The proxy server should return error code
  503, as the actual error is caused by the unavailability of the container
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/swift/+bug/1567246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567317] [NEW] wrong link to attached vm from volumes list

2016-04-07 Thread Martin Pavlásek
Public bug reported:

1. log in as the 'demo' user
2. create instance 'test'
3. create volume 'test_volume'
4. via the Manage Attachments dialogue, attach 'test_volume' to the 'test' instance
5. log out
6. log in as 'admin'
7. navigate to Admin -> System -> Volumes
8. click the link 'test' (the VM name) in the 'Attached To' column

Now consider terminating the VM that has 'test_volume' attached.

Current result:
you are redirected to:
/dashboard/project/instances/uuid-of-instance/

This is wrong, because the 'admin' user is a member of the 'admin' project
only, and the 'demo' user is likewise a member of the 'demo' project only. The
link shows everything as expected, but if you try to terminate the instance
from the drop-down menu, you will not see any notification (neither success
nor failure) and you are actually redirected to:
/dashboard/project/instances/

Expected result:
clicking the VM name in the volumes list navigates the user to:
/dashboard/admin/instances/uuid-of-instance/detail

An attempt to terminate the VM succeeds, a notification is displayed, and the
user lands on:
/dashboard/admin/instances/

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567317

Title:
  wrong link to attached vm from volumes list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. log in as the 'demo' user
  2. create instance 'test'
  3. create volume 'test_volume'
  4. via the Manage Attachments dialogue, attach 'test_volume' to the 'test' instance
  5. log out
  6. log in as 'admin'
  7. navigate to Admin -> System -> Volumes
  8. click the link 'test' (the VM name) in the 'Attached To' column

  Now consider terminating the VM that has 'test_volume' attached.

  Current result:
  you are redirected to:
  /dashboard/project/instances/uuid-of-instance/

  This is wrong, because the 'admin' user is a member of the 'admin' project
  only, and the 'demo' user is likewise a member of the 'demo' project only.
  The link shows everything as expected, but if you try to terminate the
  instance from the drop-down menu, you will not see any notification
  (neither success nor failure) and you are actually redirected to:
  /dashboard/project/instances/

  Expected result:
  clicking the VM name in the volumes list navigates the user to:
  /dashboard/admin/instances/uuid-of-instance/detail

  An attempt to terminate the VM succeeds, a notification is displayed, and
  the user lands on:
  /dashboard/admin/instances/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506432] Re: Detach Interface Action should not be shown if no interfaces are attached

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/235820
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=701d75f9d5afc741ed1c5040c9b219acf53899b9
Submitter: Jenkins
Branch: master

commit 701d75f9d5afc741ed1c5040c9b219acf53899b9
Author: Saravanan KR 
Date:   Fri Oct 16 09:38:47 2015 +

Prevent 'Detach Interface' action if an interface is not attached

In the instance action menu of the Instances panel, the 'Detach Interface'
action need not be displayed if the instance does not have an attached
interface. A check has been added that validates whether any 'fixed' IP
addresses are associated with the instance before enabling the 'Detach
Interface' action menu.

Change-Id: I73d4c052b4aa12a50887c220d1bcd3a0b3f9a44d
Closes-Bug: 1506432


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506432

Title:
  Detach Interface Action should not be shown if no interfaces are
  attached

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When the instance is not associated with any interface, the Detach
  Interface action should not be shown in the instance panel.
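
  A hedged sketch (attribute names are assumptions, not the exact Horizon
  code) of gating a row action on whether the instance has any fixed IP
  address attached:

      class DetachInterface(object):
          def allowed(self, request, instance):
              addresses = getattr(instance, 'addresses', None) or {}
              for ips in addresses.values():
                  if any(ip.get('OS-EXT-IPS:type') == 'fixed' for ip in ips):
                      return True
              return False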

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567334] [NEW] RFE: Add ethertype field when creating security group rule

2016-04-07 Thread Eran Kuris
Public bug reported:

In the CLI, when creating a security group rule, there is a separation
between the IPv4 and IPv6 types.
I would like to add this option to Horizon so that we have a separation
between IPv4 and IPv6 rules.

In the CLI:
neutron security-group-rule-create  --direction ingress --ethertype IPv6

Version:
Liberty

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567334

Title:
  RFE: Add ethertype field when creating security group rule

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the CLI, when creating a security group rule, there is a separation
  between the IPv4 and IPv6 types.
  I would like to add this option to Horizon so that we have a separation
  between IPv4 and IPv6 rules.

  In the CLI:
  neutron security-group-rule-create  --direction ingress --ethertype IPv6

  Version:
  Liberty

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567336] [NEW] instance_info_cache_update() is not retried on deadlock

2016-04-07 Thread Roman Podoliaka
Public bug reported:

Description
=

When Galera is used in multi-writer mode it's possible that the
instance_info_cache_update() DB API method will be called for the very
same database row concurrently on two different MySQL servers. Due to
how Galera works internally, this will cause a deadlock exception for one
of the callers (see http://www.joinfu.com/2015/01/understanding-
reservations-concurrency-locking-in-nova/ for details).

instance_info_cache_update() is not currently retried on deadlock.
Should that happen, the operation in question may fail, e.g. association
of a floating IP.
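
A hedged sketch (not the actual Nova code) of the kind of fix implied here:
decorate the DB API call with oslo.db's deadlock retry wrapper so that a
Galera certification failure is retried instead of bubbling up to the API:

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def instance_info_cache_update(context, instance_uuid, values):
        # ... perform the UPDATE of the instance_info_caches row here ...
        pass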


Steps to reproduce
===

1. Deploy a Galera cluster in multi-writer mode.
2. Ensure there are at least two nova-conductors using two different MySQL
servers in the Galera cluster.
3. Create an instance.
4. Associate / disassociate floating IPs concurrently (e.g. via Rally)


Expected result
=

All associate / disassociate operations succeed.


Actual result
==

One or more operations fail with an exception in python-novaclient:

  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 662, in remove_floating_ip
    self._action('removeFloatingIp', server, {'address': address})
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1279, in _action
    return self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 449, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 424, in _cs_request
    resp, body = self._time_request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request
    raise exceptions.from_response(resp, body, url, method)
ClientException: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: req-ac412e1c-afcf-4ef3-accc-b5463805ca74)


Environment
==

OpenStack Liberty
Galera cluster (3 nodes) running in multiwriter mode

** Affects: nova
 Importance: Medium
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567336

Title:
  instance_info_cache_update() is not retried on deadlock

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  =

  When Galera is used in multi-writer mode it's possible that the
  instance_info_cache_update() DB API method will be called for the very
  same database row concurrently on two different MySQL servers. Due to
  how Galera works internally, this will cause a deadlock exception for
  one of the callers (see http://www.joinfu.com/2015/01/understanding-
  reservations-concurrency-locking-in-nova/ for details).

  instance_info_cache_update() is not currently retried on deadlock.
  Should that happen, the operation in question may fail, e.g.
  association of a floating IP.

  
  Steps to reproduce
  ===

  1. Deploy a Galera cluster in multi-writer mode.
  2. Ensure there are at least two nova-conductors using two different MySQL
  servers in the Galera cluster.
  3. Create an instance.
  4. Associate / disassociate floating IPs concurrently (e.g. via Rally)

  
  Expected result
  =

  All associate / disassociate operations succeed.

  
  Actual result
  ==

  One or more operations fail with an exception in python-novaclient:

    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 662, in remove_floating_ip
      self._action('removeFloatingIp', server, {'address': address})
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1279, in _action
      return self.api.client.post(url, body=body)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 449, in post
      return self._cs_request(url, 'POST', **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 424, in _cs_request
      resp, body = self._time_request(url, method, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request
      resp, body = self.request(url, method, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request
      raise exceptions.from_response(resp, body, url, method)
  ClientException: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: req-ac412e1c-afcf-4ef3-accc-b5463805ca74)

  
  Environment
  ==

  OpenStack Liberty
  Galera cluster (3 nodes) running in multiwriter mode

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567336/+subscriptions

[Yahoo-eng-team] [Bug 1562488] Re: "Set as Active Project" menu is shown even though the project is disabled

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/298026
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e05a8fe574ce7c48003ae260581c4b45917565c1
Submitter: Jenkins
Branch:master

commit e05a8fe574ce7c48003ae260581c4b45917565c1
Author: Kenji Ishii 
Date:   Sun Mar 27 10:52:22 2016 +

Hide project switch menu when project is disabled

Same as the project list in the header area, hide the project switch menu
on the project list page when the project is disabled.

Change-Id: I9e8d893cf6e89f4c5c32d5e3e687977ef3000631
Closes-bug: #1562488


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562488

Title:
  "Set as Active Project" menu is shown even though the project is
  disabled

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The project selection pulldown in the header is controlled to display only
  enabled projects.
  However, on the project list page, the "Set as Active Project" menu is
  displayed even though a project is disabled.
  The error message "Project switch failed for user xxx" is then displayed.
  This should be improved so that the menu is not shown if the project is
  disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566857] Re: Keystone authtoken middleware seems to work wrong with memcached cache

2016-04-07 Thread Dina Belova
Was affected by the same environmental issue as
https://bugs.launchpad.net/keystone/+bug/1566835

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1566857

Title:
  Keystone authtoken middleware seems to work wrong with memcached cache

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  == Abstract ==

  During the Keystone OSprofiler integration there was a wish to check how
  Keystone changed from Liberty to Mitaka with regard to the DB/caching
  layers' behaviour. There were lots of changes added related to federation
  support and due to the move to oslo.cache.

  Ideas of the experiment can be found here:
  http://docs.openstack.org/developer/performance-
  docs/test_plans/keystone/plan.html

  == What was discovered ==

  Preliminary results can be found here -
  http://docs.openstack.org/developer/performance-
  docs/test_results/keystone/all-in-one/index.html

  In short: two identical Keystone API calls were made against both the
  Liberty and Mitaka environments. To exercise the keystone authtoken
  middleware, a nova boot request was chosen. The second call was profiled
  using OSprofiler and compared between the Liberty and Mitaka environments.

  Both env had the same Apache config, the same Keystone authtoken cache
  config in the services. For instance, for Nova:

  [keystone_authtoken]
  memcached_servers = 10.0.2.15:11211
  signing_dir = /var/cache/nova
  cafile = /opt/stack/data/ca-bundle.pem
  auth_uri = http://10.0.2.15:5000
  project_domain_id = default
  project_name = service
  user_domain_id = default
  password = password
  username = nova
  auth_url = http://10.0.2.15:35357
  auth_type = password

  On Liberty, only the first keystone authtoken middleware call from nova
  appears in the request tree - that is expected behaviour, since afterwards,
  with a memcached backend, the cached authentication values should be used
  by the other services, so we see no more keystone calls in the request
  tree. In Mitaka, all API calls to nova, glance and neutron are paired with
  an API call to keystone.

  Liberty call example: http://dinabelova.github.io/liberty_server_create.html
  Mitaka call example: http://dinabelova.github.io/mitaka_server_create.html

  == Note ==

  This might (?) be related to the
  https://bugs.launchpad.net/keystone/+bug/1566835 issue - I'm not sure,
  this needs to be investigated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1566857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567369] [NEW] Added server tags support in nova-api

2016-04-07 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/268932
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 537df23d85e0f7c461643efe6b6501d267ae99d0
Author: Sergey Nikitin 
Date:   Fri Jan 15 17:11:05 2016 +0300

Added server tags support in nova-api

Added new API microversion which allows the following:
- add tag to the server
- replace set of server tags with new set of tags
- get information about server, including list of tags for server
- get just list of tags for server
- check if tag exists on a server
- remove specified tag from server
- remove all tags from server
- search servers by tags

DocImpact
APIImpact

Implements: blueprint tag-instances

Change-Id: I9573aa52aae9f49945d8806ca5e52ada29fb087a
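
For illustration only (not from the original commit), a hedged sketch of
exercising the operations listed above with plain HTTP calls; the endpoint,
token and microversion value are assumptions:

    import requests

    NOVA = "http://controller:8774/v2.1"            # hypothetical endpoint
    HEADERS = {
        "X-Auth-Token": "TOKEN",                    # hypothetical token
        "X-OpenStack-Nova-API-Version": "2.26",     # assumed microversion with tag support
    }
    server_id = "SERVER-UUID"                       # hypothetical server

    # add a tag to the server
    requests.put("%s/servers/%s/tags/production" % (NOVA, server_id), headers=HEADERS)
    # get just the list of tags for the server
    print(requests.get("%s/servers/%s/tags" % (NOVA, server_id), headers=HEADERS).json())
    # search servers by tags
    print(requests.get("%s/servers?tags=production" % NOVA, headers=HEADERS).json())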

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567369

Title:
  Added server tags support in nova-api

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/268932
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 537df23d85e0f7c461643efe6b6501d267ae99d0
  Author: Sergey Nikitin 
  Date:   Fri Jan 15 17:11:05 2016 +0300

  Added server tags support in nova-api
  
  Added new API microversion which allows the following:
  - add tag to the server
  - replace set of server tags with new set of tags
  - get information about server, including list of tags for server
  - get just list of tags for server
  - check if tag exists on a server
  - remove specified tag from server
  - remove all tags from server
  - search servers by tags
  
  DocImpact
  APIImpact
  
  Implements: blueprint tag-instances
  
  Change-Id: I9573aa52aae9f49945d8806ca5e52ada29fb087a

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567374] [NEW] inconsistent URL of detail of instance

2016-04-07 Thread Martin Pavlásek
Public bug reported:

It would be nice if the VM detail URLs used the same format ('uuid' or
'uuid/detail'). Compare this example from the 'demo' user's view and the
'admin' user's view:

/dashboard/project/instances/b7fe13de-8877-4277-bbd2-da57a818bb7e/
/dashboard/admin/instances/b7fe13de-8877-4277-bbd2-da57a818bb7e/detail

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567374

Title:
  inconsistent URL of detail of instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It would be nice if the VM detail URLs used the same format ('uuid' or
  'uuid/detail'). Compare this example from the 'demo' user's view and the
  'admin' user's view:

  /dashboard/project/instances/b7fe13de-8877-4277-bbd2-da57a818bb7e/
  /dashboard/admin/instances/b7fe13de-8877-4277-bbd2-da57a818bb7e/detail

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567393] [NEW] remove menu item "disable user" for admin

2016-04-07 Thread Sergei Chipiga
Public bug reported:

Environment:
- upstream

Steps:
- Login as admin
- Go to "Identity" -> "Users"
- Click dropdown actions menu for admin

Expected result:
- Only "Change Password" is present

Actual result:
- There is a "Disable user" item but it is disabled. In any case it is not
possible to disable admin. It would be better to remove this item in order
not to confuse the user.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon-core

** Attachment added: "disabled_admin_menu_item.png"
   
https://bugs.launchpad.net/bugs/1567393/+attachment/4627449/+files/disabled_admin_menu_item.png

** Description changed:

  Environment:
  - upstream
  
  Steps:
  - Login as admin
  - Go to "Identity" -> "Users"
  - Click dropdown actions menu for admin
  
  Expected result:
  - Only "Change Password" is present
  
  Actual result:
- - There is item "Disable user" but it's disabled. And any case it's not 
possible to disable admin. It's better to remove such item in order not to 
confuse a user.
+ - There is item "Disable user" but it's disabled. And any case it's not 
possible to disable admin. It's better to remove such item in order not to 
confuse an user.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567393

Title:
  remove menu item "disable user" for admin

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Environment:
  - upstream

  Steps:
  - Login as admin
  - Go to "Identity" -> "Users"
  - Click dropdown actions menu for admin

  Expected result:
  - Only "Change Password" is present

  Actual result:
  - There is a "Disable user" item but it is disabled. In any case it is not
  possible to disable admin. It would be better to remove this item in order
  not to confuse the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561233] Re: "Failed to format sample" warning in neutron.conf.sample file

2016-04-07 Thread Henry Gessau
*** This bug is a duplicate of bug 1548433 ***
https://bugs.launchpad.net/bugs/1548433

This is fixed by https://review.openstack.org/292640

** No longer affects: oslo.config

** This bug has been marked a duplicate of bug 1548433
   neutron returns objects other than oslo_config.cfg.Opt instances from 
list_opts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561233

Title:
  "Failed to format sample" warning in neutron.conf.sample file

Status in neutron:
  New

Bug description:
  After generating the neutron configuration files, the following
  warnings appear in the [nova] section of the neutron.conf.sample file:

  #
  # From nova.auth
  #

  # Warning: Failed to format sample for auth_url
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for default_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for default_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for password
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for tenant_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for tenant_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for trust_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for username
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567405] [NEW] no warning message for invalid email with inline user edit

2016-04-07 Thread Sergei Chipiga
Public bug reported:

Environment:
- upstream

Steps:
- Login as admin
- Go to "Identity" -> "Users"
- Move cursor to user e-mail and click icon "edit"
- Type "test@email:" to e-mail field and click icon "save"

Expected result:
- Warning message that e-mail has invalid format

Actual result:
- No notifications. Just e-mail is still in edit mode

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567405

Title:
  no warning message for invalid email with inline user edit

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Environment:
  - upstream

  Steps:
  - Login as admin
  - Go to "Identity" -> "Users"
  - Move cursor to user e-mail and click icon "edit"
  - Type "test@email:" to e-mail field and click icon "save"

  Expected result:
  - Warning message that e-mail has invalid format

  Actual result:
  - No notifications. Just e-mail is still in edit mode

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567398] [NEW] no warning message for empty username with inline user edit

2016-04-07 Thread Sergei Chipiga
Public bug reported:

Environment:
- upstream

Steps:
- Login as admin
- Go to "Identity" -> "Users"
- Move cursor to username and click icon "edit"
- Clear username field and click icon "save"

Expected result:
- Warning message that username can't be empty

Actual result:
- No notifications. Just username is still in edit mode

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  Environment:
  - upstream
  
  Steps:
  - Login as admin
  - Go to "Identity" -> "Users"
  - Move cursor to username and click icon "edit"
  - Clear username field and click icon "save"
  
  Expected result:
  - Warning message that username can't be empty
  
  Actual result:
- - No notifications. Just username is still edit mode
+ - No notifications. Just username is still in edit mode

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567398

Title:
  no warning message for empty username with inline user edit

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Environment:
  - upstream

  Steps:
  - Login as admin
  - Go to "Identity" -> "Users"
  - Move cursor to username and click icon "edit"
  - Clear username field and click icon "save"

  Expected result:
  - Warning message that username can't be empty

  Actual result:
  - No notifications. Just username is still in edit mode

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567398/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567403] [NEW] Local context cache seems to work improperly

2016-04-07 Thread Dina Belova
Public bug reported:

== Abstract ==

I'm profiling Keystone using the OSprofiler tool and the corresponding
Keystone+OSprofiler integration changes -
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic
:osprofiler-support-in-keystone - currently on review. The idea was to
analyse how Keystone uses the DB/cache layers.

== Expected ==

The local context cache added during Mitaka should cache key-value pairs
for every keystone request, and memoize should read cached values from it
where possible instead of hitting Memcache.

== Observed ==

During the nova boot request the keystone API is hit 3 times, including all
the python client work. If we look, for instance, at the get_domain
function, it is called twice per API call, and both times it goes to
Memcache without using the local cache.

Memcache points can appear in the trace only if memcache is *really*
used, not merely when a lookup is attempted. Also, the
/opt/stack/keystone/keystone/common/cache/_context_cache.py file was
modified to check whether
https://github.com/openstack/keystone/blob/master/keystone/common/cache/_context_cache.py#L78-L80
were called (i.e. whether the local cache was used). Nothing was observed.

** Affects: keystone
 Importance: Undecided
 Status: New

** Attachment added: "server_create.html"
   
https://bugs.launchpad.net/bugs/1567403/+attachment/4627480/+files/server_create.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1567403

Title:
  Local context cache seems to work improperly

Status in OpenStack Identity (keystone):
  New

Bug description:
  == Abstract ==

  I'm profiling Keystone using the OSprofiler tool and the corresponding
  Keystone+OSprofiler integration changes -
  https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic
  :osprofiler-support-in-keystone - currently on review. The idea was to
  analyse how Keystone uses the DB/cache layers.

  == Expected ==

  The local context cache added during Mitaka should cache key-value pairs
  for every keystone request, and memoize should read cached values from it
  where possible instead of hitting Memcache.

  == Observed ==

  During the nova boot request the keystone API is hit 3 times, including
  all the python client work. If we look, for instance, at the get_domain
  function, it is called twice per API call, and both times it goes to
  Memcache without using the local cache.

  Memcache points can appear in the trace only if memcache is *really*
  used, not merely when a lookup is attempted. Also, the
  /opt/stack/keystone/keystone/common/cache/_context_cache.py file was
  modified to check whether
  https://github.com/openstack/keystone/blob/master/keystone/common/cache/_context_cache.py#L78-L80
  were called (i.e. whether the local cache was used). Nothing was observed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1567403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567413] [NEW] Keystone fetches data from Memcache even if caching is explicitly turned off

2016-04-07 Thread Dina Belova
Public bug reported:

== Abstract ==

I'm profiling Keystone using the OSprofiler tool and the corresponding
Keystone+OSprofiler integration changes -
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic
:osprofiler-support-in-keystone - currently on review. The idea was to
analyse how Keystone uses the DB/cache layers.

== Expected vs Observed ==

I'm turning off the cache by setting

[cache]
enabled = False

I'm expecting all data to be fetched from the DB in this case, but I still
see gets from Memcache. I mean *real* gets, not just attempts to grab
values, but real operations happening on values obtained from memcache here:
https://bitbucket.org/zzzeek/dogpile.cache/src/c6913eb143b24b4a886124ff0da5c935ea34e3ac/dogpile/cache/region.py?at=master&fileviewer
=file-view-default#region.py-617

Adding OSprofiler HTML report from token issue API call.

** Affects: keystone
 Importance: Undecided
 Status: New

** Attachment added: "token_issue.html"
   
https://bugs.launchpad.net/bugs/1567413/+attachment/4627498/+files/token_issue.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1567413

Title:
  Keystone fetches data from Memcache even if caching is explicitly
  turned off

Status in OpenStack Identity (keystone):
  New

Bug description:
  == Abstract ==

  I'm profiling Keystone using the OSprofiler tool and the corresponding
  Keystone+OSprofiler integration changes -
  https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic
  :osprofiler-support-in-keystone - currently on review. The idea was to
  analyse how Keystone uses the DB/cache layers.

  == Expected vs Observed ==

  I'm turning off the cache by setting

  [cache]
  enabled = False

  I'm expecting all data to be fetched from the DB in this case, but I still
  see gets from Memcache. I mean *real* gets, not just attempts to grab
  values, but real operations happening on values obtained from memcache
  here:
  https://bitbucket.org/zzzeek/dogpile.cache/src/c6913eb143b24b4a886124ff0da5c935ea34e3ac/dogpile/cache/region.py?at=master&fileviewer
  =file-view-default#region.py-617

  Adding OSprofiler HTML report from token issue API call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1567413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562731] Re: Using LOG.warning replace LOG.warn

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/298168
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d6491516d9e21052526fa069575740acc2f19257
Submitter: Jenkins
Branch: master

commit d6491516d9e21052526fa069575740acc2f19257
Author: Nguyen Phuong An 
Date:   Mon Mar 28 16:23:43 2016 +0700

Using LOG.warning replace LOG.warn

This patch replaces LOG.warn by LOG.warning on

https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/keystone.py#L222
to avoid DeprecationWarning.

Change-Id: I8cd9ea6778b356c3b1f4e0c6e95feb096792c58d
Closes-Bug: #1562731


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562731

Title:
  Using LOG.warning replace LOG.warn

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Python 3 deprecated the logger.warn method, see:
  https://docs.python.org/3/library/logging.html#logging.warning
  so I prefer to use warning to avoid DeprecationWarning on 
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/keystone.py#L222
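
  A minimal illustration (not the Horizon code itself) of the requested
  change - use the non-deprecated method name:

      import logging

      LOG = logging.getLogger(__name__)
      LOG.warning("Unable to retrieve project list.")   # preferred
      # LOG.warn(...) is a deprecated alias and emits a DeprecationWarning on Python 3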

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513267] Re: [SRU] network_data.json not found in openstack/2015-10-15/

2016-04-07 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:12.0.2-0ubuntu1

---
nova (2:12.0.2-0ubuntu1) wily; urgency=medium

  [ James Page ]
  * New upstream stable release (LP: #1559935).
- d/rules: Drop use of proxy discard service, not required and
  causes unit test failures with this update.

  [ Corey Bryant ]
  * d/p/network-data-json-in-configdrive.patch: Cherry pick patch to properly
inject network_data.json into the config drive (LP: #1513267).

 -- Corey Bryant   Tue, 22 Mar 2016 14:24:08
-0400

** Changed in: nova (Ubuntu Wily)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513267

Title:
  [SRU] network_data.json not found in openstack/2015-10-15/

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive liberty series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in nova source package in Wily:
  Fix Released

Bug description:
  [Impact]
  The file "network_data.json" is not found in the folder 
"openstack/2015-10-15/" of config drive.

  The result is that network_data metadata doesn't work on Liberty.

  [Testcase]
  On liberty, launch an instance with a configuration drive:
  e.g. nova boot --config-drive=true

  The network_data.json will be available in the metadata.

  Inside the instance, this should be expected to provide output:
$ sudo isoinfo -i /dev/sr0 -R -J -l | grep network_data.json

  Or
$ mkdir mp; sudo mount /dev/sr0 mp; find mp | grep network_data.json

  [Regression]
  The regression potential is minimal.  The fix has already landed in Mitaka
  and the cherry-picked patch required no code changes for Liberty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1513267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548433] Re: neutron returns objects other than oslo_config.cfg.Opt instances from list_opts

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/292640
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c3db0707eff70f381913643891ba4e148977407d
Submitter: Jenkins
Branch: master

commit c3db0707eff70f381913643891ba4e148977407d
Author: Jamie Lennox 
Date:   Tue Mar 15 10:05:29 2016 +1100

Return oslo_config Opts to config generator

We shouldn't be returning keystoneauth Opts to the oslo_config
generator. Whilst it mostly works, these objects are not interchangeable
and it can result in problems. You can see this by entries such as:

  # Warning: Failed to format sample for tenant_name
  # isinstance() arg 2 must be a class, type, or tuple of classes
and types

in the currently generated config files.

Keystoneauth provides a function that returns oslo_config options so
fetch, process and return those instead.

Change-Id: Ie3fad2381467b19189cbb332c41cea8b6cf6e264
Closes-Bug: #1548433


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548433

Title:
  neutron returns objects other than oslo_config.cfg.Opt instances from
  list_opts

Status in keystoneauth:
  Incomplete
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The neutron function for listing options for use with the
  configuration generator returns things that are not compliant with the
  oslo_config.cfg.Opt class API. At the very least this includes the
  options from keystoneauth1, but I haven't looked to find if there are
  others.

  We'll work around this for now in the configuration generator code,
  but in the future we will more strictly enforce the API compliance by
  refusing to generate a configuration file or by leaving options out of
  the output.

  The change blocked by this issue is:
  https://review.openstack.org/#/c/282435/5

  One failure log showing the issue is:
  http://logs.openstack.org/35/282435/5/check/gate-tempest-dsvm-neutron-
  src-oslo.config/77044c6/logs/devstacklog.txt.gz

  The neutron code triggering the issue is in:
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/opts.py#n279

  The best solution would be to fix keystoneauth to support option
  discovery natively using proper oslo.config Opts.
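
  A hedged sketch (not the actual neutron opts.py) of the approach taken in
  the commit above: ask keystoneauth for real oslo_config Opt objects instead
  of handing keystoneauth's own option objects to the sample generator; the
  group name and plugin name below are assumptions:

      from keystoneauth1 import loading as ks_loading

      def list_nova_auth_opts():
          opts = ks_loading.get_auth_common_conf_options()
          opts += ks_loading.get_auth_plugin_conf_options('password')
          return [('nova', opts)]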

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystoneauth/+bug/1548433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567434] [NEW] new hypervisor should appear with available resources set to 0

2016-04-07 Thread Vasyl Saienko
Public bug reported:

If Nova tries to check resources on a newly added ironic hypervisor during
instance spawning, while its resources are not updated yet, the instance
build fails with the following error:

http://paste.openstack.org/show/493321/
http://paste.openstack.org/show/493322/

The following operation fails, since compute.free_disk_gb is None:

self.free_disk_mb = compute.free_disk_gb * 1024

It was reproduced on the Liberty nova code.
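
A hedged sketch (not the actual Nova host manager code) of the behaviour the
title asks for - treat not-yet-reported resources as 0 instead of crashing:

    def free_disk_mb_from(compute_node):
        free_disk_gb = getattr(compute_node, 'free_disk_gb', None)
        if free_disk_gb is None:        # new ironic hypervisor not updated yet
            free_disk_gb = 0
        return free_disk_gb * 1024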

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- If nova tries to check resources on newly added ironic hypervisor, where 
resources are not updated yet.
- Instance build failed with the following error:
+ If Nova tries to check resources on newly added ironic hypervisor during
+ instance spawning, where resources are not updated yet, instance build
+ failed with the following error:
  
  http://paste.openstack.org/show/493321/
  http://paste.openstack.org/show/493322/
  
  The following operation is failed, since free_disk_space = None.
  
  self.free_disk_mb = compute.free_disk_gb * 1024
  
  It was reproduced on Liberty nova code.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567434

Title:
  new hypervisor should appear with available resources set to 0

Status in OpenStack Compute (nova):
  New

Bug description:
  If Nova tries to check resources on newly added ironic hypervisor
  during instance spawning, where resources are not updated yet,
  instance build failed with the following error:

  http://paste.openstack.org/show/493321/
  http://paste.openstack.org/show/493322/

  The following operation fails, since free_disk_gb is None:

  self.free_disk_mb = compute.free_disk_gb * 1024

  It was reproduced on Liberty nova code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567295] Re: tox -e py27 is failing

2016-04-07 Thread Ihar Hrachyshka
New fixtures release broke all neutron repos.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Importance: Critical => High

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567295

Title:
  tox -e py27 is failing

Status in networking-l2gw:
  In Progress
Status in neutron:
  In Progress

Bug description:
  unit test cases were failing due to the error below:

  
networking_l2gw.tests.unit.services.l2gateway.service_drivers.test_rpc_l2gw.TestL2gwRpcDriver.test_validate_connection
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"networking_l2gw/tests/unit/services/l2gateway/service_drivers/test_rpc_l2gw.py",
 line 42, in setUp
  self.plugin = rpc_l2gw.L2gwRpcDriver(self.service_plugin)
File "networking_l2gw/services/l2gateway/service_drivers/rpc_l2gw.py", 
line 62, in __init__
  self.conn.consume_in_threads()
  TypeError: fake_consume_in_threads() takes exactly 1 argument (0 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-l2gw/+bug/1567295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567446] [NEW] Utilizing Role Base Access Control for managing Multi-tenancy

2016-04-07 Thread Adam Young
Public bug reported:

After creating a new project and allocating some amount of resources, we
should be able to create a hierarchy of users like Project Manager (PM)
having complete view of the project usage, then PM should be able to
allocate resources to different sub-teams (like Dev, QA, Prod, etc),
each sub-team leads having access to their allocated resources and able
to manage the resources at their level with approval from the PM. All
the users will be AD authenticated ones.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1567446

Title:
  Utilizing Role Base Access Control for managing Multi-tenancy

Status in OpenStack Identity (keystone):
  New

Bug description:
  After creating a new project and allocating some amount of resources,
  we should be able to create a hierarchy of users like Project Manager
  (PM) having complete view of the project usage, then PM should be able
  to allocate resources to different sub-teams (like Dev, QA, Prod,
  etc), each sub-team leads having access to their allocated resources
  and able to manage the resources at their level with approval from the
  PM. All the users will be AD authenticated ones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1567446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567295] Re: tox -e py27 is failing

2016-04-07 Thread Ihar Hrachyshka
Nah, Kilo is not affected. It affects Liberty+ only because in Kilo the
version for fixtures is capped to <1.3.x.

** Tags added: liberty-backport-potential mitaka-backport-potential

** Changed in: networking-l2gw
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567295

Title:
  tox -e py27 is failing

Status in networking-l2gw:
  Invalid
Status in neutron:
  In Progress

Bug description:
  unit test cases were failing due to the error below:

  
networking_l2gw.tests.unit.services.l2gateway.service_drivers.test_rpc_l2gw.TestL2gwRpcDriver.test_validate_connection
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"networking_l2gw/tests/unit/services/l2gateway/service_drivers/test_rpc_l2gw.py",
 line 42, in setUp
  self.plugin = rpc_l2gw.L2gwRpcDriver(self.service_plugin)
File "networking_l2gw/services/l2gateway/service_drivers/rpc_l2gw.py", 
line 62, in __init__
  self.conn.consume_in_threads()
  TypeError: fake_consume_in_threads() takes exactly 1 argument (0 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-l2gw/+bug/1567295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567461] [NEW] Possible race when allocating local port for serial console

2016-04-07 Thread sahid
Public bug reported:

Nova binds a port to verify its availability, but because it closes the
socket immediately afterwards, another instance can have tested that same
port in the meantime, so the method can return the same port for two
different instances.

We should not let that situation happen.
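
A minimal sketch of the race, and of one way to narrow the window by keeping
the socket bound and handing it over instead of returning just the number;
this is illustrative only, not the nova console code:

import socket

def racy_pick_free_port(host='127.0.0.1'):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, 0))
    port = sock.getsockname()[1]
    sock.close()   # window opens here: another process may now bind 'port'
    return port

def pick_free_socket(host='127.0.0.1'):
    # Keep the socket bound and pass it to the consumer, so nothing else
    # can grab the port between the check and the actual use.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, 0))
    return sock, sock.getsockname()[1]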

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: console

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567461

Title:
  Possible race when allocating local port for serial console

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Nova binds a port to verify its availability, but because it closes the
  socket immediately afterwards, another instance can have tested that same
  port in the meantime, so the method can return the same port for two
  different instances.

  We should not let that situation happen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567472] [NEW] net_helpers.get_free_namespace_port can return used ports

2016-04-07 Thread Assaf Muller
Public bug reported:

Here's a simplification of 'get_free_namespace_port':

output = ip_wrapper.netns.execute(['ss', param])
used_ports = _get_source_ports_from_ss_output(output)  # Parses 'ss' output and 
gets all used ports, this is the problematic part
return get_unused_port(used_ports)

Here's a demonstration:
output = ip_wrapper.netns.execute(['ss', param])
print output
State  Recv-Q Send-QLocal Address:Port  Peer Address:Port 
LISTEN 0  10127.0.0.1:6640 *:* 
LISTEN 0  128   *:46675*:* 
LISTEN 0  128   *:22   *:* 
LISTEN 0  128   *:5432 *:* 
LISTEN 0  128   *:3260 *:* 
LISTEN 0  50*:3306 *:* 
ESTAB  0  36   10.0.0.202:22   10.0.0.44:45258 
ESTAB  0  0 127.0.0.1:32965127.0.0.1:4369  
ESTAB  0  010.0.0.202:22   10.0.0.44:36104 
LISTEN 0  128  :::80  :::* 
LISTEN 0  128  :::4369:::* 
LISTEN 0  128  :::22  :::* 
LISTEN 0  128  :::5432:::* 
LISTEN 0  128  :::3260:::* 
LISTEN 0  128  :::5672:::* 
ESTAB  0  0  :::127.0.0.1:4369  :::127.0.0.1:32965

used = net_helpers._get_source_ports_from_ss_output(output)
print used
 {'22', '3260', '32965', '4369', '5432', '5672', '80'}

You can see it returned '3260' but not '3306'.

This bug can impact how fullstack picks which free ports to use for
neutron-server and neutron-openvswitch-agent.
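
For the sample output above, a stricter parse that keys off the
"Local Address:Port" column would pick up every listening and established
local port, including 3306. This is only a hedged sketch; the real helper in
net_helpers may be organised differently:

import re

def get_source_ports_from_ss_output(output):
    ports = set()
    for line in output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) < 5:
            continue
        # fields[3] is the "Local Address:Port" column in the sample above.
        match = re.search(r':(\d+)$', fields[3])
        if match:
            ports.add(match.group(1))
    return ports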

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567472

Title:
  net_helpers.get_free_namespace_port can return used ports

Status in neutron:
  New

Bug description:
  Here's a simplification of 'get_free_namespace_port':

  output = ip_wrapper.netns.execute(['ss', param])
  used_ports = _get_source_ports_from_ss_output(output)  # Parses 'ss' output 
and gets all used ports, this is the problematic part
  return get_unused_port(used_ports)

  Here's a demonstration:
  output = ip_wrapper.netns.execute(['ss', param])
  print output
  State  Recv-Q Send-QLocal Address:Port  Peer Address:Port 
  LISTEN 0  10127.0.0.1:6640 *:*
 
  LISTEN 0  128   *:46675*:*
 
  LISTEN 0  128   *:22   *:*
 
  LISTEN 0  128   *:5432 *:*
 
  LISTEN 0  128   *:3260 *:*
 
  LISTEN 0  50*:3306 *:*
 
  ESTAB  0  36   10.0.0.202:22   
10.0.0.44:45258 
  ESTAB  0  0 127.0.0.1:32965127.0.0.1:4369 
 
  ESTAB  0  010.0.0.202:22   
10.0.0.44:36104 
  LISTEN 0  128  :::80  :::*
 
  LISTEN 0  128  :::4369:::*
 
  LISTEN 0  128  :::22  :::*
 
  LISTEN 0  128  :::5432:::*
 
  LISTEN 0  128  :::3260:::*
 
  LISTEN 0  128  :::5672:::*
 
  ESTAB  0  0  :::127.0.0.1:4369  :::127.0.0.1:32965

  used = net_helpers._get_source_ports_from_ss_output(output)
  print used
   {'22', '3260', '32965', '4369', '5432', '5672', '80'}

  You can see it returned '3260' but not '3306'.

  This bug can impact how fullstack picks which free ports to use for
  neutron-server and neutron-openvswitch-agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554631] Fix merged to nova (master)

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/290550
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=99de5fd3d9d9a53e1c6e9c185201110933db668e
Submitter: Jenkins
Branch:master

commit 99de5fd3d9d9a53e1c6e9c185201110933db668e
Author: Ryan Rossiter 
Date:   Fri Mar 11 21:09:13 2016 +

Translate OverLimit exceptions in Cinder calls

The cinder wrapper on all cinder API calls can check for the cinder
client returning OverLimit, so it can get correctly translated to
OverQuota. The OverQuota is different in volumes vs. snapshots, so they
need to be separated out into the different wrappers. But also, because
in snapshot creations, we need to catch a NotFound as a VolumeNotFound
and an OverLimit as an OverQuota for snapshots, we need to make a new
wrapper that mixes those two together for when we create snapshots.

Change-Id: Ia03f15232df71ca9a31ffbcca60f33949312a686
Partial-Bug: #1554631
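
A minimal sketch of the translation pattern the commit describes; the
decorator name is illustrative and the real wrappers in nova/volume/cinder.py
differ in detail:

from cinderclient import exceptions as cinder_exception
from nova import exception

def translate_volume_exception(method):
    def wrapper(self, context, volume_id, *args, **kwargs):
        try:
            return method(self, context, volume_id, *args, **kwargs)
        except cinder_exception.NotFound:
            raise exception.VolumeNotFound(volume_id=volume_id)
        except cinder_exception.OverLimit:
            # Translated so the REST layer can map it to a proper client
            # error instead of a 500.
            raise exception.OverQuota(overs='volumes')
    return wrapper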


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554631

Title:
  Cinder exceptions returned from nova rest api as 500 errors

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the nova volume API makes calls into the Cinder API using
  cinderclient, if cinder raises an exception like Forbidden or
  OverLimit, the nova volume api does not catch these exceptions. So
  they go up to the nova rest api, resulting in a 500 to be returned.

  Here's an example from a tempest test:

  Traceback (most recent call last):
File 
"/home/ubuntu/tempest/tempest/api/compute/volumes/test_volumes_get.py", line 
51, in test_volume_create_get_delete
  metadata=metadata)['volume']
File "/home/ubuntu/tempest/tempest/lib/services/compute/volumes_client.py", 
line 55, in create_volume
  resp, body = self.post('os-volumes', post_body)
File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 259, in 
post
  return self.request('POST', url, extra_headers, headers, body)
File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 642, in 
request
  resp, resp_body)
File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 761, in 
_error_checker
  message=message)
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  

  The volume API needs to wrap these exceptions and return the nova
  equivalent to the rest API so the appropriate return code can be
  returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562110] Re: link-local-address allocator for DVR has a limit of 256 address pairs per node

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/297839
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7b1b8c2de57457c2ec1ed784165a3e10e24151cf
Submitter: Jenkins
Branch:master

commit 7b1b8c2de57457c2ec1ed784165a3e10e24151cf
Author: Swaminathan Vasudevan 
Date:   Fri Mar 25 12:38:13 2016 -0700

DVR: Increase the link-local address pair range

The current dvr_fip_ns.py file has FIP_LL_SUBNET configured
with a subnet prefixlen of /23 which only allows 255 pairs of
link-local addresses to be generated. If the number of routers
per-node increases beyond the 255 limit it raises an assertion.

This patch increases the link-local address cidr to be a /18
to allow for 8K routers. The new range was chosen to not
overlap with the original, allowing for in-place upgrades
without affecting existing routers.

Closes-Bug: #1562110
Change-Id: I6e11622ea9cc74b1d2428757f16aa0de504ac31a
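
A quick back-of-the-envelope check of the capacity figures mentioned above
(ignoring any reserved addresses in the range):

def router_pairs(prefixlen):
    addresses = 2 ** (32 - prefixlen)
    return addresses // 2   # each DVR router consumes a pair of link-local IPs

print(router_pairs(23))   # 256  -> the old /23 ceiling of ~256 routers per node
print(router_pairs(18))   # 8192 -> the new /18 allows ~8K routers per node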


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562110

Title:
  link-local-address allocator for DVR has a limit of 256 address pairs
  per node

Status in neutron:
  Fix Released

Bug description:
  The current 'link-local-address' allocator for DVR routers has a limit
  of 256 routers per node.

  This should be configurable and not just limited to 256 routers per
  node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567497] [NEW] resource_versions in agents state reports led to performance degradation

2016-04-07 Thread Oleg Bondarev
Public bug reported:

resource_versions were recently included in agent state reports to support
rolling upgrades (commit 97a272a892fcf488949eeec4959156618caccae8).
The downside is that this brought additional processing when handling state
reports on the server side: an update of the local resource versions cache
and, more seriously, RPC casts to all other servers to do the same.

All this led to a visible performance degradation at scale with hundreds
of agents constantly sending reports. Under load (rally test) agents may
start "blinking", which makes the cluster very unstable.

We need to optimize agent notifications about resource_versions.

** Affects: neutron
 Importance: High
 Assignee: Oleg Bondarev (obondarev)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567497

Title:
  resource_versions in agents state reports led to performance
  degradation

Status in neutron:
  In Progress

Bug description:
  resource_versions were recently included in agent state reports to support
  rolling upgrades (commit 97a272a892fcf488949eeec4959156618caccae8).
  The downside is that this brought additional processing when handling state
  reports on the server side: an update of the local resource versions cache
  and, more seriously, RPC casts to all other servers to do the same.

  All this led to a visible performance degradation at scale with
  hundreds of agents constantly sending reports. Under load (rally test)
  agents may start "blinking", which makes the cluster very unstable.

  We need to optimize agent notifications about resource_versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567502] [NEW] [ml2]/physical_network_mtus improperly handled

2016-04-07 Thread Vladimir Eremin
Public bug reported:

We're using [ml2]/physical_network_mtus to specify mappings like this
one:

[ml2]
physical_network_mtus = physnet1:1500,physnet2:1500

In this try/except
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L37:

try:
self.physnet_mtus = utils.parse_mappings(
cfg.CONF.ml2.physical_network_mtus
)
except Exception:
self.physnet_mtus = []

if you specify the option above, neutron.common.utils.parse_mappings
fails with:

ValueError: Value 1500 in mapping: 'physnet2:1500' not unique

That's because you need to call neutron.common.utils.parse_mappings with
unique_values=False
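
A sketch of the corrected call based on the description above; this is
illustrative, and the actual fix in helpers.py may be structured differently:

from neutron.common import utils
from oslo_config import cfg

def _parse_physnet_mtus(conf=cfg.CONF):
    try:
        # MTU values may legitimately repeat across physnets, so do not
        # require the mapping values to be unique.
        return utils.parse_mappings(conf.ml2.physical_network_mtus,
                                    unique_values=False)
    except Exception:
        return {}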

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567502

Title:
  [ml2]/physical_network_mtus improperly handled

Status in neutron:
  New

Bug description:
  We're using [ml2]/physical_network_mtus to specify mappings like this
  one:

  [ml2]
  physical_network_mtus = physnet1:1500,physnet2:1500

  In this try/except
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L37:

  try:
  self.physnet_mtus = utils.parse_mappings(
  cfg.CONF.ml2.physical_network_mtus
  )
  except Exception:
  self.physnet_mtus = []

  if you specify the option above, neutron.common.utils.parse_mappings
  fails with:

  ValueError: Value 1500 in mapping: 'physnet2:1500' not unique

  That's because you need to call neutron.common.utils.parse_mappings
  with unique_values=False

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567506] [NEW] User's email is deleted by repeatedly clicking the confirm button

2016-04-07 Thread Erik Neeley
Public bug reported:

Environment: upstream

Steps:
1) Login under admin
2) Click Identity -> Users
3) Hover over a user email and click on the edit icon
4) Repeatedly and quickly click the confirm (blue checkmark) button

Expected Result: Email change is accepted if valid or rejected if
invalid

Actual Result: Email address is deleted

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  Environment: upstream
  
  Steps:
  1) Login under admin
  2) Click Identity -> Users
  3) Hover over a user email and click on the edit icon
- 4) Repeatedly click the confirm (blue checkmark) button
+ 4) Repeatedly and quickly click the confirm (blue checkmark) button
  
  Expected Result: Email change is accepted if valid or rejected if
  invalid
  
  Actual Result: Email address is deleted

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567506

Title:
  User's email is deleted by repeatedly clicking the confirm button

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Environment: upstream

  Steps:
  1) Login under admin
  2) Click Identity -> Users
  3) Hover over a user email and click on the edit icon
  4) Repeatedly and quickly click the confirm (blue checkmark) button

  Expected Result: Email change is accepted if valid or rejected if
  invalid

  Actual Result: Email address is deleted

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567507] [NEW] neutron-lbaas broken with neutron change

2016-04-07 Thread Rabi Mishra
Public bug reported:

It seems the recent change
https://github.com/openstack/neutron/commit/34a328fe12950c339b8259451262470c627f2f00
has broken neutron-lbaas.

Hence all dependent projects are broken, with the error below showing up in q-lbaas.

2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-0a3a7771-0f1e-4424-9b96-0b7613cc1c82 demo -] Create vip 
7c347fc8-c282-4231-aa1c-e23a0d180abb failed on device driver haproxy_ns
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 227, in create_vip
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 348, in create_vip
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 344, in _refresh_device
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager if not 
self.deploy_instance(logical_config) and self.exists(pool_id):
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager return f(*args, 
**kwargs)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 337, in deploy_instance
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.create(logical_config)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 92, in create
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
logical_config['vip']['address'])
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 247, in _plug
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.plugin_rpc.plug_vip_port(port['id'])
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_api.py",
 line 58, in plug_vip_port
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager host=self.host)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=self.retry)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 470, in send
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=retry)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 461, in _send
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager raise result
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager AttributeError: 'str' 
object has no attribute 'strftime'
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.servi

[Yahoo-eng-team] [Bug 1567295] Re: tox -e py27 is failing

2016-04-07 Thread Armando Migliaccio
** No longer affects: networking-l2gw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567295

Title:
  tox -e py27 is failing

Status in neutron:
  In Progress

Bug description:
  unit test cases were failing due to the error below:

  
networking_l2gw.tests.unit.services.l2gateway.service_drivers.test_rpc_l2gw.TestL2gwRpcDriver.test_validate_connection
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"networking_l2gw/tests/unit/services/l2gateway/service_drivers/test_rpc_l2gw.py",
 line 42, in setUp
  self.plugin = rpc_l2gw.L2gwRpcDriver(self.service_plugin)
File "networking_l2gw/services/l2gateway/service_drivers/rpc_l2gw.py", 
line 62, in __init__
  self.conn.consume_in_threads()
  TypeError: fake_consume_in_threads() takes exactly 1 argument (0 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567023] Re: test_keepalived_respawns* functional tests raceful

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302421
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1e334e8fe1db43585154d525c0c1599ac3e2a329
Submitter: Jenkins
Branch:master

commit 1e334e8fe1db43585154d525c0c1599ac3e2a329
Author: Assaf Muller 
Date:   Wed Apr 6 14:31:24 2016 -0400

Fix keepalived functional tests

Running the tests locally I'm seeing:
http://paste.openstack.org/show/493222/

The issue in all three tests is that the keepalived manager
process spawns asynchronously (sudo -> root_helper -> keepalived).

For the 'spawns' test, it was asserting that process.alive was
True when it should have used wait_until_true.

For the respawns* tests, the test grabs process.pid before the process
has necessarily spawned. Moving process.pid to after the wait_until_true(
process.active) loop guarantees that pid will return keepalived's
process. Otherwise the 'pid' variable will be None, then when we
use utils.execute we try to kill the 'None' pid, and that doesn't seem
to work.

Change-Id: Ie77d406eaaf7f77edd4f598947999be4adf3d249
Closes-Bug: #1567023
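
The pattern the commit describes, as a hedged sketch; utils.wait_until_true
is neutron's polling helper, and the surrounding names are illustrative:

from neutron.common import utils

def _get_keepalived_pid(process):
    # Poll instead of asserting immediately: the spawn goes through
    # sudo/root_helper and completes asynchronously.
    utils.wait_until_true(lambda: process.active, timeout=5, sleep=0.01)
    # Only read the pid once the process is known to be up; otherwise it
    # can still be None and the later kill becomes a no-op.
    return process.pid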


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567023

Title:
  test_keepalived_respawns* functional tests raceful

Status in neutron:
  Fix Released

Bug description:
  Running just KeepalivedManagerTestCase locally I see the following
  three tests fail frequently:

  test_keepalived_spawn
  test_keepalived_respawns
  test_keepalived_respawn_with_unexpected_exit

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567529] [NEW] Disabled neutron quotas break the "Create subnet" button

2016-04-07 Thread Paul Karikh
Public bug reported:

If we set OPENSTACK_NEUTRON_NETWORK = {'enable_quotas': False}, the "Create
subnet" button disappears and we get the following error:

Error while checking action permissions.
Traceback (most recent call last):
  File "horizon/horizon/tables/base.py", line 1278, in _filter_action
return action._allowed(request, datum) and row_matched
  File "horizon/horizon/tables/actions.py", line 136, in _allowed
self.allowed(request, datum))
  File 
"horizon/openstack_dashboard/dashboards/project/networks/subnets/tables.py", 
line 103, in allowed
if usages['subnets']['available'] <= 0:
KeyError: 'available'
Error while checking action permissions.

Steps to reproduce:
1) Set 'enable_quotas': False in OPENSTACK_NEUTRON_NETWORK
2) Restart Horizon
3) Go to Project/Networks
4) Click on any network
5) Note that you've got a "Delete" button, but there is no "Create" button
6) Enable neutron quotas again and make sure that "Create" button appeared


There was already a patch [1] fixing this, but it looks like this piece of
code was missed.

[1]
https://github.com/openstack/horizon/commit/5ba219acbfddb9b9308a75da361fda915ba61f37
#diff-6f73ca5e50c3e694b8147669832912baR2152
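
A hedged sketch of the kind of defensive check that avoids the KeyError when
quotas are disabled; tenant_quota_usages is the Horizon helper used on that
page, the usages structure is treated as a plain nested dict for illustration,
and the actual fix may differ:

from openstack_dashboard.usage import quotas

def create_subnet_allowed(request):
    usages = quotas.tenant_quota_usages(request)
    # With neutron quotas disabled the 'available' key may be absent,
    # so treat "unknown" as "not limited" instead of raising KeyError.
    available = usages.get('subnets', {}).get('available')
    return available is None or available > 0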

** Affects: horizon
 Importance: Undecided
 Assignee: Paul Karikh (pkarikh)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Paul Karikh (pkarikh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567529

Title:
  Disabled neutron quotas break the "Create subnet" button

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  If we set OPENSTACK_NEUTRON_NETWORK = {'enable_quotas': False}, the "Create
  subnet" button disappears and we get the following error:

  Error while checking action permissions.
  Traceback (most recent call last):
File "horizon/horizon/tables/base.py", line 1278, in _filter_action
  return action._allowed(request, datum) and row_matched
File "horizon/horizon/tables/actions.py", line 136, in _allowed
  self.allowed(request, datum))
File 
"horizon/openstack_dashboard/dashboards/project/networks/subnets/tables.py", 
line 103, in allowed
  if usages['subnets']['available'] <= 0:
  KeyError: 'available'
  Error while checking action permissions.

  Steps to reproduce:
  1) Set 'enable_quotas': False in OPENSTACK_NEUTRON_NETWORK
  2) Restart Horizon
  3) Go to Project/Networks
  4) Click on any network
  5) Note that you've got a "Delete" button, but there is no "Create" button
  6) Enable neutron quotas again and make sure that "Create" button appeared

  
  There was already a patch [1] fixing this, but it looks like this piece of
  code was missed.

  [1]
  
https://github.com/openstack/horizon/commit/5ba219acbfddb9b9308a75da361fda915ba61f37
  #diff-6f73ca5e50c3e694b8147669832912baR2152

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565028] Re: Neutron port detach isn't detected by nova event handler

2016-04-07 Thread Fahri Cihan Demirci
** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565028

Title:
  Neutron port detach isn't detected by nova event handler

Status in OpenStack Compute (nova):
  Opinion
Status in OpenStack Search (Searchlight):
  New

Bug description:
  For reasons no longer clear to me, the nova event handler listens for
  neutron port.create.end events (possibly because we receive the nova
  creation events, but port attachment follows later?). It doesn't
  capture explicit attach/detach port events.

  For example:

neutron port-create test-net  # id is ee486fc1-0919-4109-9990-a2f21b25fec7
nova interface-attach server-1 --port-id 
ee486fc1-0919-4109-9990-a2f21b25fec7

  The events received are port.update.end. For attaching, the
  device_owner and device_id are changed, although there's no indication
  of the reason for the update event.

  For detach it's even worse because we don't even know what the port
  was previously attached to. Not sure here what the right answer is; a
  detach/attach event would be ideal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1565028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567253] Re: Should be space in between two words

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302568
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=09a44827b6b794dd46eaef6200a2efd1b048
Submitter: Jenkins
Branch:master

commit 09a44827b6b794dd46eaef6200a2efd1b048
Author: bpankaj 
Date:   Thu Apr 7 13:07:26 2016 +0530

Given space in between two words.

Change-Id: Ie0df243669568b6bb89f30fb5f93b4f64b5a1230
Closes-Bug: #1567253


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1567253

Title:
  Should be space in between two words

Status in Glance:
  Fix Released

Bug description:
  There is no space in between two words.

  pankaj@pankaj-VirtualBox:~/DevStack/devstack$ glance md-tag-create --name ab 
new-ns
  404 Not Found: Metadata definition namespace=new-nswas not found. (HTTP 404).

  Expected result: should be space in between two word.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1567253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567549] [NEW] SR-IOV VF passthrough does not properly update status of parent PF upon freeing VF

2016-04-07 Thread Nikola Đipanov
Public bug reported:

Assigning an SR-IOV VF device to an instance when PFs are whitelisted
too correctly marks the PF as unavailable if one of its VFs got
assigned. However, when we delete the instance, the PF is not marked as
available.

Steps to reproduce:

1) Whitelist PFs and VFs in nova.conf (as explained in the docs) for
example

pci_passthrough_whitelist = [{"product_id":"1520",
"vendor_id":"8086", "physical_network":"phynet"}, {"product_id":"1521",
"vendor_id":"8086", "physical_network":"phynet"}] # Both pfs and vfs are
whitelisted

2) Add an alias to assign a VF pci_alias = {"name": "vf", "device_type": 
"type-VF"}
3) Set up a flavor with an alias extra_spec

$ nova flavor-key 2 set "pci_passthrough:alias"="vf:1"

4) Boot an instance with the said flavor and observe a VF being set to
'allocated' and a PF being set to 'unavailable'

select * from pci_devices where deleted=0;


5) Delete the instance from step 4 and observe that the VF has been made 
available but the PF is still 'unavailable'. Both should be back to available 
if this was the only VF used.

** Affects: nova
 Importance: High
 Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Importance: Medium => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567549

Title:
  SR-IOV VF passthrough does not properly update status of parent PF
  upon freeing VF

Status in OpenStack Compute (nova):
  New

Bug description:
  Assigning an SR-IOV VF device to an instance when PFs are whitelisted
  too correctly marks the PF as unavailable if one of its VFs got
  assigned. However, when we delete the instance, the PF is not marked as
  available.

  Steps to reproduce:

  1) Whitelist PFs and VFs in nova.conf (as explained in the docs) for
  example

  pci_passthrough_whitelist = [{"product_id":"1520",
  "vendor_id":"8086", "physical_network":"phynet"},
  {"product_id":"1521", "vendor_id":"8086",
  "physical_network":"phynet"}] # Both pfs and vfs are whitelisted

  2) Add an alias to assign a VF pci_alias = {"name": "vf", "device_type": 
"type-VF"}
  3) Set up a flavor with an alias extra_spec

  $ nova flavor-key 2 set "pci_passthrough:alias"="vf:1"

  4) Boot an instance with the said flavor and observe a VF being set to
  'allocated' and a PF being set to 'unavailable'

  select * from pci_devices where deleted=0;

  
  5) Delete the instance from step 4 and observe that the VF has been made 
available but the PF is still 'unavailable'. Both should be back to available 
if this was the only VF used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567549/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567608] [NEW] neutron.tests.functional.agent.windows.test_ip_lib.IpLibTestCase.test_ipwrapper_get_device_by_ip_None unstable

2016-04-07 Thread Assaf Muller
Public bug reported:

Logstash query:
build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"ValueError: You must specify a valid interface name."

4 matches in the last 7 days.

TRACE example:
http://paste.openstack.org/show/493396/

** Affects: neutron
 Importance: Low
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567608

Title:
  
neutron.tests.functional.agent.windows.test_ip_lib.IpLibTestCase.test_ipwrapper_get_device_by_ip_None
  unstable

Status in neutron:
  New

Bug description:
  Logstash query:
  build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"ValueError: You must specify a valid interface name."

  4 matches in the last 7 days.

  TRACE example:
  http://paste.openstack.org/show/493396/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567295] Re: tox -e py27 is failing due to fixtures 2.0.0 release

2016-04-07 Thread Matt Riedemann
I'm not sure why mitaka and liberty backport potential are in here, we
shouldn't be using fixtures 2.0.0 in mitaka or liberty because it's not
used in upper-constraints on those branches.

** Summary changed:

- tox -e py27 is failing 
+ tox -e py27 is failing due to fixtures 2.0.0 release

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567295

Title:
  tox -e py27 is failing due to fixtures 2.0.0 release

Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  New

Bug description:
  unit test cases were failing due to the error below:

  
networking_l2gw.tests.unit.services.l2gateway.service_drivers.test_rpc_l2gw.TestL2gwRpcDriver.test_validate_connection
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"networking_l2gw/tests/unit/services/l2gateway/service_drivers/test_rpc_l2gw.py",
 line 42, in setUp
  self.plugin = rpc_l2gw.L2gwRpcDriver(self.service_plugin)
File "networking_l2gw/services/l2gateway/service_drivers/rpc_l2gw.py", 
line 62, in __init__
  self.conn.consume_in_threads()
  TypeError: fake_consume_in_threads() takes exactly 1 argument (0 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535231] Re: md-meta with case-insensitive string has a problem when creating

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302652
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=385ffab06f1657d10a8a7284bb83945d236bd6d7
Submitter: Jenkins
Branch:master

commit 385ffab06f1657d10a8a7284bb83945d236bd6d7
Author: Pankaj Mishra 
Date:   Thu Apr 7 15:56:22 2016 +0530

Modified message of exception and log

Metadata tag names are case insensitive, that confuses
the user. So we added additional information about metadata
tag duplication to clarify the point.

Change-Id: Ib58a9d0b9cc95a831981de0cc19456f0c6713dbb
Closes-Bug: #1535231


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1535231

Title:
  md-meta with case-insensitive string has a problem when creating

Status in Glance:
  Fix Released

Bug description:
  [Summary]
  md-meta with a case-insensitive name has a problem when creating

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  can create case sensitive md-meta

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) there is a md-tag named "ab" for namespace "new-ns":
  stack@45-59:~/devstack$ glance md-tag-create --name ab new-ns
  ++--+
  | Property   | Value|
  ++--+
  | created_at | 2016-01-18T16:36:13Z |
  | name   | ab   |
  | updated_at | 2016-01-18T16:36:13Z |
  ++--+
  stack@45-59:~/devstack$ glance md-tag-list new-ns
  +--+
  | name |
  +--+
  | ab   |
  +--+

  
  2) if we create a new md-tag named "AB", a conflict occurs:   >>>ISSUE
  stack@45-59:~/devstack$ glance md-tag-create --name AB new-ns
  409 Conflict: A metadata tag with name=AB already exists in namespace=new-ns.
  stack@45-59:~/devstack$ 

  3)but if there is no md-tag "ab", the md-tag "AB" can be created.
  stack@45-59:~/devstack$ glance md-tag-delete new-ns ab
  stack@45-59:~/devstack$ 
  stack@45-59:~/devstack$ glance md-tag-list new-ns
  +--+
  | name |
  +--+
  +--+
  stack@45-59:~/devstack$ glance md-tag-create --name AB new-ns
  ++--+
  | Property   | Value|
  ++--+
  | created_at | 2016-01-18T16:37:20Z |
  | name   | AB   |
  | updated_at | 2016-01-18T16:37:20Z |
  ++--+

  [Configuration]
  reproducible bug, no need

  [logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1535231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567613] [NEW] Functional tests logging configured incorrectly

2016-04-07 Thread Assaf Muller
Public bug reported:

Functional tests output per-test logs produced by the test runner
processes to /tmp/dsvm-functional-logs, and those files are then copied
so they're accessible when viewing logs produced by CI runs. However,
logging seems to be set incorrectly and most of the files are empty
whereas they didn't used to be.

This makes troubleshooting other functional tests CI failures more
difficult than it needs to be.

** Affects: neutron
 Importance: High
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567613

Title:
  Functional tests logging configured incorrectly

Status in neutron:
  New

Bug description:
  Functional tests output per-test logs produced by the test runner
  processes to /tmp/dsvm-functional-logs, and those files are then
  copied so they're accessible when viewing logs produced by CI runs.
  However, logging seems to be set incorrectly and most of the files are
  empty whereas they didn't used to be.

  This makes troubleshooting other functional tests CI failures more
  difficult than it needs to be.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567620] [NEW] Scripts requesting v3 get multiple choices with bad URLs

2016-04-07 Thread Ian Cordasco
Public bug reported:

Description
===

We have an old script which was requesting http://:8774/v3 and
received a 300 multiple choices, the response body looks like this
though:

{"choices": [{"status": "SUPPORTED", "media-types": [{"base":
"application/json", "type":
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
"l│inks": [{"href": "http://127.0.0.1:8774/v2/v3";, "rel": "self"}]},
{"status": "CURRENT", "media-types": [{"base": "application/json",
"type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
"v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/v3";, "rel":
"self"}]}]}

This actually will work with anything after /, e.g., http://:8774/asd

{"choices": [{"status": "SUPPORTED", "media-types": [{"base":
"application/json", "type":
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
"l│inks": [{"href": "http://127.0.0.1:8774/v2/asd";, "rel": "self"}]},
{"status": "CURRENT", "media-types": [{"base": "application/json",
"type": "application/vnd.o│penstack.compute+json;version=2.1"}], "id":
"v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/asd";, "rel":
"self"}]}]}

Steps to reproduce
==
A chronological list of steps which will bring off the
issue you noticed:
* I upgraded Kilo to Liberty
* then I made a request to http://:8774/v3
* then I saw a response body like above

Example code:

import requests

r = requests.get('http://:8774/v3')
print(r.status_code)
print(r.content)

Alternatively,

curl -i http://:8774/v3

Expected result
===

I would have expected to see a response like:

{"choices": [{"status": "SUPPORTED", "media-types": [{"base":
"application/json", "type":
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
"l│inks": [{"href": "http://127.0.0.1:8774/v2";, "rel": "self"}]},
{"status": "CURRENT", "media-types": [{"base": "application/json",
"type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
"v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1";, "rel":
"self"}]}]}

E.g., http://127.0.0.1:8774/v2.1 instead of
http://127.0.0.1:8774/v2.1/v3

Actual result
=

As described above

Environment
===
1. Version: 3f217a441af6595cb2a240ab72133aff133504b6 (stable/liberty)

2. Which hypervisor did you use?
Unrelated

2. Which storage type did you use?
Unrelated

3. Which networking type did you use?
Unrelated

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567620

Title:
  Scripts requesting v3 get multiple choices with bad URLs

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  We have an old script which was requesting http://:8774/v3
  and received a 300 multiple choices, the response body looks like this
  though:

  {"choices": [{"status": "SUPPORTED", "media-types": [{"base":
  "application/json", "type":
  "application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
  "l│inks": [{"href": "http://127.0.0.1:8774/v2/v3";, "rel": "self"}]},
  {"status": "CURRENT", "media-types": [{"base": "application/json",
  "type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
  "v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/v3";, "rel":
  "self"}]}]}

  This actually will work with anything after /, e.g., http://:8774/asd

  {"choices": [{"status": "SUPPORTED", "media-types": [{"base":
  "application/json", "type":
  "application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
  "l│inks": [{"href": "http://127.0.0.1:8774/v2/asd";, "rel": "self"}]},
  {"status": "CURRENT", "media-types": [{"base": "application/json",
  "type": "application/vnd.o│penstack.compute+json;version=2.1"}], "id":
  "v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/asd";, "rel":
  "self"}]}]}

  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue you noticed:
  * I upgraded Kilo to Liberty
  * then I made a request to http://:8774/v3
  * then I saw a response body like above

  Example code:

  import requests

  r = requests.get('http://:8774/v3')
  print(r.status_code)
  print(r.content)

  Alternatively,

  curl -i http://:8774/v3

  Expected result
  ===

  I would have expected to see a response like:

  {"choices": [{"status": "SUPPORTED", "media-types": [{"base":
  "application/json", "type":
  "application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
  "l│inks": [{"href": "http://127.0.0.1:8774/v2";, "rel": "self"}]},
  {"status": "CURRENT", "media-types": [{"base": "application/json",
  "type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
  "v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1";, "rel":
  "self"}]}]}

  E.g., http://127.0.0.1:8774/v2.1 instead of
  

[Yahoo-eng-team] [Bug 1567621] [NEW] Scripts requesting v3 get multiple choices with bad URLs

2016-04-07 Thread Ian Cordasco
Public bug reported:

Description
===

We have an old script which was requesting http://:8774/v3 and
received a 300 multiple choices, the response body looks like this
though:

{"choices": [{"status": "SUPPORTED", "media-types": [{"base":
"application/json", "type":
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
"l│inks": [{"href": "http://127.0.0.1:8774/v2/v3";, "rel": "self"}]},
{"status": "CURRENT", "media-types": [{"base": "application/json",
"type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
"v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/v3";, "rel":
"self"}]}]}

This actually will work with anything after /, e.g., http://:8774/asd

{"choices": [{"status": "SUPPORTED", "media-types": [{"base":
"application/json", "type":
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
"l│inks": [{"href": "http://127.0.0.1:8774/v2/asd";, "rel": "self"}]},
{"status": "CURRENT", "media-types": [{"base": "application/json",
"type": "application/vnd.o│penstack.compute+json;version=2.1"}], "id":
"v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/asd";, "rel":
"self"}]}]}

Steps to reproduce
==
A chronological list of steps which will bring off the
issue you noticed:
* I upgraded Kilo to Liberty
* then I made a request to http://:8774/v3
* then I saw a response body like above

Example code:

import requests

r = requests.get('http://:8774/v3')
print(r.status_code)
print(r.content)

Alternatively,

curl -i http://:8774/v3

Expected result
===

I would have expected to see a response like:

{"choices": [{"status": "SUPPORTED", "media-types": [{"base":
"application/json", "type":
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
"l│inks": [{"href": "http://127.0.0.1:8774/v2";, "rel": "self"}]},
{"status": "CURRENT", "media-types": [{"base": "application/json",
"type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
"v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1";, "rel":
"self"}]}]}

E.g., http://127.0.0.1:8774/v2.1 instead of
http://127.0.0.1:8774/v2.1/v3

Actual result
=

As described above

Environment
===
1. Version: 3f217a441af6595cb2a240ab72133aff133504b6 (stable/liberty)

2. Which hypervisor did you use?
Unrelated

2. Which storage type did you use?
Unrelated

3. Which networking type did you use?
Unrelated

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567621

Title:
  Scripts requesting v3 get multiple choices with bad URLs

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  We have an old script which was requesting http://:8774/v3
  and received a 300 multiple choices, the response body looks like this
  though:

  {"choices": [{"status": "SUPPORTED", "media-types": [{"base":
  "application/json", "type":
  "application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
  "l│inks": [{"href": "http://127.0.0.1:8774/v2/v3";, "rel": "self"}]},
  {"status": "CURRENT", "media-types": [{"base": "application/json",
  "type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
  "v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/v3";, "rel":
  "self"}]}]}

  This actually will work with anything after /, e.g., http://:8774/asd

  {"choices": [{"status": "SUPPORTED", "media-types": [{"base":
  "application/json", "type":
  "application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
  "l│inks": [{"href": "http://127.0.0.1:8774/v2/asd";, "rel": "self"}]},
  {"status": "CURRENT", "media-types": [{"base": "application/json",
  "type": "application/vnd.o│penstack.compute+json;version=2.1"}], "id":
  "v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1/asd";, "rel":
  "self"}]}]}

  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue you noticed:
  * I upgraded Kilo to Liberty
  * then I made a request to http://:8774/v3
  * then I saw a response body like above

  Example code:

  import requests

  r = requests.get('http://:8774/v3')
  print(r.status_code)
  print(r.content)

  Alternatively,

  curl -i http://:8774/v3

  Expected result
  ===

  I would have expected to see a response like:

  {"choices": [{"status": "SUPPORTED", "media-types": [{"base":
  "application/json", "type":
  "application/vnd.openstack.compute+json;version=2"}], "id": "v2.0",
  "l│inks": [{"href": "http://127.0.0.1:8774/v2";, "rel": "self"}]},
  {"status": "CURRENT", "media-types": [{"base": "application/json",
  "type": "application/vnd.op│enstack.compute+json;version=2.1"}], "id":
  "v2.1", "links": [{"href": "http://127.0.0.1:8774/v2.1";, "rel":
  "self"}]}]}

  E.g., http://127.0.0.1:8774/v2.1 instead of
  

[Yahoo-eng-team] [Bug 1567634] [NEW] tox -e py34 is not working in horizon

2016-04-07 Thread xiangxinyong
Public bug reported:

I pulled the latest horizon code and ran tox -e py34.

It produced a lot of errors like the one below.

It seems to be connected with novaclient.

novaclient released version 3.4 on 2016-04-08.

==
ERROR: openstack_dashboard.test.test_data.utils.load_test_data
--
Traceback (most recent call last):
  File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/nose/case.py",
 line 198, in runTest
self.test(*self.arg)
  File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 44, in load_test_data
return TestData(*loaders)
  File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 70, in __init__
data_func(self)
  File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/nova_data.py",
 line 570, in data
TEST.usages.add(usage_obj_2)
  File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 90, in add
if obj not in self._objects:
  File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/base.py",
 line 204, in __eq__
if hasattr(self, 'id') and hasattr(other, 'id'):
  File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/base.py",
 line 173, in __getattr__
self.get()
  File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/v2/usage.py",
 line 35, in get
start = oslo_utils.timeutils.parse_strtime(self.start, fmt=fmt)
  File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/oslo_utils/timeutils.py",
 line 97, in parse_strtime
return datetime.datetime.strptime(timestr, fmt)
  File "/usr/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
  File "/usr/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data '2012-01-01 00:00:00' does not match format 
'%Y-%m-%dT%H:%M:%S.%f'

Slowest 5 tests took 1.23 secs:
0.35  NeutronApiTests.test_port_create_with_policy_profile
0.28  VPNTests.test_add_ipsecpolicy_get
0.24  NetworkSubnetTests.test_subnet_create_post_invalid_pools_ip_network_with_subnetpool
0.19  InstanceAjaxTests.test_row_update
0.16  SecurityGroupsNeutronTests.test_detail_delete_rule_exception
--
Ran 1549 tests in 8.361s

FAILED (SKIP=11, errors=1538)
Destroying test database for alias 'default'...
ERROR: InvocationError: '/home/chenpengzi/timetest/horizon/.tox/py34/bin/python 
-u manage.py test --settings=openstack_dashboard.test.settings 
--exclude-dir=openstack_dashboard/test/integration_tests openstack_dashboard'
___
 summary 
___
ERROR:   py34: commands failed
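
The root cause is visible in the last frames of the traceback: the horizon
test data uses a space-separated timestamp, while novaclient 3.4's usage
code parses it with an ISO-style format string. A minimal reproduction in
plain Python (independent of horizon and novaclient):

from datetime import datetime

FMT = '%Y-%m-%dT%H:%M:%S.%f'   # the format reported in the ValueError above

datetime.strptime('2012-01-01T00:00:00.000000', FMT)   # parses fine
datetime.strptime('2012-01-01 00:00:00', FMT)          # raises the ValueError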

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567634

Title:
  tox -e py34 is not working in horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I pulled the latest horizon code and ran tox -e py34.

  It produced a lot of errors like the one below.

  It seems to be connected with novaclient.

  novaclient released version 3.4 on 2016-04-08.

  ==
  ERROR: openstack_dashboard.test.test_data.utils.load_test_data
  --
  Traceback (most recent call last):
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/nose/case.py",
 line 198, in runTest
  self.test(*self.arg)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 44, in load_test_data
  return TestData(*loaders)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 70, in __init__
  data_func(self)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/nova_data.py",
 line 570, in data
  TEST.usages.add(usage_obj_2)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 90, in add
  if obj not in self._objects:
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/base.py",
 line 204, in __eq__
  if hasattr(self, 'id') and hasattr(other, 'id'):
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/base.py",
 line 173, in __getattr__
  self.get()
   

[Yahoo-eng-team] [Bug 1567634] Re: tox -e py34 is not working in horizon

2016-04-07 Thread Brad Pokorny
This will be handled with this blueprint:
https://blueprints.launchpad.net/horizon/+spec/enhance-tox

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567634

Title:
  tox -e py34 is not working in horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I pulled the latest horizon code and ran tox -e py34.

  It produced a lot of errors like the one below.

  It seems to be connected with novaclient.

  novaclient released version 3.4 on 2016-04-08.

  ==
  ERROR: openstack_dashboard.test.test_data.utils.load_test_data
  --
  Traceback (most recent call last):
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/nose/case.py",
 line 198, in runTest
  self.test(*self.arg)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 44, in load_test_data
  return TestData(*loaders)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 70, in __init__
  data_func(self)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/nova_data.py",
 line 570, in data
  TEST.usages.add(usage_obj_2)
File 
"/home/chenpengzi/timetest/horizon/openstack_dashboard/test/test_data/utils.py",
 line 90, in add
  if obj not in self._objects:
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/base.py",
 line 204, in __eq__
  if hasattr(self, 'id') and hasattr(other, 'id'):
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/base.py",
 line 173, in __getattr__
  self.get()
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/novaclient/v2/usage.py",
 line 35, in get
  start = oslo_utils.timeutils.parse_strtime(self.start, fmt=fmt)
File 
"/home/chenpengzi/timetest/horizon/.tox/py34/lib/python3.4/site-packages/oslo_utils/timeutils.py",
 line 97, in parse_strtime
  return datetime.datetime.strptime(timestr, fmt)
File "/usr/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
  tt, fraction = _strptime(data_string, format)
File "/usr/lib/python3.4/_strptime.py", line 337, in _strptime
  (data_string, format))
  ValueError: time data '2012-01-01 00:00:00' does not match format 
'%Y-%m-%dT%H:%M:%S.%f'

  Slowest 5 tests took 1.23 secs:
  0.35  NeutronApiTests.test_port_create_with_policy_profile
  0.28  VPNTests.test_add_ipsecpolicy_get
  0.24  NetworkSubnetTests.test_subnet_create_post_invalid_pools_ip_network_with_subnetpool
  0.19  InstanceAjaxTests.test_row_update
  0.16  SecurityGroupsNeutronTests.test_detail_delete_rule_exception
  --
  Ran 1549 tests in 8.361s

  FAILED (SKIP=11, errors=1538)
  Destroying test database for alias 'default'...
  ERROR: InvocationError: 
'/home/chenpengzi/timetest/horizon/.tox/py34/bin/python -u manage.py test 
--settings=openstack_dashboard.test.settings 
--exclude-dir=openstack_dashboard/test/integration_tests openstack_dashboard'
  
___
 summary 
___
  ERROR:   py34: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567655] [NEW] 500 error when trying to list instances and neutron-server is down

2016-04-07 Thread Matt Riedemann
Public bug reported:

This is a newton devstack created today running neutron + ovs:

1. create an instance, wait for it to go active
2. list instances, it's fine
3. stop all neutron-server processes
4. list instances - it fails with a 500 trying to process security groups 
because it can't connect to neutron:

http://paste.openstack.org/show/493416/

2016-04-07 20:30:54.272 ERROR nova.api.openstack 
[req-5d867a48-2097-456f-a2ae-f93c982ac5d0 admin admin] Caught error: Unable to 
establish connection to 
http://9.5.124.163:9696/v2.0/ports.json?device_id=cda8a8ac-6eac-434d-bded-a0b34d285f41
2016-04-07 20:30:54.272 TRACE nova.api.openstack Traceback (most recent call 
last):
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/__init__.py", line 134, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack return 
req.get_response(self.application)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
2016-04-07 20:30:54.272 TRACE nova.api.openstack application, 
catch_exc_info=False)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
2016-04-07 20:30:54.272 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack resp = self.call_func(req, 
*args, **self.kwargs)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2016-04-07 20:30:54.272 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 467, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack response = 
req.get_response(self._app)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
2016-04-07 20:30:54.272 TRACE nova.api.openstack application, 
catch_exc_info=False)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
2016-04-07 20:30:54.272 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 141, in 
__call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack resp = self.call_func(req, 
*args, **self.kwargs)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2016-04-07 20:30:54.272 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 672, in __call__
2016-04-07 20:30:54.272 TRACE nova.api.openstack content_type, body, accept)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 756, in _process_stack
2016-04-07 20:30:54.272 TRACE nova.api.openstack request, action_args)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 619, in 
post_process_extensions
2016-04-07 20:30:54.272 TRACE nova.api.openstack **action_args)
2016-04-07 20:30:54.272 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/security_groups.py", line 491, in 
detail
2016-04-07 20:30:54.272 TRACE nova.api.ope
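
A hypothetical sketch (illustrative names, not nova's actual code) of the
kind of handling the API layer would need so that a neutron outage surfaces
as 503 Service Unavailable instead of an unhandled 500; it assumes the
ConnectionFailed exception from python-neutronclient is what bubbles up:

from neutronclient.common import exceptions as neutron_exc
from webob import exc

def list_ports_or_503(neutron_client, device_id):
    try:
        return neutron_client.list_ports(device_id=device_id)
    except neutron_exc.ConnectionFailed:
        # Translate the connectivity error into 503 rather than letting it
        # escape the extension as a 500 Internal Server Error.
        raise exc.HTTPServiceUnavailable(
            explanation='The networking service is unreachable')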

[Yahoo-eng-team] [Bug 1567668] [NEW] Functional job sometimes hits global 2 hour limit and fails

2016-04-07 Thread Assaf Muller
Public bug reported:

Here's an example:
http://logs.openstack.org/13/302913/1/check/gate-neutron-dsvm-functional/91dd537/console.html

Logstash query:
build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"Killed  timeout -s 9"

45 hits in the last 7 days.

Ihar and I checked the timing, and it started happening as we merged:
https://review.openstack.org/#/c/298056/

There are a few problems here:
1) It appears that a test is freezing up. We have a per-test timeout defined:
it is set by OS_TEST_TIMEOUT in tox.ini and enforced via a fixtures.Timeout
fixture set up in the oslotest base class (see the sketch after this list).
It looks like that timeout doesn't always work.
2) When the global 2 hour job timeout is hit, the job doesn't perform
post-test tasks such as copying over log files, which makes these problems a
lot harder to troubleshoot.
3) And of course, there is likely some sort of issue with
https://review.openstack.org/#/c/298056/.
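
For reference, a minimal sketch of the per-test timeout mechanism mentioned
in problem 1, assuming the usual oslotest convention of reading
OS_TEST_TIMEOUT from the environment:

import os

import fixtures
import testtools

class ExampleTestCase(testtools.TestCase):
    def setUp(self):
        super(ExampleTestCase, self).setUp()
        timeout = int(os.environ.get('OS_TEST_TIMEOUT', 0))
        if timeout > 0:
            # gentle=True raises an exception inside the test; it cannot
            # interrupt code stuck in an uninterruptible syscall, which is
            # one way a test could outlive its per-test timeout.
            self.useFixture(fixtures.Timeout(timeout, gentle=True))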

We can fix via a revert, which will increase the failure rate of
fullstack. Since I've been unable to reproduce this issue locally, I'd
like to hold off on a revert and try to get some more information by
tackling some combination of problems 1 and 2, and then adding more
logging to figure it out.

** Affects: neutron
 Importance: High
 Status: New


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567668

Title:
  Functional job sometimes hits global 2 hour limit and fails

Status in neutron:
  New

Bug description:
  Here's an example:
  
http://logs.openstack.org/13/302913/1/check/gate-neutron-dsvm-functional/91dd537/console.html

  Logstash query:
  build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"Killed  timeout -s 9"

  45 hits in the last 7 days.

  Ihar and I checked the timing, and it started happening as we merged:
  https://review.openstack.org/#/c/298056/

  There are a few problems here:
  1) It appears that a test is freezing up. We have a per-test timeout defined:
  it is set by OS_TEST_TIMEOUT in tox.ini and enforced via a fixtures.Timeout
  fixture set up in the oslotest base class. It looks like that timeout doesn't
  always work.
  2) When the global 2 hour job timeout is hit, the job doesn't perform
  post-test tasks such as copying over log files, which makes these problems a
  lot harder to troubleshoot.
  3) And of course, there is likely some sort of issue with
  https://review.openstack.org/#/c/298056/.

  We can fix via a revert, which will increase the failure rate of
  fullstack. Since I've been unable to reproduce this issue locally, I'd
  like to hold off on a revert and try to get some more information by
  tackling some combination of problems 1 and 2, and then adding more
  logging to figure it out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-04-07 Thread Richard Theis
This was fixed in openstackclient release 2.2.0.

** Changed in: python-openstackclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  Fix Released
Status in neutron:
  Fix Released
Status in octavia:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Confirmed

Bug description:
  class mock.Mock does not exist method "called_once_with", it just
  exists method "assert_called_once_with". Currently there are still
  some places where we use called_once_with method, we should correct
  it.

  NOTE: called_once_with() does nothing because it's a mock object.
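
  A minimal illustration of the pitfall (plain mock, unrelated to any
  particular project):

  from unittest import mock   # or: import mock

  m = mock.Mock()
  m.do_something(42)

  m.called_once_with(99)          # not an assert method: silently returns a child Mock
  m.assert_called_once_with(42)   # the real assertion; raises AssertionError on mismatch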

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567694] [NEW] nova's neutron client auth_uri uses admin

2016-04-07 Thread Andrew Woodward
Public bug reported:

Looking at default configs from various projects, including nova's own CI:

/etc/nova/nova.conf:
[neutron]
auth_url = http://localhost:35357/v3

However, when compared to other projects, they use the non-admin keystone
port (5000) and the v3 auth version for auth.

It is unclear whether this is necessary because the client needs access to
the keystone admin APIs, or whether we are simply holding over some old
config lore.

Can we document what the actual requirement for this URL is?
Is it only for auth?
Does it really need the keystone admin port?
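
A hypothetical sketch (placeholder credentials, not a recommended
configuration) showing that Keystone v3 password auth works against the
public port 5000 via keystoneauth1, which is what makes the admin port
35357 look unnecessary if the URL is only used for auth:

from keystoneauth1 import loading
from keystoneauth1 import session

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://localhost:5000/v3',
    username='nova', password='secret', project_name='service',
    user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)
print(sess.get_token())   # token issuance without touching the admin port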

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567694

Title:
  nova's neutron client auth_uri uses admin

Status in OpenStack Compute (nova):
  New

Bug description:
  Looking at default configs from various projects, including nova's own
  CI:

  /etc/nova/nova.conf:
  [neutron]
  auth_url = http://localhost:35357/v3

  However, when compared to other projects, they use the non-admin
  keystone port (5000) and the v3 auth version for auth.

  It is unclear whether this is necessary because the client needs access
  to the keystone admin APIs, or whether we are simply holding over some
  old config lore.

  Can we document what the actual requirement for this URL is?
  Is it only for auth?
  Does it really need the keystone admin port?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565698] Re: novncproxy missed the vnc section in cli options

2016-04-07 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
 Assignee: (unassigned) => Allen Gao (wanlong-gao)

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Tags removed: mitaka-backport-potential
** Tags added: console

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565698

Title:
  novncproxy missed the vnc section in cli options

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  $ nova-novncproxy --help
  usage: nova-novncproxy [-h] [--cert CERT] [--config-dir DIR]
 [--config-file PATH] [--daemon] [--debug] [--key KEY]
 [--log-config-append PATH]
 [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR]
 [--log-file PATH] [--nodaemon] [--nodebug] [--norecord]
 [--nosource_is_ipv6] [--nossl_only] [--nouse-syslog]
 [--noverbose] [--nowatch-log-file] [--record]
 [--source_is_ipv6] [--ssl_only]
 [--syslog-log-facility SYSLOG_LOG_FACILITY]
 [--use-syslog] [--verbose] [--version]
 [--watch-log-file] [--web WEB]
 [--remote_debug-host REMOTE_DEBUG_HOST]
 [--remote_debug-port REMOTE_DEBUG_PORT]

  optional arguments:
-h, --helpshow this help message and exit
--cert CERT   SSL certificate file
--config-dir DIR  Path to a config directory to pull *.conf files from.
  This file set is sorted, so as to provide a
  predictable parse order if individual options are
  over-ridden. The set is parsed after the file(s)
  specified via previous --config-file, arguments hence
  over-ridden options in the directory take precedence.
--config-file PATHPath to a config file to use. Multiple config files
  can be specified, with values in later files taking
  precedence. Defaults to None.
--daemon  Become a daemon (background process)
--debug, -d   If set to true, the logging level will be set to DEBUG
  instead of the default INFO level.
--key KEY SSL key file (if separate from cert)
--log-config-append PATH, --log_config PATH
  The name of a logging configuration file. This file is
  appended to any existing logging configuration files.
  For details about logging configuration files, see the
  Python logging module documentation. Note that when
  logging configuration files are used then all logging
  configuration is set in the configuration file and
  other logging configuration options are ignored (for
  example, logging_context_format_string).
--log-date-format DATE_FORMAT
  Defines the format string for %(asctime)s in log
  records. Default: None . This option is ignored if
  log_config_append is set.
--log-dir LOG_DIR, --logdir LOG_DIR
  (Optional) The base directory used for relative
  log_file paths. This option is ignored if
  log_config_append is set.
--log-file PATH, --logfile PATH
  (Optional) Name of log file to send logging output to.
  If no default is set, logging will go to stderr as
  defined by use_stderr. This option is ignored if
  log_config_append is set.
--nodaemonThe inverse of --daemon
--nodebug The inverse of --debug
--norecordThe inverse of --record
--nosource_is_ipv6The inverse of --source_is_ipv6
--nossl_only  The inverse of --ssl_only
--nouse-syslogThe inverse of --use-syslog
--noverbose   The inverse of --verbose
--nowatch-log-fileThe inverse of --watch-log-file
--record  Record sessions to FILE.[session_number]
--source_is_ipv6  Source is ipv6
--ssl_onlyDisallow non-encrypted connections
--syslog-log-facility SYSLOG_LOG_FACILITY
  Syslog facility to receive log lines. This option is
  ignored if log_config_append is set.
--use-syslog  Use syslog for logging. Existing s

[Yahoo-eng-team] [Bug 1557584] Re: Broken retry mechanism for 'nova image-list'

2016-04-07 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
 Assignee: (unassigned) => Diana Clarke (diana-clarke)

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/liberty
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557584

Title:
  Broken retry mechanism for 'nova image-list'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  You can configure a list of glance API servers in nova.conf like so:

  [glance]
  api_servers=http://192.168.122.30:9292/v1,http://192.168.122.31:9292/v1
  num_retries = 5

  When a call to one of the glance api servers fails, nova typically
  retries the call on one of the others. This is not the case for 'nova
  image-list'.

  The retry mechanism is here:

  
https://github.com/openstack/nova/blob/83261f3106a8bdde38d258a74da777add4956290/nova/image/glance.py#L249

  In the case of 'nova image-list', glanceclient returns a python
  generator rather than an actual list of images. Because a generator is
  returned, an exception will never be raised there, so the retry
  mechanism is never executed.
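
  A minimal illustration of why a try/except around a generator-returning
  call never sees errors raised during iteration (plain Python, not the
  actual nova/glanceclient code):

  def list_images():
      raise IOError('glance API server unreachable')
      yield   # the yield makes this a generator function

  def call_with_retries(func, retries=2):
      for _ in range(retries):
          try:
              return func()   # just creates the generator; nothing has run yet
          except IOError:
              continue        # never reached when func is a generator function
      raise RuntimeError('out of retries')

  images = call_with_retries(list_images)   # "succeeds" on the first attempt
  list(images)   # the IOError only surfaces here, outside the retry wrapper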

  https://github.com/openstack/python-glanceclient/blob/d59e341a4cd99a8488d5cf41052d9b218379ac87/glanceclient/v1/images.py#L268

  This bug was originally reported downstream:
  https://bugzilla.redhat.com/show_bug.cgi?id=1313254

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1557584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567181] Re: Request release for networking-fujitsu for stable/mitaka

2016-04-07 Thread Yushiro FURUKAWA
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567181

Title:
  Request release for networking-fujitsu for stable/mitaka

Status in networking-fujitsu:
  New
Status in neutron:
  New

Bug description:
  Please release stable/mitaka branch of networking-fujitsu.

  tag: 2.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-fujitsu/+bug/1567181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527925] Re: glanceclient.exc.HTTPInternalServerError when running nova image-list

2016-04-07 Thread suntao
** Changed in: glance
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527925

Title:
  glanceclient.exc.HTTPInternalServerError when running nova image-list

Status in Glance:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I am following this guide verbatim:
  http://docs.openstack.org/liberty/install-guide-rdo/nova-verify.html
  (on CentOS 7.1).

  When I run `nova image-list` I get the following output:

  ERROR (ClientException): Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  <class 'glanceclient.exc.HTTPInternalServerError'> (HTTP 500) (Request-ID:
  req-d4a1d4b7-3bfd-46b5-bda7-37631106b839)

  All other commands so far have worked fine. I can see services in nova
  and upload images in glance.

  Logs attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1527925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567295] Re: tox -e py27 is failing due to fixtures 2.0.0 release

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302997
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2af86b8f6f749bf7b42a2c04b48c9a2dc28a46c9
Submitter: Jenkins
Branch:master

commit 2af86b8f6f749bf7b42a2c04b48c9a2dc28a46c9
Author: Ihar Hrachyshka 
Date:   Thu Apr 7 19:15:01 2016 +0200

Switched from fixtures to mock to mock out starting RPC consumers

fixtures 2.0.0 broke us wildly, so instead of trying to make it work
with new fixtures, I better just switch the mock to... mock.

Change-Id: I58d7a750e263e4af54589ace07ac00bec34b553a
Closes-Bug: #1567295
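
A hypothetical sketch of the kind of change the commit describes: replacing
a monkey-patched fake (whose binding of 'self' changed under fixtures 2.0.0)
with mock.patch.object. The Connection class below is an illustrative
stand-in, not the actual neutron RPC code:

from unittest import mock   # or: import mock

class Connection(object):
    def consume_in_threads(self):
        return ['real consumer thread']

def test_rpc_consumers_not_started():
    with mock.patch.object(Connection, 'consume_in_threads',
                           return_value=[]) as fake_consume:
        conn = Connection()
        assert conn.consume_in_threads() == []   # no real consumers started
        fake_consume.assert_called_once_with()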


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567295

Title:
  tox -e py27 is failing due to fixtures 2.0.0 release

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Unit test cases were failing due to the error below:

  
networking_l2gw.tests.unit.services.l2gateway.service_drivers.test_rpc_l2gw.TestL2gwRpcDriver.test_validate_connection
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"networking_l2gw/tests/unit/services/l2gateway/service_drivers/test_rpc_l2gw.py",
 line 42, in setUp
  self.plugin = rpc_l2gw.L2gwRpcDriver(self.service_plugin)
File "networking_l2gw/services/l2gateway/service_drivers/rpc_l2gw.py", 
line 62, in __init__
  self.conn.consume_in_threads()
  TypeError: fake_consume_in_threads() takes exactly 1 argument (0 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567743] [NEW] Quotas tests are not covered enough at each resources.

2016-04-07 Thread Maho Koshiya
Public bug reported:

The quota tests for creating resources over the limit value are not
covered well enough in unit/functional tests.

Part of the quota test coverage for networks already exists, but such
tests do not exist for subnet/port/router/security group/security group
rule/floatingip.

These tests are necessary to avoid users being able to create resources
over quota by mistake.
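
A hypothetical, self-contained sketch of the behaviour such tests should
pin down (toy helpers, not neutron's actual test fixtures): creating one
resource past the configured limit must be rejected.

class QuotaError(Exception):
    pass

def create_with_quota(existing, limit):
    if len(existing) + 1 > limit:
        raise QuotaError('quota exceeded for resource')
    existing.append(object())

def test_create_port_over_quota():
    ports, limit = [], 1
    create_with_quota(ports, limit)          # first port: within quota
    try:
        create_with_quota(ports, limit)      # second port: must be rejected
    except QuotaError:
        pass
    else:
        raise AssertionError('over-quota create should have failed')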

** Affects: neutron
 Importance: Undecided
 Assignee: Maho Koshiya (koshiya-maho)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Maho Koshiya (koshiya-maho)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567743

Title:
  Quotas tests are not covered enough at each resources.

Status in neutron:
  In Progress

Bug description:
  The quota tests for creating resources over the limit value are not
  covered well enough in unit/functional tests.

  Part of the quota test coverage for networks already exists, but such
  tests do not exist for subnet/port/router/security group/security group
  rule/floatingip.

  These tests are necessary to avoid users being able to create resources
  over quota by mistake.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564745] Re: VPNaaS: connection terminate with error when multiple subnets used

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300707
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=19172b3be2482cac22bc37447332fc8b7eb19bcd
Submitter: Jenkins
Branch:master

commit 19172b3be2482cac22bc37447332fc8b7eb19bcd
Author: zhuyijing 
Date:   Fri Apr 1 12:00:43 2016 -0700

OpenSwan: handle disconnect properly for multiple subnets

When multiple subnets are configured in one connection through an endpoint
group, the connection name suffix shown in ipsec status is not always 0x1
but something like 08d11cfb-dc15-43e2-aee3-c2c71e6ae8e3/1x1 and 1x2 etc.
In this patch, we get the exact connection names from the status output
and then terminate them one by one in a loop.

Closes-Bug: #1564745
Change-Id: I2fa4eb7a7df1500b628abc31f89491ef61deb464
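
A hypothetical sketch of the approach the commit describes: derive the exact
connection names from the status output instead of assuming a single "0x1"
suffix, then terminate each one (function and argument names below are
illustrative, not the driver's actual code):

import re

def connection_names(status_output, conn_id):
    # Matches quoted names such as "<conn_id>/0x1", "<conn_id>/1x1", "<conn_id>/1x2".
    pattern = r'"(%s/\d+x\d+)"' % re.escape(conn_id)
    return sorted(set(re.findall(pattern, status_output)))

def terminate_all(whack, status_output, conn_id):
    for name in connection_names(status_output, conn_id):
        whack(['--terminate', '--name', name])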


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564745

Title:
  VPNaaS: connection terminate with error when multiple subnets used

Status in neutron:
  Fix Released

Bug description:
  I used the latest VPNaaS from the master branch with devstack on Ubuntu,
  with openswan as the backend.
  I configured the connection with 2 local subnets and 2 peer subnets
  through an endpoint group.

  Here is the endpoint group I configured:
  stack@VPN-dev-nick:~$ neutron vpn-endpoint-group-list 
  
  
+--+---++---+
  | id   | name  | type   | 
endpoints |
  
+--+---++---+
  | 322b98ac-4552-442b-b387-ecfecd621959 | vpn1-endgrp-local | subnet | 
[u'476eccb0-1682-4f13-a303-fee15d95cf7c', |
  |  |   || 
u'9b161125-2cfc-4716-ad68-66d00aa58af6']  |
  | 8e12066d-e28f-4121-be52-3b52bd990f6d | vpn1-endgrp-peer  | cidr   | 
[u'192.168.2.0/24', u'192.168.20.0/24']   |
  
+--+---++---+

  Then when I tried to delete the connection, in the vpn-agent log, I found the 
following error:
  2016-04-01 01:15:19.042 ERROR neutron.agent.linux.utils 
[req-c28d1b69-f997-40a4-8a7c-f275f3453bc4 admin 
f7f28249a58f40a2bd0db70bff773ab1] Exit code: 21; Stdin: ; Stdout: 021 no 
connection named "866fb1ec-d30c-4263-b99d-8921857c3e14/0x1"
  000 terminating all conns with 
alias='866fb1ec-d30c-4263-b99d-8921857c3e14/0x1' 
  021 no connection named "866fb1ec-d30c-4263-b99d-8921857c3e14/0x1"
  ; Stderr: 
  2016-04-01 01:15:19.042 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec 
[req-c28d1b69-f997-40a4-8a7c-f275f3453bc4 admin 
f7f28249a58f40a2bd0db70bff773ab1] Failed to disable vpn process on router 
cf6a9ec9-0875-4b99-8bdf-978b508ed835
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Traceback (most recent call 
last):
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 303, in disable
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec self.stop()
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 630, in stop
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec self.disconnect()
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 624, in disconnect
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec '--terminate'
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 396, in _execute
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec 
extra_ok_codes=extra_ok_codes)
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 878, in execute
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec 
log_fail_as_error=log_fail_as_error, **kwargs)
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2016-04-01 01:15:19.042 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec raise RuntimeError(msg)
  2016-04-

[Yahoo-eng-team] [Bug 1567472] Re: net_helpers.get_free_namespace_port can return used ports

2016-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302913
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d5ae8645ccfe12fb2fd99f412821ad1556e06191
Submitter: Jenkins
Branch:master

commit d5ae8645ccfe12fb2fd99f412821ad1556e06191
Author: Assaf Muller 
Date:   Thu Apr 7 10:21:46 2016 -0400

Fix regexp for ss output

Previously we ignored when destination address was replaced by
asterisk.

Co-Authored-By: amul...@redhat.com

Change-Id: I68bbd43c1b98e5f21d2cbf7dc33a8bdccbb70bd0
Closes-Bug: 1567472


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567472

Title:
  net_helpers.get_free_namespace_port can return used ports

Status in neutron:
  Fix Released

Bug description:
  Here's a simplification of 'get_free_namespace_port':

  output = ip_wrapper.netns.execute(['ss', param])
  used_ports = _get_source_ports_from_ss_output(output)  # Parses 'ss' output 
and gets all used ports, this is the problematic part
  return get_unused_port(used_ports)

  Here's a demonstration:
  output = ip_wrapper.netns.execute(['ss', param])
  print output
  State  Recv-Q Send-QLocal Address:Port  Peer Address:Port 
  LISTEN 0  10127.0.0.1:6640 *:*
 
  LISTEN 0  128   *:46675*:*
 
  LISTEN 0  128   *:22   *:*
 
  LISTEN 0  128   *:5432 *:*
 
  LISTEN 0  128   *:3260 *:*
 
  LISTEN 0  50*:3306 *:*
 
  ESTAB  0  36   10.0.0.202:22   
10.0.0.44:45258 
  ESTAB  0  0 127.0.0.1:32965127.0.0.1:4369 
 
  ESTAB  0  010.0.0.202:22   
10.0.0.44:36104 
  LISTEN 0  128  :::80  :::*
 
  LISTEN 0  128  :::4369:::*
 
  LISTEN 0  128  :::22  :::*
 
  LISTEN 0  128  :::5432:::*
 
  LISTEN 0  128  :::3260:::*
 
  LISTEN 0  128  :::5672:::*
 
  ESTAB  0  0  :::127.0.0.1:4369  :::127.0.0.1:32965

  used = net_helpers._get_source_ports_from_ss_output(output)
  print used
   {'22', '3260', '32965', '4369', '5432', '5672', '80'}

  You can see it returned '3260' but not '3306'.
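
  A hypothetical sketch of a more tolerant parse of the 'ss' output above,
  taking the port from the Local Address:Port column whether the peer
  address is an IP or '*' (an illustration only, not the regexp in the
  actual fix):

  def get_source_ports(ss_output):
      ports = set()
      for line in ss_output.splitlines()[1:]:     # skip the header row
          fields = line.split()
          if len(fields) >= 5:
              local = fields[3]                   # e.g. '*:3306', ':::80', '127.0.0.1:6640'
              ports.add(local.rsplit(':', 1)[-1])
      return ports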

  This bug can impact how fullstack picks which free ports to use for
  neutron-server and neutron-openvswitch-agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529836] Re: Fix deprecated library function (os.popen()).

2016-04-07 Thread Shu Muto
** Also affects: zaqar-ui
   Importance: Undecided
   Status: New

** Changed in: zaqar-ui
   Status: New => In Progress

** Changed in: zaqar-ui
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529836

Title:
  Fix deprecated library function (os.popen()).

Status in bilean:
  In Progress
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in ceilometer-powervm:
  Fix Released
Status in Cinder:
  Fix Released
Status in congress:
  In Progress
Status in devstack:
  In Progress
Status in Glance:
  In Progress
Status in glance_store:
  Fix Released
Status in group-based-policy-specs:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in horizon-cisco-ui:
  Confirmed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kwapi:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-powervm:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  Fix Released
Status in Zaqar-ui:
  In Progress

Bug description:
  Deprecated library function os.popen() is still in use in some places.
  It needs to be replaced using the subprocess module.
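
  A minimal before/after sketch of the replacement (the command is only an
  example):

  import subprocess

  # deprecated style:
  #     release = os.popen('uname -r').read().strip()
  # subprocess equivalent:
  release = subprocess.check_output(['uname', '-r']).decode('utf-8').strip()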

To manage notifications about this bug go to:
https://bugs.launchpad.net/bilean/+bug/1529836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp