[Yahoo-eng-team] [Bug 1506089] [NEW] Nova incorrectly calculates service version

2015-10-14 Thread Dan Smith
Public bug reported:

Nova will incorrectly calculate the service version from the database,
resulting in improper upgrade decisions like automatic compute rpc
version pinning.

For a dump that looks like this:

created_at           updated_at           deleted_at  id  host                                binary          topic      report_count  disabled  deleted  disabled_reason  last_seen_up         forced_down  version
2015-10-13 23:42:34  2015-10-13 23:50:39  NULL        1   devstack-trusty-hpcloud-b2-5398906  nova-conductor  conductor  49            0         0        NULL             2015-10-13 23:50:39  0            2
2015-10-13 23:42:34  2015-10-13 23:50:39  NULL        2   devstack-trusty-hpcloud-b2-5398906  nova-cert       cert       49            0         0        NULL             2015-10-13 23:50:39  0            2
2015-10-13 23:42:34  2015-10-13 23:50:39  NULL        3   devstack-trusty-hpcloud-b2-5398906  nova-scheduler  scheduler  49            0         0        NULL             2015-10-13 23:50:39  0            2
2015-10-13 23:42:34  2015-10-13 23:50:40  NULL        4   devstack-trusty-hpcloud-b2-5398906  nova-compute    compute    49            0         0        NULL             2015-10-13 23:50:40  0            2
2015-10-13 23:42:44  2015-10-13 23:50:39  NULL        5   devstack-trusty-hpcloud-b2-5398906  nova-network    network    48            0         0        NULL             2015-10-13 23:50:39  0            2

Although all versions are 2, the following is displayed in the logs of any
service that loads the compute rpcapi module:

2015-10-13 23:56:05.149 INFO nova.compute.rpcapi [req-
d3601f93-73a2-4427-91d0-bb5964002592 None None] Automatically selected
compute RPC version 4.0 from minimum service version 0

This is clearly wrong: the minimum service_version should be 2, not 0.
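
The expected calculation is simply a minimum over the version column of the
dump above; a minimal sketch (not Nova's actual query code):

```python
# Version values reported by the five services in the dump above.
versions = [2, 2, 2, 2, 2]

# The minimum service version should drive automatic RPC pinning; an
# empty result set is the only case that should yield 0.
minimum = min(versions) if versions else 0
print(minimum)  # 2 -- not the 0 reported in the log
```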

** Affects: nova
 Importance: Medium
 Assignee: Dan Smith (danms)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506089

Title:
  Nova incorrectly calculates service version

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Nova will incorrectly calculate the service version from the database,
  resulting in improper upgrade decisions like automatic compute rpc
  version pinning.

  For a dump that looks like this:

  created_at           updated_at           deleted_at  id  host                                binary          topic      report_count  disabled  deleted  disabled_reason  last_seen_up         forced_down  version
  2015-10-13 23:42:34  2015-10-13 23:50:39  NULL        1   devstack-trusty-hpcloud-b2-5398906  nova-conductor  conductor  49            0         0        NULL             2015-10-13 23:50:39  0            2
  2015-10-13 23:42:34  2015-10-13 23:50:39  NULL        2   devstack-trusty-hpcloud-b2-5398906  nova-cert       cert       49            0         0        NULL             2015-10-13 23:50:39  0            2
  2015-10-13 23:42:34  2015-10-13 23:50:39  NULL        3   devstack-trusty-hpcloud-b2-5398906  nova-scheduler  scheduler  49            0         0        NULL             2015-10-13 23:50:39  0            2
  2015-10-13 23:42:34  2015-10-13 23:50:40  NULL        4   devstack-trusty-hpcloud-b2-5398906  nova-compute    compute    49            0         0        NULL             2015-10-13 23:50:40  0            2
  2015-10-13 23:42:44  2015-10-13 23:50:39  NULL        5   devstack-trusty-hpcloud-b2-5398906  nova-network    network    48            0         0        NULL             2015-10-13 23:50:39  0            2

  Although all versions are 2, the following is displayed in the logs of
  any service that loads the compute rpcapi module:

  2015-10-13 23:56:05.149 INFO nova.compute.rpcapi [req-
  d3601f93-73a2-4427-91d0-bb5964002592 None None] Automatically selected
  compute RPC version 4.0 from minimum service version 0

  This is clearly wrong: the minimum service_version should be 2, not 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506092] [NEW] Count all network-agent bindings during scheduling

2015-10-14 Thread Eugene Nikanorov
Public bug reported:

Currently the code in the DHCP agent scheduler counts only the active agents
that host a network, which may allow more agents to host the network than the
configured limit.

This creates the possibility of a race condition when several DHCP agents
start up at the same time and try to fetch active networks: the network ends
up hosted by several agents even though it may already be hosted by other
agents. This wastes ports/fixed IPs from the tenant's network range and
increases load on the controllers.

It is better to let the rescheduling mechanism sort out active/dead agents
for each network.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506092

Title:
  Count all network-agent bindings during scheduling

Status in neutron:
  New

Bug description:
  Currently the code in the DHCP agent scheduler counts only the active
  agents that host a network, which may allow more agents to host the
  network than the configured limit.

  This creates the possibility of a race condition when several DHCP agents
  start up at the same time and try to fetch active networks: the network
  ends up hosted by several agents even though it may already be hosted by
  other agents. This wastes ports/fixed IPs from the tenant's network range
  and increases load on the controllers.

  It is better to let the rescheduling mechanism sort out active/dead
  agents for each network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506127] [NEW] enable vhost-user support with neutron ovs agent

2015-10-14 Thread sean mooney
Public bug reported:

In the Kilo cycle, vhost-user support was added to Nova and supported out of
tree via the networking-ovs-dpdk ML2 driver and L2 agent on StackForge.

In Liberty, agent modifications were upstreamed to enable the standard
Neutron Open vSwitch agent to manage the netdev datapath.

In Mitaka it is desirable to remove all dependence on the networking-ovs-dpdk
repo and enable the standard OVS ML2 driver to support vhost-user on enabled
vswitches.

To enable vhost-user support, the following changes are proposed to the
Neutron Open vSwitch agent and ML2 mechanism driver.

AGENT CHANGES:

To determine whether a vswitch supports vhost-user interfaces, two pieces of
information are required: the bridge datapath_type and the list of supported
interface types from the OVSDB. The datapath_type field is required to ensure
the node is configured to use the DPDK-enabled netdev datapath.

The supported interface types field in the OVSDB contains a list of all
supported interface types across all supported datapath_types. If the
ovs-vswitchd process has been compiled with support for DPDK interfaces but
not started with DPDK enabled, DPDK interfaces will be omitted from this
list.

The OVS Neutron agent will be extended to query the supported interface
types parameter in the OVSDB and append it to the configuration section of
the agent state report. The agent will also be extended to append the
configured datapath_type to the configuration section of the agent state
report. The OVS lib will be extended to retrieve the supported interface
types from the OVSDB.

ML2 DRIVER CHANGES:

The OVS ML2 driver will be extended to consult the agent configuration when
selecting the VIF type and VIF binding details to install.

If the datapath is netdev and the supported interface types contain
vhost-user, it will be enabled; in all other cases the driver will fall back
to the current behavior. This mechanism will allow easy extension of the OVS
Neutron agent to support other OVS interface types in the future if they are
enabled in Nova.
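
A minimal sketch of the proposed agent-side capability check (hypothetical
function and report layout; "dpdkvhostuser" is assumed here as the OVS
interface type name for vhost-user ports):

```python
def supports_vhost_user(datapath_type, iface_types):
    """Report vhost-user capability only when the node runs the
    DPDK-enabled userspace (netdev) datapath AND the vswitch lists a
    vhost-user interface type in the OVSDB."""
    return datapath_type == "netdev" and "dpdkvhostuser" in iface_types

# Values as they might appear in an agent state report.
report = {"datapath_type": "netdev",
          "iface_types": ["dpdk", "dpdkvhostuser", "internal", "system"]}

print(supports_vhost_user(report["datapath_type"], report["iface_types"]))
# True -> the ML2 driver would select the vhost-user VIF type;
# any other combination falls back to the current behavior.
```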

** Affects: neutron
 Importance: Undecided
 Assignee: sean mooney (sean-k-mooney)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506127

Title:
  enable vhost-user support with neutron ovs agent

Status in neutron:
  New

Bug description:
  In the Kilo cycle, vhost-user support was added to Nova and supported out
  of tree via the networking-ovs-dpdk ML2 driver and L2 agent on StackForge.

  In Liberty, agent modifications were upstreamed to enable the standard
  Neutron Open vSwitch agent to manage the netdev datapath.

  In Mitaka it is desirable to remove all dependence on the
  networking-ovs-dpdk repo and enable the standard OVS ML2 driver to support
  vhost-user on enabled vswitches.

  To enable vhost-user support, the following changes are proposed to the
  Neutron Open vSwitch agent and ML2 mechanism driver.

  AGENT CHANGES:

  To determine whether a vswitch supports vhost-user interfaces, two pieces
  of information are required: the bridge datapath_type and the list of
  supported interface types from the OVSDB. The datapath_type field is
  required to ensure the node is configured to use the DPDK-enabled netdev
  datapath.

  The supported interface types field in the OVSDB contains a list of all
  supported interface types across all supported datapath_types. If the
  ovs-vswitchd process has been compiled with support for DPDK interfaces
  but not started with DPDK enabled, DPDK interfaces will be omitted from
  this list.

  The OVS Neutron agent will be extended to query the supported interface
  types parameter in the OVSDB and append it to the configuration section
  of the agent state report. The agent will also be extended to append the
  configured datapath_type to the configuration section of the agent state
  report. The OVS lib will be extended to retrieve the supported interface
  types from the OVSDB.

  ML2 DRIVER CHANGES:

  The OVS ML2 driver will be extended to consult the agent configuration
  when selecting the VIF type and VIF binding details to install.

  If the datapath is netdev and the supported interface types contain
  vhost-user, it will be enabled; in all other cases the driver will fall
  back to the current behavior. This mechanism will allow easy extension of
  the OVS Neutron agent to support other OVS interface types in the future
  if they are enabled in Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505819] Re: HyperVDriver fails to initialize due to Linux specific import

2015-10-14 Thread Claudiu Belu
Already addressed by: https://review.openstack.org/#/c/234696

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505819

Title:
  HyperVDriver fails to initialize due to Linux specific import

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Commit https://review.openstack.org/#/c/209627/6 introduced a new
  import into nova.virt.images, which is also used by the Nova Hyper-V
  Driver. The imported module is called "resource", which is Linux
  specific:

  >>> import resource
  >>> resource.__file__
  '/usr/lib/python2.7/lib-dynload/resource.x86_64-linux-gnu.so'

  This also affects the Hyper-V CI, as the nova-compute service cannot
  start on Hyper-V, as this import cannot be found.

  LOG: http://64.119.130.115/neutron/225319/11/Hyper-
  V_logs/c2-r17-u13/process_error.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505932] Re: neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors fails to create a load balancer

2015-10-14 Thread Armando Migliaccio
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505932

Title:
  
neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors
  fails to create a load balancer

Status in neutron:
  In Progress

Bug description:
  http://logs.openstack.org/43/234343/1/check/gate-neutron-lbaasv2-dsvm-
  minimal/b767acd/logs/testr_results.html.gz

  ft1.1: setUpClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 103, in setUpClass
  super(BaseTestCase, cls).setUpClass()
File "neutron_lbaas/tests/tempest/lib/test.py", line 272, in setUpClass
  six.reraise(etype, value, trace)
File "neutron_lbaas/tests/tempest/lib/test.py", line 265, in setUpClass
  cls.resource_setup()
File 
"neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 
45, in resource_setup
  vip_subnet_id=cls.subnet.get('id'))
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 120, in 
_create_load_balancer
  raise Exception(_("Failed to create load balancer..."))
  Exception: Failed to create load balancer...

  There are no exceptions in the server log, so this is probably a test
  issue. The test hides the actual error from us, so all we have right
  now is the 'Failed to create load balancer...' message, which is too
  vague to understand what failed. The test should not hide details on
  failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504260] Re: locations not ‘deleted’ on delete-all image.locations

2015-10-14 Thread Venkatesh Sampath
I have been digging into the details for some time now.

It looks like, in the non-test environment, as @wangxiyuan mentioned above,
the deletion of all locations is handled by
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L276
and eventually by
https://github.com/openstack/glance/blob/master/glance/common/store_utils.py#L54
using ImageProxy and ImageLocationProxy.
Hence, for deleting locations, we probably never reach
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L799
which explains why the problem is not reproducible.

In the (functional) test environment, for some reason, the delete-all-locations
call goes through
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L799
and somehow skips the call via
https://github.com/openstack/glance/blob/master/glance/common/store_utils.py#L54

Even though I am convinced it is a bug, I unfortunately could not find a
scenario where the delete-all-locations call would go through
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L799
in a real deployment.

I am still curious to look into what happens behind the scenes while running
the tests, and will continue doing that. If I find anything substantial, I
will get back and reopen this bug.

For now, I am closing this bug.

Thanks for reviewing the patchset and getting back with your valuable
feedback.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1504260

Title:
  locations not ‘deleted’ on delete-all image.locations

Status in Glance:
  Invalid

Bug description:
  Problem:
  When we try to delete all locations for an 'active' image, the locations
  are not actually 'deleted', despite the image status being updated to
  'queued'.
  Note: this behavior can be observed only when we delete all locations.

  Scenario:
   - Have an active image with its associated locations.
   - Try updating the image to delete all locations with a replace-locations
  call, and ensure the response status code is 200 and the image status is
  'queued'.
  e.g., json: {'op': 'replace', 'path': '/locations', 'value': []};
  url: /v2/images/%(image_id); http method: PATCH
   - Make a call to get the image by ID.
  Note: the image details obtained will still contain the list of locations.

  Reason:
   - The update call to delete all locations is ignored due to an incorrect
  Python 'falsy' conditional check (at the glance.db.sqlalchemy.api layer)
  that allows/disallows updates of image locations.

  Fix:
   - Make the conditional check explicit so that the delete-all-locations
  call for the given image is allowed to mark the image locations as
  'deleted'.
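
The 'falsy' check described above can be illustrated with a minimal sketch
(hypothetical function names and image layout, not Glance's actual code):

```python
def update_locations_buggy(image, new_locations):
    # Bug: an empty list is falsy, so a delete-all request ([]) is
    # silently ignored and the old locations survive.
    if new_locations:
        image["locations"] = new_locations

def update_locations_fixed(image, new_locations):
    # Fix: test explicitly against None so [] still triggers the update.
    if new_locations is not None:
        image["locations"] = new_locations

image = {"locations": ["store-a", "store-b"]}
update_locations_buggy(image, [])
print(image["locations"])  # ['store-a', 'store-b'] -- nothing deleted

update_locations_fixed(image, [])
print(image["locations"])  # [] -- locations actually removed
```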

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1504260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog

2015-10-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/135489
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=6189b0a69505a3a97fc559113694b3b5a6816257
Submitter: Jenkins
Branch:master

commit 6189b0a69505a3a97fc559113694b3b5a6816257
Author: Telles Nobrega 
Date:   Tue Nov 18 23:22:17 2014 -0300

Replacing data_processing with data-processing

Changing data_processing with data-processing where needed to match the
new defined endpoint of Sahara.

Co-Authored-By: Sergey Reshetnyak 
Co-Authored-By: Yaroslav Lobankov 

Closes-Bug: #1356053

Change-Id: Iba45c15ed57d43a11e6fac74d75d6b2db46f6a2f


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1356053

Title:
  Doesn't properly get keystone endpoint when Keystone is configured to
  use templated catalog

Status in devstack:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Python client library for Sahara:
  Fix Released
Status in Sahara:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  When using the keystone static catalog file to register endpoints 
(http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog),
 an endpoint registered (correctly) as catalog.region.data_processing gets 
read as "data-processing" by keystone.
  Thus, when Sahara looks for an endpoint, it is unable to find one for 
data_processing.

  This causes a problem with the commandline interface and the
  dashboard.

  Keystone seems to be converting underscores to dashes here:
  
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47

  modifying this line to not perform the replacement seems to work fine
  for me, but may have unintended consequences.
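
The mismatch can be shown with a minimal sketch (a simplification of the
substitution described above; the endpoint URL is made up):

```python
# Name used when registering the endpoint in the template file.
registered = "data_processing"

# The templated backend normalizes underscores to dashes when it builds
# the catalog, so this is the service type clients actually see.
published = registered.replace("_", "-")
catalog = {published: "http://sahara.example.com:8386/v1.1"}

print(catalog.get(registered))  # None -> Sahara finds no endpoint
print(catalog.get(published))   # the endpoint, under the dashed name
```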

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189909] Re: dhcp-agent does not always provide IP address for instances with re-cycled IP addresses.

2015-10-14 Thread Ryan Moats
** Package changed: quantum (CentOS) => centos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1189909

Title:
  dhcp-agent does not always provide an IP address for instances with
  re-cycled IP addresses.

Status in neutron:
  Fix Released
Status in quantum package in Ubuntu:
  Confirmed
Status in CentOS:
  New

Bug description:
  Configuration: OpenStack Networking, OpenvSwitch Plugin (GRE tunnels), 
OpenStack Networking Security Groups
  Release: Grizzly

  Sometime when creating instances, the dnsmasq instance associated with
  the tenant l2 network does not have configuration for the requesting
  mac address:

  Jun 11 09:30:23 d7m88-cofgod dnsmasq-dhcp[10083]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available
  Jun 11 09:30:33 d7m88-cofgod dnsmasq-dhcp[10083]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available

  Restarting the quantum-dhcp-agent resolved the issue:

  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45
  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: DHCPOFFER(tap98031044-d8) 
10.5.0.2 fa:16:3e:da:41:45

  The IP address (10.5.0.2) was re-cycled from an instance that was
  destroyed just prior to creation of this one.

  ProblemType: Bug
  DistroRelease: Ubuntu 13.04
  Package: quantum-dhcp-agent 1:2013.1.1-0ubuntu1
  ProcVersionSignature: Ubuntu 3.8.0-23.34-generic 3.8.11
  Uname: Linux 3.8.0-23-generic x86_64
  ApportVersion: 2.9.2-0ubuntu8.1
  Architecture: amd64
  Date: Tue Jun 11 09:31:38 2013
  MarkForUpload: True
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: quantum
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.quantum.dhcp.agent.ini: [deleted]
  modified.conffile..etc.quantum.rootwrap.d.dhcp.filters: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1189909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1194350] Re: add number limit for external network

2015-10-14 Thread Armando Migliaccio
** No longer affects: neutron/havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1194350

Title:
  add number limit for external network

Status in neutron:
  Incomplete

Bug description:
  L3 agent will fail if we have multiple external networks in the system
  when we don't set the external network id in l3_agent.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1194350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506170] [NEW] test

2015-10-14 Thread Armando Migliaccio
Public bug reported:

test

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506170

Title:
  test

Status in neutron:
  New

Bug description:
  test

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506170/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506177] [NEW] vm_state 'soft-delete' should be 'soft_deleted'

2015-10-14 Thread Tony Dunbar
Public bug reported:

In

https://github.com/openstack/nova/blob/4cf6ef68199183697a0209751575f88fe5b2a733/nova/compute/vm_states.py#L40

the vm_state SOFT_DELETED is mapped to the value 'soft-delete'. All the
other vm_states are mapped to values that match their state in lower
case, so to be consistent SOFT_DELETED should map to 'soft_deleted'.

I searched through the Nova code and did not find any references to
soft-delete, so a change should not cause any breakage.

In Horizon's instance table here:

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/tables.py#L1001

the value expected is 'soft_deleted', so changing the value in nova to
be consistent should help.
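
The inconsistency can be shown side by side (state values as described in
this report; constant names follow nova.compute.vm_states):

```python
# Current mapping (per this report): every other state matches its
# constant name lower-cased, but SOFT_DELETED does not.
ACTIVE = "active"
STOPPED = "stopped"
SOFT_DELETED = "soft-delete"   # the odd one out

# Value Horizon's instance table expects for this state.
HORIZON_EXPECTED = "soft_deleted"

print(SOFT_DELETED == HORIZON_EXPECTED)  # False -> the mismatch
```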

** Affects: nova
 Importance: Undecided
 Assignee: Tony Dunbar (adunbar)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Tony Dunbar (adunbar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506177

Title:
  vm_state 'soft-delete' should be 'soft_deleted'

Status in OpenStack Compute (nova):
  New

Bug description:
  In

  
https://github.com/openstack/nova/blob/4cf6ef68199183697a0209751575f88fe5b2a733/nova/compute/vm_states.py#L40

  the vm_state SOFT_DELETED is mapped to the value 'soft-delete'. All
  the other vm_states are mapped to values that match their state in
  lower case, so to be consistent SOFT_DELETED should map to
  'soft_deleted'.

  I searched through the Nova code and did not find any references to
  soft-delete, so a change should not cause any breakage.

  In Horizon's instance table here:

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/tables.py#L1001

  the value expected is 'soft_deleted', so changing the value in nova to
  be consistent should help.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475834] Re: RFE - Provide HasTimestamp in db.models_v2 and use it in nets, subnets, ports, fips and securitygroups

2015-10-14 Thread Armando Migliaccio
I think this must be revised in light of bug 1496802.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475834

Title:
  RFE - Provide HasTimestamp in db.models_v2 and use it in nets,
  subnets, ports, fips and securitygroups

Status in neutron:
  Won't Fix

Bug description:
  
  [Existing problem]

  Consumers and operators need to know when their resources were created,
  including but not limited to "nets", "subnets", "ports", "floatingips"
  and "securitygroups".

  Adding timestamps to these resources one by one is very inefficient.

  [What is the enhancement?]

  Provide a HasTimestamp class in neutron.db.models_v2; any resource that
  needs a timestamp can simply inherit it, just like HasId or HasTenant.

  [Issues with existing resources]

  Since we cannot get the created time of existing resources, it will be
  returned as None to consumers.
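
A minimal sketch of the proposed mixin pattern (plain Python rather than the
SQLAlchemy model mixin Neutron would actually use; names follow the
HasId/HasTenant convention mentioned above):

```python
import datetime

class HasTimestamp(object):
    """Mixin giving a resource a created_at timestamp, inherited the
    same way Neutron models inherit HasId or HasTenant."""
    def __init__(self):
        self.created_at = datetime.datetime.utcnow()

class Port(HasTimestamp):
    """Example resource opting in to timestamps."""

port = Port()
print(isinstance(port.created_at, datetime.datetime))  # True
```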

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505218] Re: Image schema doesn't contain 'deactivated' status

2015-10-14 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505218

Title:
  Image schema doesn't contain 'deactivated' status

Status in Glance:
  Fix Released
Status in Glance kilo series:
  New

Bug description:
  Currently the glance image schema does not contain 'deactivated' in the
  list of statuses, which means the client cannot validate it.

  1. mfedosin@wdev:~$ glance image-list
  +--+-+
  | ID   | Name|
  +--+-+
  | 5cd380ce-a570-4270-b4d1-e328e6f49cb6 | cirros-0.3.4-x86_64-uec |
  | 9f430c9d-9649-4bc3-9ec9-1013e9c9da13 | cirros-0.3.4-x86_64-uec-kernel  |
  | e36a70f7-db13-4c3a-91a6-1308b74eebde | cirros-0.3.4-x86_64-uec-ramdisk |
  +--+-+

  2. mfedosin@wdev:~$ curl -H "X-Auth-Token:
  2c2e3bc5f0d541418a98deeabb27ac5e" -X POST
  
http://127.0.0.1:9292/v2/images/5cd380ce-a570-4270-b4d1-e328e6f49cb6/actions/deactivate

  3. mfedosin@wdev:~$ glance image-show
  5cd380ce-a570-4270-b4d1-e328e6f49cb6

  Expected result:
  There will be output with the image info

  Actual result:
  u'deactivated' is not one of [u'queued', u'saving', u'active', u'killed', 
u'deleted', u'pending_delete']

  Failed validating u'enum' in schema[u'properties'][u'status']:
  {u'description': u'Status of the image (READ-ONLY)',
   u'enum': [u'queued',
 u'saving',
 u'active',
 u'killed',
 u'deleted',
 u'pending_delete'],
   u'type': u'string'}

  On instance[u'status']:
  u'deactivated'

  Related to bug: https://bugs.launchpad.net/glance/+bug/1505134
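
The failure mode is a simple enum membership check; a minimal sketch
(statuses copied from the schema output above):

```python
# Statuses in the image schema today; 'deactivated' is absent.
valid_statuses = ["queued", "saving", "active", "killed",
                  "deleted", "pending_delete"]

status = "deactivated"
print(status in valid_statuses)  # False -> client-side validation fails

# With the fix, the enum would also contain the deactivated status.
fixed_statuses = valid_statuses + ["deactivated"]
print(status in fixed_statuses)  # True
```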

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505930] [NEW] Fix key manager service endpoints in devstack Nova ephemeral

2015-10-14 Thread Max
Public bug reported:

Using a DevStack setup to configure Nova ephemeral storage encryption with
the Barbican key manager fails when creating an instance.

Detailed description:

1. Version of Nova/Barbican we are using:

   - stable/Liberty

2. Relevant log files:

- n-api.log  Relevant section of the log file is attached with the
bug report.

3. Reproduce steps:
a) Install via devstack, with following nova post-configuration in 
local.conf (complete local.conf is attached)

   #...
   [[post-config|$NOVA_CONF]]
   [keymgr]
   api_class = nova.keymgr.barbican.BarbicanKeyManager

   [libvirt]
   images_type = lvm
   images_volume_group = stack-volumes-default

[ephemeral_storage_encryption]
cipher = aes-xts-plain64
enabled = True
   key_size = 256
   #...

 b) Launch an instance from Horizon or using the following command line:
 $ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec 
--nic net-id=YOUR_NET_ID --security-group default  demo-instance1

 c) Launch instance failed with following error:
 ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach...
  (HTTP 500) 
(Request-ID: req-3be2a530-9df3-4e34-...)

4) Possible issue:
 -After analyzing the log files, the issue seems to be in nova key manager 
version discovery.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova api log file"
   https://bugs.launchpad.net/bugs/1505930/+attachment/4494150/+files/n-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505930

Title:
  Fix key manager service endpoints in devstack Nova ephemeral

Status in OpenStack Compute (nova):
  New

Bug description:
  Creating an instance failed on a Devstack setup configured for Nova
  ephemeral storage encryption with the Barbican key manager.

  Detailed description:

  1. Version of Nova/Barbican we are using:

     - stable/Liberty

  2. Relevant log files:

  - n-api.log: the relevant section of the log file is attached to the
  bug report.

  3. Reproduce steps:
  a) Install via devstack, with the following nova post-configuration in
local.conf (complete local.conf is attached)

     #...
     [[post-config|$NOVA_CONF]]
     [keymgr]
     api_class = nova.keymgr.barbican.BarbicanKeyManager

     [libvirt]
     images_type = lvm
     images_volume_group = stack-volumes-default

     [ephemeral_storage_encryption]
     cipher = aes-xts-plain64
     enabled = True
     key_size = 256
     #...

   b) Launch an instance from Horizon or using the following command line:
   $ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec 
--nic net-id=YOUR_NET_ID --security-group default  demo-instance1

   c) Launching the instance failed with the following error:
   ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach...
    (HTTP 500) 
(Request-ID: req-3be2a530-9df3-4e34-...)

  4) Possible issue:
   - After analyzing the log files, the issue appears to be in the nova key
  manager version discovery.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505930/+subscriptions



[Yahoo-eng-team] [Bug 1505935] [NEW] Missing table refresh after associating a Floating IP address

2015-10-14 Thread Christian Berendt
Public bug reported:

After associating a floating IP address with an instance using the
instances panel, the panel/table is not refreshed. It is necessary to
reload the panel manually to see the assigned floating IP address in the
table.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505935

Title:
  Missing table refresh after associating a Floating IP address

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After associating a floating IP address with an instance using the
  instances panel, the panel/table is not refreshed. It is necessary to
  reload the panel manually to see the assigned floating IP address in
  the table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505935/+subscriptions



[Yahoo-eng-team] [Bug 1505934] [NEW] Add "Associate Floating IP address" to the "Launch instance" dialog

2015-10-14 Thread Christian Berendt
Public bug reported:

It would be nice to have the option to associate a floating IP address
with an instance when using the launch instance dialog. At the moment
this is not possible and has to be done in a second step.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505934

Title:
  Add "Associate Floating IP address" to the "Launch instance" dialog

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It would be nice to have the option to associate a floating IP address
  with an instance when using the launch instance dialog. At the moment
  this is not possible and has to be done in a second step.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505934/+subscriptions



[Yahoo-eng-team] [Bug 1505942] [NEW] Error when logging in to the dashboard

2015-10-14 Thread Sun Jing
Public bug reported:

After clicking the "Sign in" button on the dashboard login page, the browser
shows the error message: CSRF 验证失败 (CSRF verification failed)

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "CSRFerror.PNG"
   
https://bugs.launchpad.net/bugs/1505942/+attachment/4494205/+files/CSRFerror.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505942

Title:
  Error when logging in to the dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After clicking the "Sign in" button on the dashboard login page, the
  browser shows the error message: CSRF 验证失败 (CSRF verification failed)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505942/+subscriptions



[Yahoo-eng-team] [Bug 1505932] Re: neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors fails to create a load balancer

2015-10-14 Thread Ihar Hrachyshka
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505932

Title:
  
neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors
  fails to create a load balancer

Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  New

Bug description:
  http://logs.openstack.org/43/234343/1/check/gate-neutron-lbaasv2-dsvm-
  minimal/b767acd/logs/testr_results.html.gz

  ft1.1: setUpClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 103, in setUpClass
  super(BaseTestCase, cls).setUpClass()
File "neutron_lbaas/tests/tempest/lib/test.py", line 272, in setUpClass
  six.reraise(etype, value, trace)
File "neutron_lbaas/tests/tempest/lib/test.py", line 265, in setUpClass
  cls.resource_setup()
File 
"neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 
45, in resource_setup
  vip_subnet_id=cls.subnet.get('id'))
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 120, in 
_create_load_balancer
  raise Exception(_("Failed to create load balancer..."))
  Exception: Failed to create load balancer...

  No exceptions in the server log; this is probably a test issue. The test
  hides the actual error from us, so all we have right now is the 'Failed to
  create load balancer...' message, which is too vague to understand what
  failed. The test should not hide details on failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505932/+subscriptions



[Yahoo-eng-team] [Bug 1505975] [NEW] Missing British English l10n

2015-10-14 Thread Rob Cresswell
Public bug reported:

British English localisation was removed during the Liberty cycle.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505975

Title:
  Missing British English l10n

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  British English localisation was removed during the Liberty cycle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505975/+subscriptions



[Yahoo-eng-team] [Bug 1505967] [NEW] AttributeError: 'APIRouterV21' object has no attribute '_router'

2015-10-14 Thread Pavel Gluschak
Public bug reported:

$ rpm -qa | grep nova
openstack-nova-novncproxy-2015.1.0-3.el7.noarch
python-nova-2015.1.0-3.el7.noarch
openstack-nova-common-2015.1.0-3.el7.noarch
openstack-nova-api-2015.1.0-3.el7.noarch
openstack-nova-console-2015.1.0-3.el7.noarch
openstack-nova-scheduler-2015.1.0-3.el7.noarch
openstack-nova-conductor-2015.1.0-3.el7.noarch
python-novaclient-2.23.0-1.el7.noarch
openstack-nova-cert-2015.1.0-3.el7.noarch

I was trying to list servers using both the Nova v2 and v2.1 APIs.

It works fine when using the v2 endpoint:
$ curl -H 'content-type: application/json' -H "X-auth-token: 
4cd9bd90bdb34c58870bfebbb3510cd9" 
http://127.0.0.1:8774/v2/c7d0b84341d74f10a10aa3b999727e26/servers
{"servers": []}

But the same request fails on the v2.1 endpoint:
$ curl -H 'content-type: application/json' -H "X-auth-token: 
4cd9bd90bdb34c58870bfebbb3510cd9" 
http://127.0.0.1:8774/v2.1/c7d0b84341d74f10a10aa3b999727e26/servers
{"computeFault": {"message": "The server has either erred or is incapable of 
performing the requested operation.", "code": 500}}

2015-10-14 12:01:23.279 3087 ERROR nova.api.openstack 
[req-8d9e5fa2-721b-4da9-b36d-a5daf5ec29b9 b228318c4b434e20a93d5e2c40e0425d 
c7d0b84341d74f10a10aa3b999727e26 - - -] Caught error: 'APIRouterV21' object has 
no attribute '_router'
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack Traceback (most recent 
call last):
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 634, in __call__
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack return 
self._call_app(env, start_response)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 554, in _call_app
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack return self._app(env, 
_fake_start_response)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/wsgi.py", line 482, in __call__
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack return self._router
2015-10-14 12:01:23.279 3087 TRACE nova.api.openstack AttributeError: 
'APIRouterV21' object has no attribute '_router'
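The trace ends at `return self._router` in nova/wsgi.py, i.e. the attribute is read before it has ever been assigned. A generic way to guard against that kind of AttributeError is lazy initialization (an illustrative pattern only, not the actual nova fix; `Router` and `factory` are hypothetical names):

```python
class Router(object):
    """Illustrative lazy-initialization pattern: the routed application is
    built on first access instead of being assumed to already exist."""

    def __init__(self, factory):
        self._factory = factory        # callable that builds the real router
        self._router_instance = None   # not built yet

    @property
    def _router(self):
        # Build the router on first access; later accesses reuse it.
        if self._router_instance is None:
            self._router_instance = self._factory()
        return self._router_instance
```

With this pattern, reading `_router` can never raise AttributeError, regardless of which code path touches it first.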

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: v2.1

** Tags added: v2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505967

Title:
  AttributeError: 'APIRouterV21' object has no attribute '_router'

Status in OpenStack Compute (nova):
  New

Bug description:
  $ rpm -qa | grep nova
  openstack-nova-novncproxy-2015.1.0-3.el7.noarch
  python-nova-2015.1.0-3.el7.noarch
  openstack-nova-common-2015.1.0-3.el7.noarch
  openstack-nova-api-2015.1.0-3.el7.noarch
  openstack-nova-console-2015.1.0-3.el7.noarch
  openstack-nova-scheduler-2015.1.0-3.el7.noarch
  openstack-nova-conductor-2015.1.0-3.el7.noarch
  python-novaclient-2.23.0-1.el7.noarch
  openstack-nova-cert-2015.1.0-3.el7.noarch

  I was trying to list servers using both the Nova v2 and v2.1 APIs.

  It works fine when using the v2 endpoint:
  $ curl -H 'content-type: application/json' -H "X-auth-token: 
4cd9bd90bdb34c58870bfebbb3510cd9" 

[Yahoo-eng-team] [Bug 1415588] Re: Cannot list users and groups with Keystone v3

2015-10-14 Thread Pavel Gluschak
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415588

Title:
  Cannot list users and groups with Keystone v3

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Openstack 2014.2.1 on CentOS 7

  Horizon is configured to use Keystone v3 API w/ domains:
  OPENSTACK_API_VERSIONS = {
  "identity": 3
  }
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

  I logged in as admin and specified the "default" domain on the login
  page. When I select Identity->Users or Identity->Groups, I get a pop-up
  error message saying "Error: Unauthorized: Unable to retrieve user/group
  list."

  In keystone.log I see:
  2015-01-28 21:39:51.654 4207 WARNING keystone.common.controller [-] No domain 
information specified as part of list request
  2015-01-28 21:39:51.654 4207 WARNING keystone.common.wsgi [-] Authorization 
failed. The request you have made requires authentication. from 9.167.185.90
  2015-01-28 21:39:51.655 4207 INFO eventlet.wsgi.server [-] 9.167.185.90 - - 
[28/Jan/2015 21:39:51] "GET /v3/users HTTP/1.1" 401 313 0.008419
  2015-01-28 21:39:54.031 4243 WARNING keystone.common.controller [-] No domain 
information specified as part of list request
  2015-01-28 21:39:54.031 4243 WARNING keystone.common.wsgi [-] Authorization 
failed. The request you have made requires authentication. from 9.167.185.90
  2015-01-28 21:39:54.032 4243 INFO eventlet.wsgi.server [-] 9.167.185.90 - - 
[28/Jan/2015 21:39:54] "GET /v3/groups HTTP/1.1" 401 313 0.009917

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415588/+subscriptions



[Yahoo-eng-team] [Bug 1504053] Re: Fixtures 1.4.0 makes Py34 unit tests fail

2015-10-14 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504053

Title:
  Fixtures 1.4.0 makes Py34 unit tests fail

Status in neutron:
  Fix Released

Bug description:
  See the output of http://logs.openstack.org/97/231897/2/check/gate-
  neutron-python34/57063ea/console.html for complete tracebacks.

  This can be fixed by using fixtures 1.3.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504053/+subscriptions



[Yahoo-eng-team] [Bug 1505438] Re: test_restart_wsgi_on_sighup_multiple_workers failing with Timed out RuntimeError

2015-10-14 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505438

Title:
  test_restart_wsgi_on_sighup_multiple_workers failing with Timed out
  RuntimeError

Status in neutron:
  Fix Released

Bug description:
  message:"in _test_restart_service_on_sighup" AND build_name:"gate-
  neutron-dsvm-functional"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gX3Rlc3RfcmVzdGFydF9zZXJ2aWNlX29uX3NpZ2h1cFwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1uZXV0cm9uLWRzdm0tZnVuY3Rpb25hbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDQ0NjkzMTc0ODgzLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503890] Re: test_policy assumes oslo.policy internal implementation details

2015-10-14 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503890

Title:
  test_policy assumes oslo.policy internal implementation details

Status in neutron:
  Fix Released
Status in oslo.policy:
  Fix Released

Bug description:
  Neutron assumes that oslo.policy uses urlrequest.urlopen()
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/unit/test_policy.py#n108
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/unit/test_policy.py#n121

  Unfortunately this assumption is bad as oslo.policy is now using 
requests.post()
  https://review.openstack.org/#/c/226122/

  So these two tests will fail when we release the next oslo.policy
  version for Mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503890/+subscriptions



[Yahoo-eng-team] [Bug 1505645] Re: neutron/tests/functional/test_server.py does not work for oslo.service < 0.10.0

2015-10-14 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505645

Title:
  neutron/tests/functional/test_server.py does not work for oslo.service
  < 0.10.0

Status in neutron:
  Fix Released

Bug description:
  Since https://review.openstack.org/#/c/233893/ was merged, the test
  fails if oslo.service < 0.10.0 is installed, as follows:

  Traceback (most recent call last):
File "neutron/tests/functional/test_server.py", line 286, in test_start
  workers=len(workers))
File "neutron/tests/functional/test_server.py", line 151, in 
_test_restart_service_on_sighup
  'size': expected_size}))
File "neutron/agent/linux/utils.py", line 346, in wait_until_true
  eventlet.sleep(sleep)
File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 
34, in sleep
  hub.switch()
File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, 
in switch
  return self.greenlet.switch()
  RuntimeError: Timed out waiting for file 
/tmp/tmp517j7P/tmpQwQXvn/test_server.tmp to be created and its size become 
equal to 5.
  

  
neutron.tests.functional.test_server.TestRPCServer.test_restart_rpc_on_sighup_multiple_workers
  
--

  Captured pythonlogging:
  ~~~
  2015-10-13 13:28:55,848  WARNING [oslo_config.cfg] Option "verbose" from 
group "DEFAULT" is deprecated for removal.  Its value may be silently ignored 
in the future.
  

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/functional/test_server.py", line 248, in 
test_restart_rpc_on_sighup_multiple_workers
  workers=2)
File "neutron/tests/functional/test_server.py", line 151, in 
_test_restart_service_on_sighup
  'size': expected_size}))
File "neutron/agent/linux/utils.py", line 346, in wait_until_true
  eventlet.sleep(sleep)
File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 
34, in sleep
  hub.switch()
File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, 
in switch
  return self.greenlet.switch()
  RuntimeError: Timed out waiting for file 
/tmp/tmpoW1HXA/tmpBDI82O/test_server.tmp to be created and its size become 
equal to 5.
  

  
neutron.tests.functional.test_server.TestWsgiServer.test_restart_wsgi_on_sighup_multiple_workers
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/functional/test_server.py", line 211, in 
test_restart_wsgi_on_sighup_multiple_workers
  workers=2)
File "neutron/tests/functional/test_server.py", line 151, in 
_test_restart_service_on_sighup
  'size': expected_size}))
File "neutron/agent/linux/utils.py", line 346, in wait_until_true
  eventlet.sleep(sleep)
File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 
34, in sleep
  hub.switch()
File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, 
in switch
  return self.greenlet.switch()
  RuntimeError: Timed out waiting for file 
/tmp/tmpwKs2ON/tmp6VhW3q/test_server.tmp to be created and its size become 
equal to 5.

  Note that the minimal oslo.service version in master is 0.7.0. There is a
  patch to bump the version in openstack/requirements:
  https://review.openstack.org/#/c/234026/

  In any case, we still need to fix the test to work with any version of
  oslo.service, because the fix is needed for the Liberty branch, where we
  cannot bump the library version as we can in master.
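The `wait_until_true` helper that the tracebacks time out in simply polls a predicate until it holds or a deadline passes. A minimal standalone sketch (simplified; the real neutron helper in neutron/agent/linux/utils.py uses eventlet rather than blocking `time.sleep`):

```python
import time

def wait_until_true(predicate, timeout=60, sleep=1, exception=None):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() > deadline:
            # Raise a caller-supplied exception, or a generic timeout error.
            raise exception or RuntimeError(
                "Timed out waiting for condition")
        time.sleep(sleep)
```

The failures above are this timeout firing: the forked workers never write the expected file, so the predicate never becomes true.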

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505645/+subscriptions



[Yahoo-eng-team] [Bug 1505932] [NEW] neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors fails to create a load balancer

2015-10-14 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/43/234343/1/check/gate-neutron-lbaasv2-dsvm-
minimal/b767acd/logs/testr_results.html.gz

ft1.1: setUpClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
  File "neutron_lbaas/tests/tempest/v2/api/base.py", line 103, in setUpClass
super(BaseTestCase, cls).setUpClass()
  File "neutron_lbaas/tests/tempest/lib/test.py", line 272, in setUpClass
six.reraise(etype, value, trace)
  File "neutron_lbaas/tests/tempest/lib/test.py", line 265, in setUpClass
cls.resource_setup()
  File "neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", 
line 45, in resource_setup
vip_subnet_id=cls.subnet.get('id'))
  File "neutron_lbaas/tests/tempest/v2/api/base.py", line 120, in 
_create_load_balancer
raise Exception(_("Failed to create load balancer..."))
Exception: Failed to create load balancer...

No exceptions in the server log; this is probably a test issue. The test
hides the actual error from us, so all we have right now is the 'Failed to
create load balancer...' message, which is too vague to understand what
failed. The test should not hide details on failures.
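One way to stop hiding the failure (a hypothetical sketch, not the actual neutron_lbaas code; `create_load_balancer_or_fail` is an invented name) is to fold the underlying exception into the message the helper raises:

```python
# Hypothetical sketch: surface the underlying error instead of swallowing it.
def create_load_balancer_or_fail(client, **kwargs):
    try:
        return client.create_load_balancer(**kwargs)
    except Exception as exc:
        # Include the original error so test logs show what actually failed.
        raise Exception("Failed to create load balancer: %s" % exc)
```

With this, the setUpClass traceback would carry the real cause instead of a generic message.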

** Affects: neutron
 Importance: Critical
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: gate-failure lbaas

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: New => Confirmed

** Tags added: gate-failure lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505932

Title:
  
neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors
  fails to create a load balancer

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/43/234343/1/check/gate-neutron-lbaasv2-dsvm-
  minimal/b767acd/logs/testr_results.html.gz

  ft1.1: setUpClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 103, in setUpClass
  super(BaseTestCase, cls).setUpClass()
File "neutron_lbaas/tests/tempest/lib/test.py", line 272, in setUpClass
  six.reraise(etype, value, trace)
File "neutron_lbaas/tests/tempest/lib/test.py", line 265, in setUpClass
  cls.resource_setup()
File 
"neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 
45, in resource_setup
  vip_subnet_id=cls.subnet.get('id'))
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 120, in 
_create_load_balancer
  raise Exception(_("Failed to create load balancer..."))
  Exception: Failed to create load balancer...

  No exceptions in the server log; this is probably a test issue. The test
  hides the actual error from us, so all we have right now is the 'Failed to
  create load balancer...' message, which is too vague to understand what
  failed. The test should not hide details on failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505932/+subscriptions



[Yahoo-eng-team] [Bug 1505932] Re: neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors fails to create a load balancer

2015-10-14 Thread Ihar Hrachyshka
Logstash for nova-cpu failure on qemu-img:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ29tbWFuZDogZW52IExDX0FMTD1DIExBTkc9QyBxZW11LWltZyBpbmZvXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDQ4MTQ3ODIzMTJ9

I believe it suggests that something occurred around Oct 13 that
resulted in the spike.

** Also affects: octavia
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505932

Title:
  
neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors
  fails to create a load balancer

Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Confirmed
Status in octavia:
  New

Bug description:
  http://logs.openstack.org/43/234343/1/check/gate-neutron-lbaasv2-dsvm-
  minimal/b767acd/logs/testr_results.html.gz

  ft1.1: setUpClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 103, in setUpClass
  super(BaseTestCase, cls).setUpClass()
File "neutron_lbaas/tests/tempest/lib/test.py", line 272, in setUpClass
  six.reraise(etype, value, trace)
File "neutron_lbaas/tests/tempest/lib/test.py", line 265, in setUpClass
  cls.resource_setup()
File 
"neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 
45, in resource_setup
  vip_subnet_id=cls.subnet.get('id'))
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 120, in 
_create_load_balancer
  raise Exception(_("Failed to create load balancer..."))
  Exception: Failed to create load balancer...

  No exceptions in the server log; this is probably a test issue. The test
  hides the actual error from us, so all we have right now is the 'Failed to
  create load balancer...' message, which is too vague to understand what
  failed. The test should not hide details on failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505932/+subscriptions



[Yahoo-eng-team] [Bug 1505972] [NEW] endpoint policy's resource_relation is not built correctly

2015-10-14 Thread Dave Chen
Public bug reported:

Since endpoint_policy has been moved into keystone core, the resource
relation should be updated so that the correct URL is built accordingly.

see:
https://github.com/openstack/keystone/blob/master/keystone/endpoint_policy/routers.py

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1505972

Title:
  endpoint policy's resource_relation is not built correctly

Status in Keystone:
  In Progress

Bug description:
  Since endpoint_policy has been moved into keystone core, the resource
  relation should be updated so that the correct URL is built accordingly.

  see:
  
https://github.com/openstack/keystone/blob/master/keystone/endpoint_policy/routers.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505972/+subscriptions



[Yahoo-eng-team] [Bug 1503847] Re: test_migration fails with type error

2015-10-14 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503847

Title:
  test_migration fails with type error

Status in neutron:
  Fix Released

Bug description:
  I am seeing "gate-neutron-python34" test failures again in neutron.

  http://logs.openstack.org/82/228582/13/check/gate-neutron-
  python34/5b36c34/console.html

  http://logs.openstack.org/82/228582/13/check/gate-neutron-
  python34/5b36c34/console.html#_2015-10-07_17_36_06_987

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503847/+subscriptions



[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2015-10-14 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Committed
Status in Ironic:
  Fix Committed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Committed

Bug description:
  As of https://review.openstack.org/#/c/217347/, oslo.db no longer has
  testresources or testscenarios in its requirements, so the next release of
  oslo.db will break several projects. Projects that use fixtures from
  oslo.db should add these packages to their own requirements if they need
  them.
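
  One possible remediation (version pins illustrative, not taken from the
  actual global-requirements of the time) is to list the two packages
  explicitly in each consuming project's test-requirements.txt:

```
# test-requirements.txt (version pins illustrative)
testresources>=0.2.4   # needed by oslo.db sqlalchemy test fixtures
testscenarios>=0.4     # needed by oslo.db sqlalchemy test fixtures
```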

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in 
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505976] [NEW] containers loading image location hardcoded

2015-10-14 Thread Matthias Runge
Public bug reported:

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/containers/tables.py#L36

The loading GIF location is hardcoded and does not respect the WEBROOT
setting.

** Affects: horizon
 Importance: Low
 Assignee: Matthias Runge (mrunge)
 Status: In Progress


** Tags: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505976

Title:
  containers loading image location hardcoded

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/containers/tables.py#L36

  The loading GIF location is hardcoded and does not respect the WEBROOT
  setting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506092] Re: Count all network-agent bindings during scheduling

2015-10-14 Thread Eugene Nikanorov
*** This bug is a duplicate of bug 1388698 ***
https://bugs.launchpad.net/bugs/1388698

** This bug has been marked a duplicate of bug 1388698
   dhcp_agents_per_network does not work appropriately.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506092

Title:
  Count all network-agent bindings during scheduling

Status in neutron:
  In Progress

Bug description:
  Currently the code in the DHCP agent scheduler counts only the active
agents that host a network. This may allow more agents to host the network
than are configured.

  This creates the possibility of a race condition when several DHCP agents
start up at the same time and try to get active networks: the network gets
hosted by several agents even though it might already be hosted by other
agents. This wastes ports/fixed IPs from the tenant's network range and
increases load on the controllers.

  It's better to let the rescheduling mechanism sort out active/dead
  agents for each network.
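
  The over-scheduling described above can be sketched as follows (a
  minimal illustration with hypothetical data, not the actual scheduler
  code):

```python
# Configured cap on DHCP agent bindings per network.
dhcp_agents_per_network = 2

# Hypothetical bindings: one agent is temporarily dead but still bound.
bindings = [
    {"agent": "dhcp-agent-1", "alive": True},
    {"agent": "dhcp-agent-2", "alive": False},
]

# Buggy count: only active agents are considered, so the scheduler
# believes there is still room and binds yet another agent.
active_count = sum(1 for b in bindings if b["alive"])
schedules_more = active_count < dhcp_agents_per_network

# Counting all bindings instead respects the configured cap and leaves
# the dead agent to the rescheduling mechanism.
total_count = len(bindings)
respects_cap = total_count >= dhcp_agents_per_network
```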

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0

2015-10-14 Thread Brant Knudson
Marking invalid for keystone since it wasn't actually a bug in keystone.

This was corrected by a release of keystonemiddleware and python-
keystoneclient. Unfortunately, that release caused a new bug, since it
included a cap on WebOb that wasn't supposed to be in a release, so there
are now newer releases. That's bug 1505996.

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1505326

Title:
  Unit tests failing with requests 2.8.0

Status in Keystone:
  Invalid
Status in openstack-ansible:
  In Progress

Bug description:
  
  When the tests are run, a bunch of them fail:

  pkg_resources.ContextualVersionConflict: (requests 2.8.0
  (/home/jenkins/workspace/gate-keystone-
  python27/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy']))

  global-requirements has requests!=2.8.0, but something must be
  pulling in that version of requests!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506211] [NEW] clicking on already selected angular panel shows a spinner indefinitely

2015-10-14 Thread Kristine
Public bug reported:

Work has begun on this bug here:

https://review.openstack.org/#/c/230187/

This bug occurs in all new and future angular panels.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506211

Title:
  clicking on already selected angular panel shows a spinner
  indefinitely

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Work has begun on this bug here:

  https://review.openstack.org/#/c/230187/

  This bug occurs in all new and future angular panels.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506213] [NEW] nova.cmd.baseproxy handles errors incorrectly

2015-10-14 Thread Alexander Aleksiyants
Public bug reported:

Nova from master.

The module doesn't print the error message. If an error occurs in
nova.cmd.baseproxy, the method exit_with_error is executed, which looks
as follows:

def exit_with_error(msg, errno=-1):
print(msg) and sys.exit(errno)

So in python 2.7 this method terminates the application without printing
anything (unable to flush on time) and in python 3.4 it does strange
things because print() returns None.

I noticed this bug when I was trying to run nova-novncproxy without
novnc installed. nova-novncproxy was terminating without printing
anything. Then I debugged it and found out that it tries to emit an
error message but fails.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506213

Title:
  nova.cmd.baseproxy handles errors incorrectly

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova from master.

  The module doesn't print the error message. If an error occurs in
  nova.cmd.baseproxy, the method exit_with_error is executed, which
  looks as follows:

  def exit_with_error(msg, errno=-1):
  print(msg) and sys.exit(errno)

  So in python 2.7 this method terminates the application without
  printing anything (unable to flush on time) and in python 3.4 it does
  strange things because print() returns None.

  I noticed this bug when I was trying to run nova-novncproxy without
  novnc installed. nova-novncproxy was terminating without printing
  anything. Then I debugged it and found out that it tries to emit an
  error message but fails.
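
  A minimal sketch of one possible fix is to perform the print and the
  exit as separate statements instead of chaining them with "and" (this
  is an illustration, not the merged Nova patch):

```python
import sys

def exit_with_error(msg, errno=-1):
    # Chaining "print(msg) and sys.exit(errno)" is broken: on Python 3,
    # print() returns None, so the "and" short-circuits and sys.exit()
    # never runs.  Doing the two steps separately avoids that.
    print(msg, file=sys.stderr)
    sys.exit(errno)
```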

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-10-14 Thread Travis Tripp
** Changed in: searchlight
   Status: Fix Committed => Fix Released

** Changed in: searchlight
Milestone: liberty-rc1 => 0.1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in Keystone:
  Fix Released
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Released
Status in murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Trove:
  Fix Released

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1466851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478690] Re: Request ID has a double req- at the start

2015-10-14 Thread Travis Tripp
** Changed in: searchlight
   Status: Fix Committed => Fix Released

** Changed in: searchlight
Milestone: liberty-rc1 => 0.1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1478690

Title:
  Request ID has a double req- at the start

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released

Bug description:
  ➜  vagrant git:(master) http http://192.168.121.242:9393/v1/search 
X-Auth-Token:$token query:='{"match_all" : {}}'
  HTTP/1.1 200 OK
  Content-Length: 138
  Content-Type: application/json; charset=UTF-8
  Date: Mon, 27 Jul 2015 20:21:31 GMT
  X-Openstack-Request-Id: req-req-0314bf5b-9c04-4bed-bf86-d2e76d297a34
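
  The doubled prefix suggests that a request ID which already carries
  "req-" was prefixed a second time when the response header was built.
  A minimal sketch of the failure mode (function names hypothetical):

```python
import uuid

def generate_request_id():
    # oslo-style request IDs already include the "req-" prefix.
    return 'req-' + str(uuid.uuid4())

rid = generate_request_id()

# Buggy: the header builder prepends the prefix again.
buggy_header = 'req-' + rid

# Fixed: use the already-prefixed ID as-is.
fixed_header = rid
```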

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1478690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384228] Re: Exception should be raised if nova has failed to add fixed ip

2015-10-14 Thread Matt Riedemann
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384228

Title:
  Exception should be raised if nova has failed to add fixed ip

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I have the following code

  try:
  instance.add_fixed_ip(network.id)
  except Exception as e:
  ...

  The issue is that I can see the exception in the Nova compute logs, but it
is not raised on the client side.
  Since add_fixed_ip() returns nothing, it is difficult to tell whether the
fixed address was assigned.

  Exception:
  """
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 393, in decorated_function
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py",
 line 88, in wrapped
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py",
 line 71, in wrapped
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 274, in decorated_function
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher pass
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 260, in decorated_function
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 303, in decorated_function
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 290, in decorated_function
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 3621, in add_fixed_ip_to_instance
  2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1505677] Re: oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-conductor log

2015-10-14 Thread Markus Zoeller (markus_z)
For Nova, this is a "Liberty" release only bug which is fixed with
https://review.openstack.org/234166. This got merged in RC3 of
"Liberty".

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505677

Title:
  oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-
  conductor log

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-ansible:
  Confirmed
Status in oslo.versionedobjects:
  New

Bug description:
  In nova-conductor we're seeing the following error for stable/liberty:

  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, 
in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, 
objname, objmethod, object_versions, args, kwargs)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 477, 
in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher if 
isinstance(result, nova_object.NovaObject) else result)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
535, in obj_to_primitive
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
version_manifest)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
507, in obj_make_compatible_from_manifest
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher return 
self.obj_make_compatible(primitive, target_version)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/instance.py", line 1325, 
in obj_make_compatible
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
target_version)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/base.py", line 262, in 
obj_make_compatible
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
rel_versions = self.obj_relationships['objects']
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher KeyError: 
'objects'
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher

  More details here:
  
http://logs.openstack.org/56/233756/8/check/gate-openstack-ansible-dsvm-commit/879f745/logs/aio1_nova_conductor_container-5ec67682/nova-conductor.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506003] [NEW] openstack cannot add vlan for VF with ixgbe 4.1.2 automatically

2015-10-14 Thread shuaipem
Public bug reported:

The network VLAN is 630. When using OpenStack (Kilo) to create a VM with the
ixgbe driver 4.0.1-k-rh7.1, I can see the VF port and the VM can access the
public network:
vf 0 MAC fa:16:3e:a8:82:dd, vlan 630, spoof checking on,
link-state auto

But after changing the driver to 4.1.2, the same operation cannot find the
VLAN, and the VM can't access the public network:
vf 12 MAC fa:16:3e:f5:2c:b8, spoof checking on, link-state
auto

If I use "ip link set eth0 vf 12 vlan 630", it works.

We need to use driver 4.1.2, so how can we make OpenStack add the VLAN
automatically?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506003

Title:
  openstack cannot add vlan for VF with ixgbe 4.1.2  automatically

Status in neutron:
  New

Bug description:
  The network VLAN is 630. When using OpenStack (Kilo) to create a VM with
the ixgbe driver 4.0.1-k-rh7.1, I can see the VF port and the VM can access
the public network:
  vf 0 MAC fa:16:3e:a8:82:dd, vlan 630, spoof checking on,
link-state auto

  But after changing the driver to 4.1.2, the same operation cannot find the
VLAN, and the VM can't access the public network:
  vf 12 MAC fa:16:3e:f5:2c:b8, spoof checking on,
link-state auto

  If I use "ip link set eth0 vf 12 vlan 630", it works.

  We need to use driver 4.1.2, so how can we make OpenStack add the VLAN
  automatically?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506012] [NEW] Gate tests randomly failing qemu-img info calls

2015-10-14 Thread Daniel Berrange
Public bug reported:

Since yesterday we see a spike in qemu-img info calls failing:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ29tbWFuZDogZW52IExDX0FMTD1DIExBTkc9QyBxZW11LWltZyBpbmZvXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDQ4MTQ3ODIzMTJ9

eg

[req-20a22603-59e7-425d-ba06-2a9a5505b5
6b tempest-ServersAdminTestJSON-917741884 
tempest-ServersAdminTestJSON-747301663] [instance: 
bebbcf1a-104f-4d98-a02b-ccd20601890b] Terminating instance
2015-10-13 17:52:40.436 ERROR nova.compute.manager 
[req-ef37b368-b686-42fd-bbfe-b6c3a69d063f tempest-ServersTestJSON-1340739681 
tempest-ServersTestJSON-1992588300] [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] Instance failed to spawn
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] Traceback (most recent call last):
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2172, in _build_resources
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] yield resources
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2019, in 
_build_and_run_instance
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] block_device_info=block_device_info)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2437, in spawn
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] admin_pass=admin_password)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2861, in _create_image
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] instance, size, fallback_from_host)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 6282, in 
_try_fetch_image_cache
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] size=size)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/imagebackend.py", line 249, in cache
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] *args, **kwargs)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/imagebackend.py", line 567, in 
create_image
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] copy_qcow2_image(base, self.path, 
size)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
254, in inner
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] return f(*args, **kwargs)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/imagebackend.py", line 529, in 
copy_qcow2_image
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] libvirt_utils.create_cow_image(base, 
target)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/utils.py", line 87, in create_cow_image
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] base_details = 
images.qemu_img_info(backing_file)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/virt/images.py", line 68, in qemu_img_info
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] preexec_fn=_qemu_resources)
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d]   File 
"/opt/stack/new/nova/nova/utils.py", line 390, in execute
2015-10-13 17:52:40.436 23002 ERROR nova.compute.manager [instance: 
94a3b99d-b8b2-4e2a-a26a-fd7ff6e4f67d] return processutils.execute(*cmd, 
**kwargs)

[Yahoo-eng-team] [Bug 1505932] Re: neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors fails to create a load balancer

2015-10-14 Thread Ihar Hrachyshka
Nova bug is now: https://bugs.launchpad.net/nova/+bug/1506012

** Changed in: nova
   Status: In Progress => Invalid

** No longer affects: octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505932

Title:
  
neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors
  fails to create a load balancer

Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/43/234343/1/check/gate-neutron-lbaasv2-dsvm-
  minimal/b767acd/logs/testr_results.html.gz

  ft1.1: setUpClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 103, in setUpClass
  super(BaseTestCase, cls).setUpClass()
File "neutron_lbaas/tests/tempest/lib/test.py", line 272, in setUpClass
  six.reraise(etype, value, trace)
File "neutron_lbaas/tests/tempest/lib/test.py", line 265, in setUpClass
  cls.resource_setup()
File 
"neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 
45, in resource_setup
  vip_subnet_id=cls.subnet.get('id'))
File "neutron_lbaas/tests/tempest/v2/api/base.py", line 120, in 
_create_load_balancer
  raise Exception(_("Failed to create load balancer..."))
  Exception: Failed to create load balancer...

  No exceptions in server log, probably test issue. The test hides the
  actual error from us, so all we have right now is that 'Failed to
  create load balancer...' message that is too vague to understand what
  failed. The test should not hide details on failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505972] Re: endpoint policy's resource_relation is not built correctly

2015-10-14 Thread Dave Chen
** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1505972

Title:
  endpoint policy's resource_relation is not built correctly

Status in Keystone:
  Invalid

Bug description:
  Since endpoint_policy has been moved into keystone core, the resource
  relation should be updated to build the correct URL accordingly.

  see:
  
https://github.com/openstack/keystone/blob/master/keystone/endpoint_policy/routers.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506234] [NEW] Ironic virt driver in Nova calls destroy unnecessarily if spawn fails

2015-10-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

To give some context, calling destroy [5] was added as a bug fix [1]. It
was required back then because Nova compute was not calling destroy on
catching the exception [2]. But now, Nova compute catches all exceptions
that happen during spawn and calls destroy (_shutdown_instance) [3].

Since Nova compute is already taking care of destroying the instance
before rescheduling, we shouldn't have to call destroy separately in the
driver. I confirmed in logs that destroy gets called twice if there is
any failure during _wait_for_active() [4] or a timeout happens [5].


[1] https://review.openstack.org/#/c/99519/
[2] 
https://github.com/openstack/nova/blob/2014.1.5/nova/compute/manager.py#L2116-L2118
[3] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
[4] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L431-L462
[5] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L823-L836

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic
-- 
Ironic virt driver in Nova calls destroy unnecessarily if spawn fails
https://bugs.launchpad.net/bugs/1506234
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1506242] [NEW] If instance spawn fails and shutdown_instance also fails, a new exception is raised, masking original spawn failure

2015-10-14 Thread Shraddha Pandhe
Public bug reported:

When building and running an instance, nova-compute calls spawn on the
virt driver; spawn can fail for several reasons.

For Ironic, for example, the spawn call can fail if a deploy callback
timeout happens.

If this call fails, nova-compute catches the exception, saves it for re-
raising and calls shutdown_instance in a try-except block [1]. The
problem is, if this shutdown_instance call also fails, a new exception
'BuildAbortException' is raised. This masks the original spawn failure.

This can cause problems for Ironic: if deployment failed due to a
timeout, there is a good chance that shutdown_instance will also fail
for the same reason, since it involves zapping etc., so the original
deployment failure will not be propagated back as the instance fault.


[1] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
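A minimal stdlib-only sketch of the behavior being asked for (the function
names here are illustrative stand-ins, not Nova's actual code): re-raise the
original spawn failure even when cleanup also raises, instead of letting a
new exception mask it.

```python
# Hedged sketch: 'build_instance', 'failing_spawn' and 'failing_shutdown'
# are hypothetical names, not Nova functions.
def build_instance(spawn, shutdown):
    try:
        spawn()
    except Exception as spawn_exc:
        try:
            shutdown()          # best-effort cleanup
        except Exception:
            pass                # cleanup failure must not mask spawn failure
        raise spawn_exc

def failing_spawn():
    raise TimeoutError("deploy callback timeout")

def failing_shutdown():
    raise RuntimeError("zapping failed")

try:
    build_instance(failing_spawn, failing_shutdown)
except Exception as exc:
    print(type(exc).__name__)   # the original TimeoutError propagates
```

The key point is that the cleanup call is wrapped in its own try/except, so
the exception that reaches the caller (and becomes the instance fault) is
the spawn failure, not a secondary cleanup error.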

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506242

Title:
  If instance spawn fails and shutdown_instance also fails, a new
  exception is raised, masking original spawn failure

Status in OpenStack Compute (nova):
  New

Bug description:
  When building and running an instance, nova-compute calls spawn on the
  virt driver; spawn can fail for several reasons.

  For Ironic, for example, the spawn call can fail if a deploy callback
  timeout happens.

  If this call fails, nova-compute catches the exception, saves it for
  re-raising and calls shutdown_instance in a try-except block [1]. The
  problem is, if this shutdown_instance call also fails, a new exception
  'BuildAbortException' is raised. This masks the original spawn
  failure.

  This can cause problems for Ironic: if deployment failed due to a
  timeout, there is a good chance that shutdown_instance will also fail
  for the same reason, since it involves zapping etc., so the original
  deployment failure will not be propagated back as the instance fault.

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506242/+subscriptions



[Yahoo-eng-team] [Bug 1506244] [NEW] support SSH key value over fingerprint for Azure

2015-10-14 Thread Ben Howard
Public bug reported:

Azure is changing the ovf-env.xml file. Instead of passing a fingerprint
for the key and obtaining the key separately, the SSH public key itself
is passed via a new "<Value>" parameter:

<SSH>
  <PublicKeys>
    <PublicKey>
      <Fingerprint>EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62</Fingerprint>
      <Path>$HOME/UserName/.ssh/authorized_keys</Path>
      <Value>ssh-rsa NOTAREALKEY== foo@bar.local</Value>
    </PublicKey>
  </PublicKeys>
</SSH>

** Affects: cloud-init
 Importance: Undecided
 Assignee: Ben Howard (utlemming)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Ben Howard (utlemming)

** Branch linked: lp:~utlemming/cloud-init/lp1506244-ssh-key-value
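A short sketch of how a datasource might consume the new field (the element
names SSH/PublicKeys/PublicKey/Fingerprint/Path/Value are assumptions based
on this report and the linked branch name, not a confirmed schema): prefer
the literal key value when present, falling back to the fingerprint.

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only; element names are assumed from the report.
ovf = """<SSH><PublicKeys><PublicKey>
  <Fingerprint>EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62</Fingerprint>
  <Path>$HOME/UserName/.ssh/authorized_keys</Path>
  <Value>ssh-rsa NOTAREALKEY== foo@bar.local</Value>
</PublicKey></PublicKeys></SSH>"""

def extract_keys(xml_text):
    root = ET.fromstring(xml_text)
    keys = []
    for pk in root.iter('PublicKey'):
        # Prefer the full key value; fall back to the fingerprint.
        keys.append(pk.findtext('Value') or pk.findtext('Fingerprint'))
    return keys

print(extract_keys(ovf))  # ['ssh-rsa NOTAREALKEY== foo@bar.local']
```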

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1506244

Title:
  support SSH key value over fingerprint for Azure

Status in cloud-init:
  New

Bug description:
  Azure is changing the ovf-env.xml file. Instead of passing a
  fingerprint to the key and obtaining it separately, the SSH public key
  itself is passed via a new "" parameters:

  

  

EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62
$HOME/UserName/.ssh/authorized_keys
ssh-rsa NOTAREALKEY== foo@bar.local
  


To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1506244/+subscriptions



[Yahoo-eng-team] [Bug 1506036] [NEW] Horizon only supports a 'volumev2' endpoint

2015-10-14 Thread Rob Cresswell
Public bug reported:

Horizon currently searches for a 'volumev2' endpoint for Cinder,
following this patch: https://review.openstack.org/#/c/151081

We may want to use 'volume' too. However, this may cause confusion (is
'volume' v1, or v2?) and needs discussion.
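The discussed fallback could be sketched like this (a hypothetical helper
over a plain dict, not Horizon's actual catalog code): prefer 'volumev2'
and fall back to the legacy 'volume' service type.

```python
# Hypothetical lookup; Horizon's real code walks keystone service
# catalog objects rather than a plain dict.
def get_cinder_endpoint(catalog):
    for service_type in ('volumev2', 'volume'):
        if service_type in catalog:
            return catalog[service_type]
    raise KeyError('no volume endpoint in the service catalog')

print(get_cinder_endpoint({'volume': 'http://cinder:8776/v1'}))
```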

** Affects: horizon
 Importance: High
 Status: New

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506036

Title:
  Horizon only supports a 'volumev2' endpoint

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon currently searches for a 'volumev2' endpoint for Cinder,
  following this patch: https://review.openstack.org/#/c/151081

  We may want to use 'volume' too. However, this may cause confusion (is
  'volume' v1, or v2?) and needs discussion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506036/+subscriptions



[Yahoo-eng-team] [Bug 1506021] [NEW] AsyncProcess.stop() can lead to deadlock

2015-10-14 Thread John Schwarz
Public bug reported:

The bug occurs when calling stop() on an AsyncProcess instance which is
running a process that generates substantial amounts of output to
stdout/stderr and has a signal handler for some signal (SIGTERM, for
example) that causes the program to exit gracefully.

Linux Pipes 101: when calling write() to some one-way pipe, if the pipe
is full of data [1], write() will block until the other end read()s from
the pipe.

AsyncProcess is using eventlet.green.subprocess to create an eventlet-
safe subprocess, using stdout=subprocess.PIPE and
stderr=subprocess.PIPE. In other words, stdout and stderr are redirected
to a one-way linux pipe to the executing AsyncProcess. When stopping the
subprocess, the current code [2] first kills the readers used to empty
stdout/stderr and only then sends the signal.

It is clear that if SIGTERM is sent to the subprocess, and if the
subprocess is generating a lot of output to stdout/stderr AFTER the
readers were killed, a deadlock is achieved: the parent process is
blocking on wait() and the subprocess is blocking on write() (waiting
for someone to read and empty the pipe).

This can be avoided by sending SIGKILL to the AsyncProcesses (this is
the code's default), but other signals such as SIGTERM, that can be
handled by the userspace code to cause the process to exit gracefully,
might trigger this deadlock. For example, I ran into this while trying
to modify existing fullstack tests to SIGTERM processes instead of
SIGKILL them, and the ovs agent got deadlocked a lot.

[1]: http://linux.die.net/man/7/pipe (Section called "Pipe capacity")
[2]: 
https://github.com/openstack/neutron/blob/stable/liberty/neutron/agent/linux/async_process.py#L163
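The failure mode can be illustrated with a plain subprocess (an
illustration only, not neutron code): the child writes far more than the
pipe capacity (about 64 KiB on Linux), so if the parent stopped reading
and only called wait(), both sides would block forever. Draining the pipe
until EOF before reaping, as communicate() does, avoids the deadlock.

```python
import subprocess
import sys

# Child writes 1 MiB to stdout -- far more than the pipe capacity.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('x' * (1 << 20))"],
    stdout=subprocess.PIPE)

# communicate() keeps reading until EOF and only then reaps the child;
# calling child.wait() here without reading would deadlock once the
# pipe filled up and the child blocked on write().
data, _ = child.communicate()
print(len(data))  # 1048576
```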

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506021

Title:
  AsyncProcess.stop() can lead to deadlock

Status in neutron:
  New

Bug description:
  The bug occurs when calling stop() on an AsyncProcess instance which
  is running a process that generates substantial amounts of output to
  stdout/stderr and has a signal handler for some signal (SIGTERM, for
  example) that causes the program to exit gracefully.

  Linux Pipes 101: when calling write() to some one-way pipe, if the
  pipe is full of data [1], write() will block until the other end
  read()s from the pipe.

  AsyncProcess is using eventlet.green.subprocess to create an eventlet-
  safe subprocess, using stdout=subprocess.PIPE and
  stderr=subprocess.PIPE. In other words, stdout and stderr are
  redirected to a one-way linux pipe to the executing AsyncProcess. When
  stopping the subprocess, the current code [2] first kills the readers
  used to empty stdout/stderr and only then sends the signal.

  It is clear that if SIGTERM is sent to the subprocess, and if the
  subprocess is generating a lot of output to stdout/stderr AFTER the
  readers were killed, a deadlock is achieved: the parent process is
  blocking on wait() and the subprocess is blocking on write() (waiting
  for someone to read and empty the pipe).

  This can be avoided by sending SIGKILL to the AsyncProcesses (this is
  the code's default), but other signals such as SIGTERM, that can be
  handled by the userspace code to cause the process to exit gracefully,
  might trigger this deadlock. For example, I ran into this while trying
  to modify existing fullstack tests to SIGTERM processes instead of
  SIGKILL them, and the ovs agent got deadlocked a lot.

  [1]: http://linux.die.net/man/7/pipe (Section called "Pipe capacity")
  [2]: 
https://github.com/openstack/neutron/blob/stable/liberty/neutron/agent/linux/async_process.py#L163

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506021/+subscriptions



[Yahoo-eng-team] [Bug 1426305] Re: metadata server not reachable with 169.254.0.0/16 route

2015-10-14 Thread Pranav Salunke
This is an RFC violation. It should be addressed by the operating
system images rather than worked around by Neutron; there should be no
such pre-existing routes in the image(s).

** Changed in: neutron
 Assignee: Pranav Salunke (dguitarbite) => (unassigned)

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426305

Title:
   metadata server not reachable with 169.254.0.0/16 route

Status in neutron:
  Invalid

Bug description:
  With openvswitch (tested both vlan and gre)
  metadata server is not reachable 
  when a 169.254.0.0/16 local route is present in a cloud VM
  (as is the default with some OS versions)

  Steps To Reproduce:
  in a cloud VM do
  ip r add 169.254.0.0/16 dev eth0
  curl 169.254.169.254

  Without this route, metadata traffic is routed through the default gw
  and thus working fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426305/+subscriptions



[Yahoo-eng-team] [Bug 1506062] [NEW] Create/Update Domain config with LDAP requires validation for User Bind Distinguished Name, User Tree Distinguished Name, Group Tree Distinguished Name

2015-10-14 Thread Prashant
Public bug reported:

Validation is required for the fields user_tree_dn (User Tree
Distinguished Name), group_tree_dn (Group Tree Distinguished Name), and
user (User Bind Distinguished Name) for both the create and update
domain config APIs. Currently the following issues occur:

1. If the user ("user bind name") contains invalid characters, the connection 
to the LDAP server fails for any operation.
2. If the user_tree_dn contains invalid characters, any operation on users 
against the LDAP server fails, e.g. listing all users.
3. If the group_tree_dn contains invalid characters, any operation on groups 
against the LDAP server fails, e.g. listing all groups.


We believe that there should be a check on these 3 attribute values for invalid 
characters for the following APIs:

1. Create Domain config 
({{url}}/v3/domains/02ce011944aa4021b576c01e3c423d9f/config, PUT)
2. Update Domain config 
({{url}}/v3/domains/02ce011944aa4021b576c01e3c423d9f/config, PATCH)


The current API returns success even when these attribute values contain 
invalid characters from an LDAP perspective.
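A naive illustration of the kind of check being asked for (not
RFC 4514-complete, and not Keystone code): reject obviously malformed DNs
at config-validation time instead of failing later at bind time.

```python
# Hypothetical helper: every comma-separated RDN must contain an '='
# with a non-empty attribute name and value.
def looks_like_dn(value):
    if not value:
        return False
    for rdn in value.split(','):
        attr, sep, val = rdn.strip().partition('=')
        if not sep or not attr.strip() or not val.strip():
            return False
    return True

print(looks_like_dn('cn=admin,dc=example,dc=org'))  # True
print(looks_like_dn('cn=admin,dc='))                # False
```

A real implementation would likely lean on an LDAP library's DN parser
rather than hand-rolled string checks.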

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- Create IDP with LDAP requires validation for UDN,User Bind Distinguished 
Name, User Tree Distinguished Name,Group Tree Distinguished Name 
+ Create/Update Domain config with LDAP requires validation for User Bind 
Distinguished Name, User Tree Distinguished Name,Group Tree Distinguished Name

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1506062

Title:
  Create/Update Domain config with LDAP requires validation for User
  Bind Distinguished Name, User Tree Distinguished Name,Group Tree
  Distinguished Name

Status in Keystone:
  New

Bug description:
  Validation is required for the fields user_tree_dn (User Tree
  Distinguished Name), group_tree_dn (Group Tree Distinguished Name),
  and user (User Bind Distinguished Name) for both the create and update
  domain config APIs. Currently the following issues occur:

  1. If the user ("user bind name") contains invalid characters, the 
connection to the LDAP server fails for any operation.
  2. If the user_tree_dn contains invalid characters, any operation on 
users against the LDAP server fails, e.g. listing all users.
  3. If the group_tree_dn contains invalid characters, any operation on 
groups against the LDAP server fails, e.g. listing all groups.

  
  We believe that there should be a check on these 3 attribute values for 
invalid characters for the following APIs:

  1. Create Domain config 
({{url}}/v3/domains/02ce011944aa4021b576c01e3c423d9f/config, PUT)
  2. Update Domain config 
({{url}}/v3/domains/02ce011944aa4021b576c01e3c423d9f/config, PATCH)

  
  The current API returns success even when these attribute values contain 
invalid characters from an LDAP perspective.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1506062/+subscriptions



[Yahoo-eng-team] [Bug 1505996] [NEW] requirements conflicts causes keystone fail on keystone-all command

2015-10-14 Thread amir gohar
Public bug reported:

keystone_1 | 2015-10-14 10:26:23.024 13 CRITICAL keystone [-] 
ContextualVersionConflict: (WebOb 1.5.0 
(/usr/local/lib/python2.7/site-packages), 
Requirement.parse('WebOb<1.5.0,>=1.2.3'), set(['keystonemiddleware']))
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone Traceback (most recent 
call last):
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/bin/keystone-all", line 10, in 
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone sys.exit(main())
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/keystone/cmd/all.py", line 39, in main
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone 
eventlet_server.run(possible_topdir)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/keystone/server/eventlet.py", line 155, 
in run
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone 
startup_application_fn=create_servers)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/keystone/server/common.py", line 51, in 
setup_backends
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone res = 
startup_application_fn()
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/keystone/server/eventlet.py", line 146, 
in create_servers
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone admin_worker_count))
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/keystone/server/eventlet.py", line 64, 
in create_server
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone app = 
keystone_service.loadapp('config:%s' % conf, name)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/keystone/service.py", line 46, in 
loadapp
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone 
controllers.latest_app = deploy.loadapp(conf, name=name)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone return loadobj(APP, 
uri, name=name, **kw)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone return 
context.create()
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in 
create
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone return 
self.object_type.invoke(self)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone **context.local_conf)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in 
fix_call
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone val = 
callable(*args, **kw)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/urlmap.py", line 31, in 
urlmap_factory
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone app = 
loader.get_app(app_name, global_conf=global_conf)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone name=name, 
global_conf=global_conf).create()
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 362, in 
app_context
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone APP, name=name, 
global_conf=global_conf)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 450, in 
get_context
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone 
global_additions=global_additions)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 559, in 
_pipeline_app_context
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone APP, pipeline[-1], 
global_conf)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 454, in 
get_context
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone section)
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone   File 
"/usr/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 476, in 
_context_from_use
keystone_1 | 2015-10-14 10:26:23.024 13 ERROR keystone object_type, 
name=use, 

[Yahoo-eng-team] [Bug 1506289] [NEW] The first letter of error message should be capitalized

2015-10-14 Thread Hong Hui Xiao
Public bug reported:

While checking another problem, I found this [1]; the first letter
should be capitalized for readability and consistency.


def _respawn_action(self, service_id):
LOG.error(_LE("respawning %(service)s for uuid %(uuid)s"),
  {'service': service_id.service,
   'uuid': service_id.uuid})
self._monitored_processes[service_id].enable()


[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L250

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506289

Title:
  The first letter of error message should be capitalized

Status in neutron:
  New

Bug description:
  While checking another problem, I found this [1]; the first letter
  should be capitalized for readability and consistency.

  
  def _respawn_action(self, service_id):
  LOG.error(_LE("respawning %(service)s for uuid %(uuid)s"),
{'service': service_id.service,
 'uuid': service_id.uuid})
  self._monitored_processes[service_id].enable()

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L250

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506289/+subscriptions



[Yahoo-eng-team] [Bug 1476079] Re: Show and update method of pool's network and subnet add to LBaaS api v2

2015-10-14 Thread yaowei
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476079

Title:
  Show and update method of pool's network and subnet add to LBaaS api
  v2

Status in neutron:
  Invalid

Bug description:
  Pool is defined in loadbalancerv2.py, it has attributes network_id and
  subnet_id,but we can't see all these attributes by `neutron lbaas-
  pool-show` and modify these attributes by `neutron lbaas-pool-update`.
  I suggest to add these methods to lbaas api v2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476079/+subscriptions



[Yahoo-eng-team] [Bug 1498957] Re: Add a 'dscp' field to security group rules to screen ingress traffic by dscp tag as well as IP address

2015-10-14 Thread Nate Johnston
We have decided that an alternate methodology is preferable, and will
file a new bug for a fresh RFE.  Changing status on this to 'invalid'
and abandoning the associated changeset.

** Changed in: neutron
   Status: Triaged => Invalid

** Changed in: neutron
 Assignee: James Reeves (james-reeves5546) => Nate Johnston (nate-johnston)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498957

Title:
  Add a 'dscp' field to security group rules to screen ingress traffic
  by dscp tag as well as IP address

Status in neutron:
  Invalid

Bug description:
  This change will add to the current security group model an additional
  option to allow for traffic to be restricted to a given DSCP tag in
  addition to the current IP address based restriction.  Incoming
  traffic would need to match both the IP address/CIDR block as well as
  the DSCP tag - if one is set.

  Changes:
  * DB model changes to add a DSCP tag column to security groups.
  * API changes to allow for DSCP tag configuration options to be supplied to 
security group API calls.
  * Neutron agent changes to implement configuring IPTables with the additional 
DSCP tag configuration.

  Note: This is complimentary functionality to the "QoS DSCP marking
  rule support" change which, when implemented, will provide Neutron
  with an interface to configure QoS policies to mark outgoing traffic
  with DSCP tags.  See also: QoS DSCP marking rule support:
  https://bugs.launchpad.net/neutron/+bug/1468353

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498957/+subscriptions



[Yahoo-eng-team] [Bug 1257354] Re: Metering doesn't anymore respect the l3 agent binding

2015-10-14 Thread Ryan Moats
Removing the icehouse project and marking incomplete to start the
60-day cleanup timer; if still valid, please change the status and take
ownership.

** No longer affects: neutron/icehouse

** Changed in: neutron
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257354

Title:
  Metering doesn't anymore respect the l3 agent binding

Status in neutron:
  Incomplete

Bug description:
  Since the old L3 mixin has been moved into a service plugin, the
  metering service plugin no longer respects the l3 agent binding:
  instead of using the cast RPC method it uses the fanout_cast method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257354/+subscriptions



[Yahoo-eng-team] [Bug 1506054] [NEW] Can't retrieve hypervisor info - Nova Unexpected API error

2015-10-14 Thread Neil Jerram
Public bug reported:

This is a problem I'm seeing in a 3 node DevStack system, stacked today
(14th October 2015).  I click on Hypervisors in the System section of
the Horizon UI, to see hypervisor information.  Horizon shows an error
box "Unable to retrieve hypervisor information".

/var/log/apache2/horizon_error.log includes this:

2015-10-14 11:04:00.379206 REQ: curl -g -i 
'http://calico-vm18:8774/v2.1/3c8d3e6c106c44b6ad37140023b92df8/os-hypervisors/detail'
 -X GET -H "Accept: application/json" -H "User-Agent: python-novaclient" -H 
"X-Auth-Project-Id: 3c8d3e6c106c44b6ad37140023b92df8" -H "X-Auth-Token: 
{SHA1}db34b44f5be5800c12f48ab59251dc8277eb0b0c"
2015-10-14 11:04:00.420325 RESP: [500] {'content-length': '208', 
'x-compute-request-id': 'req-6881cfe9-5962-4105-bf46-2d164ef4451c', 'vary': 
'X-OpenStack-Nova-API-Version', 'connection': 'keep-alive', 
'x-openstack-nova-api-version': '2.1', 'date': 'Wed, 14 Oct 2015 11:04:00 GMT', 
'content-type': 'application/json; charset=UTF-8'}
2015-10-14 11:04:00.420358 RESP BODY: {"computeFault": {"message": "Unexpected 
API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.\\n", 
"code": 500}}
2015-10-14 11:04:00.420365
2015-10-14 11:04:00.420842 Recoverable error: Unexpected API Error. Please 
report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.

Otherwise the cluster appears to be working; for example, I've
successfully launched 10 cirros instances across the 3 available
compute nodes.

ubuntu@calico-vm18:/opt/stack/nova$ git log -1
commit 1d97b0e7308975cd4a912dda00df8726ba776fe7
Merge: 7200b6a 11e20dd
Author: Jenkins 
Date:   Wed Oct 14 08:50:59 2015 +

Merge "Ignore errorcode=4 when executing `cryptsetup remove`
command"

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506054

Title:
  Can't retrieve hypervisor info - Nova Unexpected API error

Status in OpenStack Compute (nova):
  New

Bug description:
  This is a problem I'm seeing in a 3 node DevStack system, stacked
  today (14th October 2015).  I click on Hypervisors in the System
  section of the Horizon UI, to see hypervisor information.  Horizon
  shows an error box "Unable to retrieve hypervisor information".

  /var/log/apache2/horizon_error.log includes this:

  2015-10-14 11:04:00.379206 REQ: curl -g -i 
'http://calico-vm18:8774/v2.1/3c8d3e6c106c44b6ad37140023b92df8/os-hypervisors/detail'
 -X GET -H "Accept: application/json" -H "User-Agent: python-novaclient" -H 
"X-Auth-Project-Id: 3c8d3e6c106c44b6ad37140023b92df8" -H "X-Auth-Token: 
{SHA1}db34b44f5be5800c12f48ab59251dc8277eb0b0c"
  2015-10-14 11:04:00.420325 RESP: [500] {'content-length': '208', 
'x-compute-request-id': 'req-6881cfe9-5962-4105-bf46-2d164ef4451c', 'vary': 
'X-OpenStack-Nova-API-Version', 'connection': 'keep-alive', 
'x-openstack-nova-api-version': '2.1', 'date': 'Wed, 14 Oct 2015 11:04:00 GMT', 
'content-type': 'application/json; charset=UTF-8'}
  2015-10-14 11:04:00.420358 RESP BODY: {"computeFault": {"message": 
"Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.\\n", "code": 500}}
  2015-10-14 11:04:00.420365
  2015-10-14 11:04:00.420842 Recoverable error: Unexpected API Error. Please 
report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.

  Otherwise the cluster appears to be working; for example, I've
  successfully launched 10 cirros instances across the 3 available
  compute nodes.

  ubuntu@calico-vm18:/opt/stack/nova$ git log -1
  commit 1d97b0e7308975cd4a912dda00df8726ba776fe7
  Merge: 7200b6a 11e20dd
  Author: Jenkins 
  Date:   Wed Oct 14 08:50:59 2015 +

  Merge "Ignore errorcode=4 when executing `cryptsetup remove`
  command"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506054/+subscriptions



[Yahoo-eng-team] [Bug 1506049] [NEW] Typo in "help" in the python-glanceclient

2015-10-14 Thread dshakhray
Public bug reported:

ENVIRONMENT: devstack, python-glanceclient (master, 14.11.2015)
OS_IMAGE_API_VERSION=2

STEPS TO REPRODUCE:
execute the command: glance help

displays help http://paste.openstack.org/show/476242/

Typo `glance --os-image-api-version 1 helpNone` in last line

** Affects: python-glanceclient
 Importance: Undecided
 Assignee: dshakhray (dshakhray)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => dshakhray (dshakhray)

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1506049

Title:
  Typo in "help" in the python-glanceclient

Status in python-glanceclient:
  New

Bug description:
  ENVIRONMENT: devstack, python-glanceclient (master, 14.11.2015)
  OS_IMAGE_API_VERSION=2

  STEPS TO REPRODUCE:
  execute the command: glance help

  displays help http://paste.openstack.org/show/476242/

  Typo: the last line reads `glance --os-image-api-version 1 helpNone` (a
  stray "None" is run together with "help")

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1506049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457517] Re: Unable to boot from volume when flavor disk too small

2015-10-14 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 1:2015.1.1-0ubuntu2

---
nova (1:2015.1.1-0ubuntu2) vivid; urgency=medium

  [ Corey Bryant ]
  * d/rules: Prevent dh_python2 from guessing dependencies.

  [ Liang Chen ]
  * d/p/not-check-disk-size.patch: Fix booting from volume error
when flavor disk too small (LP: #1457517)

 -- Corey Bryant   Thu, 13 Aug 2015 15:13:43
-0400

** Changed in: nova (Ubuntu Vivid)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457517

Title:
  Unable to boot from volume when flavor disk too small

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Vivid:
  Fix Released

Bug description:
  [Impact]

   * Without the backport, booting from a volume requires a flavor disk
  size larger than the volume size, which is wrong. This patch skips the
  flavor disk size check when booting from a volume.

  [Test Case]

   * 1. Create a bootable volume.
 2. Boot from this bootable volume with a flavor that has a disk size
smaller than the volume size.
 3. An error should be reported complaining the disk size is too small.
 4. Apply this patch.
 5. Boot from that bootable volume with the same undersized flavor again.
 6. The boot should succeed.

  [Regression Potential]

   * none

  
  Version: 1:2015.1.0-0ubuntu1~cloud0 on Ubuntu 14.04

  I attempt to boot an instance from a volume:

  nova boot --nic net-id=[NET ID] --flavor v.512mb --block-device
  source=volume,dest=volume,id=[VOLUME
  ID],bus=virtio,device=vda,bootindex=0,shutdown=preserve vm

  This results in nova-api raising a FlavorDiskTooSmall exception in the
  "_check_requested_image" function in compute/api.py. However,
  according to [1], the root disk limit should not apply to volumes.

  [1] http://docs.openstack.org/admin-guide-cloud/content/customize-
  flavors.html

  Log (first line is debug output I added showing that it's looking at
  the image that the volume was created from):

  2015-05-21 10:28:00.586 25835 INFO nova.compute.api 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] image: {'min_disk': 0, 'status': 
'active', 'min_ram': 0, 'properties': {u'container_format': u'bare', 
u'min_ram': u'0', u'disk_format': u'qcow2', u'image_name': u'Ubuntu 14.04 
64-bit', u'image_id': u'cf0dffef-30ef-4032-add0-516e88048d85', 
u'libvirt_cpu_mode': u'host-passthrough', u'checksum': 
u'76a965427d2866f006ddd2aac66ed5b9', u'min_disk': u'0', u'size': u'255524864'}, 
'size': 21474836480}
  2015-05-21 10:28:00.587 25835 INFO nova.api.openstack.wsgi 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] HTTP exception thrown: Flavor's disk is 
too small for requested image.

  Temporary solution: I have a special flavor for volume-backed instances, so I 
just set the root disk on those to 0, but this doesn't work if volumes are used 
with other flavors.
  Reproduce: create flavor with 1 GB root disk size, then try to boot an 
instance from a volume created from an image that is larger than 1 GB.
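  The failing check is easy to sketch. Below is a minimal, self-contained
illustration of the intended behavior (the function and exception names are
simplified stand-ins, not the actual nova code paths): the flavor's root-disk
limit is enforced only for image-backed boots and skipped when booting from a
volume.

```python
# Simplified stand-in for nova's _check_requested_image logic; the names
# here are illustrative, not the real nova API.

class FlavorDiskTooSmall(Exception):
    """Raised when the image will not fit on the flavor's root disk."""

def check_requested_image(image_size_gb, flavor_root_gb, boot_from_volume):
    # With the fix, a volume-backed boot skips the root-disk check entirely:
    # the disk lives on the volume, so the flavor's root_gb does not apply.
    if boot_from_volume:
        return
    # root_gb == 0 conventionally means "unlimited" and is also skipped.
    if flavor_root_gb and image_size_gb > flavor_root_gb:
        raise FlavorDiskTooSmall()

# A 20 GB image on a 1 GB flavor: rejected when image-backed,
# allowed when booting from a volume.
check_requested_image(20, 1, boot_from_volume=True)  # no exception raised
```

  This also mirrors the workaround above: setting root disk to 0 on a dedicated
flavor effectively disables the check for everything, while the backport
disables it only for volume-backed boots.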

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506076] [NEW] Allow connection tracking to be disabled per-port

2015-10-14 Thread Calum Loudon
Public bug reported:

This RFE is being raised in the context of this use case
https://review.openstack.org/#/c/176301/ from the TelcoWG.

OpenStack implements levels of per-VM security protection (security
groups, anti-spoofing rules).  If you want to deploy a trusted VM which
itself is providing network security functions, as with the above use
case, then it is often necessary to disable some of the native OpenStack
protection so as not to interfere with the protection offered by the VM
or use excessive host resources.

Neutron already allows you to disable security groups on a per-port
basis.  However, the Linux kernel will still perform connection tracking
on those ports.  With default Linux config, VMs will be severely scale
limited without specific host configuration of connection tracking
limits - for example, a Session Border Controller VM may be capable of
handling millions of concurrent TCP connections, but a default host
won't support anything like that.  This bug is therefore an RFE
requesting that disabling the security group function for a port also
disable kernel connection tracking for the IP addresses associated with
that port.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506076

Title:
  Allow connection tracking to be disabled per-port

Status in neutron:
  New

Bug description:
  This RFE is being raised in the context of this use case
  https://review.openstack.org/#/c/176301/ from the TelcoWG.

  OpenStack implements levels of per-VM security protection (security
  groups, anti-spoofing rules).  If you want to deploy a trusted VM
  which itself is providing network security functions, as with the
  above use case, then it is often necessary to disable some of the
  native OpenStack protection so as not to interfere with the protection
  offered by the VM or use excessive host resources.

  Neutron already allows you to disable security groups on a per-port
  basis.  However, the Linux kernel will still perform connection
  tracking on those ports.  With default Linux config, VMs will be
  severely scale limited without specific host configuration of
  connection tracking limits - for example, a Session Border Controller
  VM may be capable of handling millions of concurrent TCP connections,
  but a default host won't support anything like that.  This bug is
  therefore an RFE requesting that disabling the security group function
  for a port also disable kernel connection tracking for the IP
  addresses associated with that port.
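  At the kernel level, the kind of exemption being requested can be
  sketched with raw-table rules (illustrative only: the address is made
  up, and the actual chain placement and rule form are open questions
  for this RFE, not a proposed Neutron implementation):

```shell
# Illustrative only -- 10.0.0.5 stands in for a port's fixed IP.
# Rules in the raw table run before conntrack, so the CT target's
# --notrack flag exempts this traffic from connection tracking
# (and thus from the conntrack table limits).
iptables -t raw -A PREROUTING -d 10.0.0.5 -j CT --notrack
iptables -t raw -A OUTPUT     -s 10.0.0.5 -j CT --notrack
```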

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506234] [NEW] Ironic virt driver in Nova calls destroy unnecessarily if spawn fails

2015-10-14 Thread Shraddha Pandhe
Public bug reported:

To give some context, calling destroy [5] was added as a bug fix [1]. It
was required back then because Nova compute was not calling destroy on
catching the exception [2]. But now, Nova compute catches all exceptions
that happen during spawn and calls destroy (_shutdown_instance) [3].

Since Nova compute already takes care of destroying the instance before
rescheduling, we shouldn't have to call destroy separately in the
driver. I confirmed in the logs that destroy gets called twice if any
failure occurs during _wait_for_active() [4] or a timeout happens [5].


[1] https://review.openstack.org/#/c/99519/
[2] 
https://github.com/openstack/nova/blob/2014.1.5/nova/compute/manager.py#L2116-L2118
[3] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
[4] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L431-L462
[5] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L823-L836

** Affects: nova
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New


** Tags: ironic

** Project changed: nova-hyper => nova

** Changed in: nova
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506234

Title:
  Ironic virt driver in Nova calls destroy unnecessarily if spawn fails

Status in OpenStack Compute (nova):
  New

Bug description:
  To give some context, calling destroy [5] was added as a bug fix [1].
  It was required back then because Nova compute was not calling destroy
  on catching the exception [2]. But now, Nova compute catches all
  exceptions that happen during spawn and calls destroy
  (_shutdown_instance) [3].

  Since Nova compute already takes care of destroying the instance
  before rescheduling, we shouldn't have to call destroy separately in
  the driver. I confirmed in the logs that destroy gets called twice if
  any failure occurs during _wait_for_active() [4] or a timeout happens
  [5].


  [1] https://review.openstack.org/#/c/99519/
  [2] 
https://github.com/openstack/nova/blob/2014.1.5/nova/compute/manager.py#L2116-L2118
  [3] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
  [4] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L431-L462
  [5] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L823-L836
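  The double cleanup is easy to reproduce in miniature. Below is a toy
model of the flow (class and function names are invented for
illustration; the real code paths are in the links above): the manager's
exception handler already destroys the instance, so a driver that also
calls destroy in its own except block tears the instance down twice.

```python
# Toy model of the double-destroy; names are illustrative, not nova's.

class FakeIronicDriver:
    def __init__(self, destroy_in_driver):
        self.destroy_calls = 0
        self.destroy_in_driver = destroy_in_driver  # True = current behavior

    def spawn(self, instance):
        try:
            # Stand-in for _wait_for_active() failing or timing out.
            raise RuntimeError("node never became active")
        except RuntimeError:
            if self.destroy_in_driver:
                self.destroy(instance)  # driver-side cleanup (the redundant one)
            raise

    def destroy(self, instance):
        self.destroy_calls += 1

def build_and_run_instance(driver, instance):
    """Mimics the compute manager: any spawn failure triggers cleanup."""
    try:
        driver.spawn(instance)
    except Exception:
        driver.destroy(instance)  # manager-side _shutdown_instance

current = FakeIronicDriver(destroy_in_driver=True)
build_and_run_instance(current, "uuid-1")
# current.destroy_calls is now 2: destroy ran in both the driver and manager.
```

  Dropping the driver-side call (destroy_in_driver=False above) leaves
  exactly one destroy, from the manager.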

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503642] Re: Firewall-Update command help content does not display "admin_state_up" argument

2015-10-14 Thread James Arendt
The CLI help is in the python-neutronclient, not on the neutron server
side, so moved bug there.  Fix is in review
https://review.openstack.org/#/c/234916/


** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
Milestone: mitaka-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503642

Title:
  Firewall-Update command help content does not display "admin_state_up"
  argument

Status in python-neutronclient:
  In Progress

Bug description:
  Get the help contents of the firewall-update command

  stack@stevens-creek:~/firewall$ neutron help firewall-update
  usage: neutron firewall-update [-h] [--request-format {json,xml}]
 [--policy POLICY]
 [--router ROUTER | --no-routers]
 FIREWALL

  Update a given firewall.

  positional arguments:
FIREWALL  ID or name of firewall to update.

  optional arguments:
-h, --helpshow this help message and exit
--request-format {json,xml}
  The XML or JSON request format.
--policy POLICY   Firewall policy name or ID.
--router ROUTER   Firewall associated router names or IDs (requires
  FWaaS router insertion extension, this option can be
  repeated)
--no-routers  Associate no routers with the firewall (requires FWaaS
  router insertion extension)
  stack@stevens-creek:~/firewall$

  Issue :
  The --admin_state_up argument is not displayed in the help contents

  Expected:

  The help contents should include the --admin_state_up argument, since
  we are able to bring the firewall UP and DOWN by updating
  --admin_state_up to True/False
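
  For reference, the invocation this would enable looks like the
  following (the flag spelling is an assumption based on the usual
  neutron CLI convention of hyphenated options; it is not taken from the
  fix under review):

```shell
# Assumed usage once the option is exposed in the parser and help output:
neutron firewall-update myfirewall --admin-state-up False   # take firewall DOWN
neutron firewall-update myfirewall --admin-state-up True    # bring firewall UP
```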

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1503642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp