[Yahoo-eng-team] [Bug 1538411] Re: exception.AutoDiskConfigDisabledByImage is expected 2 times in _resize

2016-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272913
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b08e6e7a83565a13b0d8747d21d4e0c60eaebcb3
Submitter: Jenkins
Branch:master

commit b08e6e7a83565a13b0d8747d21d4e0c60eaebcb3
Author: Eli Qiao 
Date:   Wed Jan 27 14:15:39 2016 +0800

API: Rearrange HTTPBadRequest raising in _resize

This patch fixes the AutoDiskConfigDisabledByImage exception being
expected in two separate places and rearranges the HTTPBadRequest
raises to group them together for easier reading.

P.S. No logic is touched, so no unit tests need to be added or changed.

Closes-Bug: #1538411
Change-Id: I2088322681edd4a3190047560ea99d31c5f3a551


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538411

Title:
  exception.AutoDiskConfigDisabledByImage is expected 2 times in _resize

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  exception.AutoDiskConfigDisabledByImage is expected 2 times in _resize

  Please check:
  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L912
  and 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L928
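
  For illustration, a minimal sketch of the grouping the patch applies.
  The helper names below are hypothetical stand-ins for the two call
  sites in _resize; only the shape of the change is shown:

      # Before: two separate try/except sites each expected
      # AutoDiskConfigDisabledByImage. After: the checks are grouped so
      # one handler translates the error to HTTPBadRequest.
      try:
          validate_disk_config_image(context, image)        # hypothetical
          validate_disk_config_request(instance, disk_cfg)  # hypothetical
      except exception.AutoDiskConfigDisabledByImage as error:
          raise exc.HTTPBadRequest(explanation=error.format_message())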

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540779] [NEW] DVR router should not allow manual removal from an agent in 'dvr' mode

2016-02-01 Thread ZongKai LI
Public bug reported:

Per bp/improve-dvr-l3-agent-binding, the command "neutron
l3-agent-list-hosting-router ROUTER" no longer shows bindings for DVR routers
on agents in 'dvr' mode.
It's good to hide the implicit *binding* between a DVR router and an agent in
'dvr' mode, because DVR routers should come and go as DVR-serviced ports come
and go on a host, not be managed manually.
But it is still possible to run "neutron l3-agent-router-remove AGENT ROUTER"
to remove a DVR router from an agent in 'dvr' mode. This deletes the DVR
router's namespace and breaks L3 networking on that node.
We should add a check that forbids removing a router from an agent in 'dvr'
mode. A sketch of such a guard is shown below.
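
A minimal sketch of such a guard (the method shape mirrors the l3 agent
scheduler; the exception type is hypothetical, not actual neutron code):

    class DVRRouterNotRemovable(Exception):
        """Hypothetical error for manual removal of an implicit binding."""

    def remove_router_from_l3_agent(self, context, agent_id, router_id):
        # L3 agents report their mode ('legacy', 'dvr', 'dvr_snat') in
        # their configurations dict; refuse manual removal for 'dvr'.
        agent = self._get_agent(context, agent_id)
        if agent['configurations'].get('agent_mode') == 'dvr':
            raise DVRRouterNotRemovable()
        # ... fall through to the existing unbinding logic ...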

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540779

Title:
  DVR router should not allow manual removal from an agent in 'dvr'
  mode

Status in neutron:
  New

Bug description:
  Per bp/improve-dvr-l3-agent-binding, the command "neutron
  l3-agent-list-hosting-router ROUTER" no longer shows bindings for DVR
  routers on agents in 'dvr' mode.
  It's good to hide the implicit *binding* between a DVR router and an agent
  in 'dvr' mode, because DVR routers should come and go as DVR-serviced ports
  come and go on a host, not be managed manually.
  But it is still possible to run "neutron l3-agent-router-remove AGENT
  ROUTER" to remove a DVR router from an agent in 'dvr' mode. This deletes
  the DVR router's namespace and breaks L3 networking on that node.
  We should add a check that forbids removing a router from an agent in
  'dvr' mode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540764] [NEW] build instance fails with sriov port on child cell

2016-02-01 Thread likun
Public bug reported:

Version:
Kilo
Liberty

When trying to build an instance with an SR-IOV port on my child cell, I used
the following command:
nova boot --image 6df3b3d4-3c9e-4772-9f18-b6f42c6a9c77 --flavor 3 --nic 
port-id=61c20d21-43fe-487d-9296-893172cb725f --hint 
target_cell='api_cell!child_cell' sr-vxlan_vm30

But spawning failed on the child_cell compute node. Here is the error
information from nova-compute.log:

2016-01-14 11:00:17.246 2908 ERROR nova.compute.manager [req-9d7fafab-ea2a-43c3-a201-4234db29c572 8a9704e00c44452590c6c8d014ed028a 452b30922b3f42b59743158f695264c9 - - -] [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c] Instance failed to spawn
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c] Traceback (most recent call last):
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2660, in _build_resources
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     yield resources
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2532, in _build_and_run_instance
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     block_device_info=block_device_info)
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2879, in spawn
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     write_to_disk=True)
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4844, in _get_guest_xml
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     memory_backup_file=memory_backup_file)
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4675, in _get_guest_config
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     network_info, virt_type)
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4818, in _get_guest_vif_config
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     instance, vif, image_meta, flavor, virt_type)
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 376, in get_config
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     inst_type, virt_type)
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 292, in get_config_hw_veb
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c]     conf, net_type, profile["pci_slot"],
2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: b91305de-5501-4083-9a0e-0f4bfc942f5c] KeyError: 'pci_slot'


In function _create_instances_here() in nova/cells/scheduler.py of Kilo
and Liberty, I found this:

# FIXME(danms): The instance was brutally serialized before being
# sent over RPC to us. Thus, the pci_requests value wasn't really
# sent in a useful form. Since it was getting ignored for cells
# before it was part of the Instance, skip it now until cells RPC
# is sending proper instance objects.
instance_values.pop('pci_requests', None)

The "pci_requests" is discarded by cell scheduler, because of unuseful
instance form.This is the causation of above error.
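
A simplified sketch of the failure site in get_config_hw_veb (the explicit
guard is illustrative, not nova's actual code):

    from nova import exception

    profile = vif.get('profile') or {}
    # With pci_requests dropped by the cell scheduler, no PCI device is
    # allocated during binding, so the profile lacks 'pci_slot' and the
    # plain dict lookup raises the bare KeyError seen in the traceback.
    if 'pci_slot' not in profile:
        raise exception.NovaException(
            'SR-IOV port %s has no pci_slot in its binding profile'
            % vif['id'])
    pci_slot = profile['pci_slot']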

** Affects: nova
 Importance: Undecided
 Assignee: likun (li-kun130)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => likun (li-kun130)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540764

Title:
  build instance fails with sriov port on child cell

Status in OpenStack Compute (nova):
  New

Bug description:
  Version:
  Kilo
  Liberty

  When trying to build an instance with an SR-IOV port on my child cell, I
  used the following command:
  nova boot --image 6df3b3d4-3c9e-4772-9f18-b6f42c6a9c77 --flavo

[Yahoo-eng-team] [Bug 1540763] [NEW] oslo.db emits deprecation warnings

2016-02-01 Thread Steve Martinelli
Public bug reported:

our py34 tests are littered with deprecation warnings from oslo.db:

./opt/stack/keystone/.tox/py34/lib/python3.4/site-
packages/oslo_db/sqlalchemy/enginefacade.py:1056:
OsloDBDeprecationWarning: EngineFacade is deprecated; please use
oslo_db.sqlalchemy.enginefacade

here's a logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22EngineFacade%20is%20deprecated%5C%22
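
For reference, a minimal sketch of the replacement API named in the warning;
the context class and statements are illustrative, not keystone's actual code:

    from oslo_db.sqlalchemy import enginefacade

    @enginefacade.transaction_context_provider
    class Context(object):
        """Carries transaction state between reader/writer blocks."""

    enginefacade.configure(connection='sqlite://')
    ctx = Context()

    with enginefacade.writer.using(ctx) as session:
        session.execute('CREATE TABLE t (id INTEGER)')
    with enginefacade.reader.using(ctx) as session:
        rows = session.execute('SELECT id FROM t').fetchall()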

** Affects: keystone
 Importance: Undecided
 Status: New

** Changed in: keystone
Milestone: None => mitaka-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1540763

Title:
  oslo.db emits deprecation warnings

Status in OpenStack Identity (keystone):
  New

Bug description:
  our py34 tests are littered with deprecation warnings from oslo.db:

  ./opt/stack/keystone/.tox/py34/lib/python3.4/site-
  packages/oslo_db/sqlalchemy/enginefacade.py:1056:
  OsloDBDeprecationWarning: EngineFacade is deprecated; please use
  oslo_db.sqlalchemy.enginefacade

  here's a logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22EngineFacade%20is%20deprecated%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1540763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540755] [NEW] The edit image should be changed to update image

2016-02-01 Thread space
Public bug reported:

 The "Edit Image" action should be renamed to "Update Image".

** Affects: horizon
 Importance: Undecided
 Assignee: space (fengzhr)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => space (fengzhr)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540755

Title:
  The edit image should be changed to update image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
   The "Edit Image" action should be renamed to "Update Image".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540748] [NEW] ml2: port_update and port_delete should not use fanout notify

2016-02-01 Thread shihanzhang
Public bug reported:

Now, for the ml2 plugin, neutron-server uses fanout RPC messages for
port_update and port_delete; the code is shown below:

    def port_update(self, context, port, network_type, segmentation_id,
                    physical_network):
        cctxt = self.client.prepare(topic=self.topic_port_update, fanout=True)
        cctxt.cast(context, 'port_update', port=port,
                   network_type=network_type,
                   segmentation_id=segmentation_id,
                   physical_network=physical_network)

    def port_delete(self, context, port_id):
        cctxt = self.client.prepare(topic=self.topic_port_delete, fanout=True)
        cctxt.cast(context, 'port_delete', port_id=port_id)

I think neutron-server should send the RPC message directly to the port's
binding host; this would offload work from AMQP. A sketch of such a targeted
cast is shown below.
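
A sketch of a targeted cast, assuming oslo.messaging semantics where
prepare(server=...) narrows the cast to the one consumer on that host, and
'binding:host_id' being the ML2 port attribute that carries the bound host:

    def port_update(self, context, port, network_type, segmentation_id,
                    physical_network):
        # Cast only to the agent on the port's bound host, not fanout.
        host = port.get('binding:host_id')
        cctxt = self.client.prepare(topic=self.topic_port_update,
                                    server=host)
        cctxt.cast(context, 'port_update', port=port,
                   network_type=network_type,
                   segmentation_id=segmentation_id,
                   physical_network=physical_network)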

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New


** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540748

Title:
  ml2: port_update and port_delete should not use fanout notify

Status in neutron:
  New

Bug description:
  Now, for the ml2 plugin, neutron-server uses fanout RPC messages for
  port_update and port_delete; the code is shown below:

      def port_update(self, context, port, network_type, segmentation_id,
                      physical_network):
          cctxt = self.client.prepare(topic=self.topic_port_update,
                                      fanout=True)
          cctxt.cast(context, 'port_update', port=port,
                     network_type=network_type,
                     segmentation_id=segmentation_id,
                     physical_network=physical_network)

      def port_delete(self, context, port_id):
          cctxt = self.client.prepare(topic=self.topic_port_delete,
                                      fanout=True)
          cctxt.cast(context, 'port_delete', port_id=port_id)

  I think neutron-server should send the RPC message directly to the port's
  binding host; this would offload work from AMQP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540745] [NEW] Style: Material: Better Inverse Dropdown Header Color

2016-02-01 Thread Diana Whitten
Public bug reported:

Right now, Bootswatch's 'paper' sets the color of the .dropdown-header
to be a light gray regardless of whether it's in an inverse navbar.  This is
not ideal; it should inherit the inverse navbar's color.

https://i.imgur.com/mfYHTMf.png

** Affects: horizon
 Importance: Wishlist
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

** Changed in: horizon
Milestone: None => mitaka-3

** Changed in: horizon
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540745

Title:
  Style: Material: Better Inverse Dropdown Header Color

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Right now, Bootswatch's 'paper' sets the color of the .dropdown-header
  to be a light gray regardless of whether it's in an inverse navbar.  This
  is not ideal; it should inherit the inverse navbar's color.

  https://i.imgur.com/mfYHTMf.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540728] [NEW] fix hiding of domain and region fields on login screen

2016-02-01 Thread Dan Nguyen
Public bug reported:

When Horizon is configured with WebSSO enabled, the domain and region
fields should be hidden when the selected auth_type is NOT "Keystone
Credentials".

Steps to Reproduce
--
1. Edit the local_settings.py to have the following

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

WEBSSO_ENABLED = True
WEBSSO_CHOICES = (
 ("credentials", "Keystone Credentials"),
 ("saml2", "ADFS Credentials"),
 )

* In addition to Keystone V3 settings.

2. Login into Horizon
3. Select 'Keystone Credentials' from the dropdown control.
4. Observe that the domain field is present along with the name and password 
fields.
5. Select 'ADFS Credentials'
6. Observe that the domain field is present.

Expected Behavior

Selecting 'ADFS Credentials' or anything other than 'Keystone Credentials' 
should hide all the input fields including domain and region.

** Affects: horizon
 Importance: Undecided
 Assignee: Dan Nguyen (daniel-a-nguyen)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Dan Nguyen (daniel-a-nguyen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540728

Title:
  fix hiding of domain and region fields on login screen

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When Horizon is configured with WebSSO enabled, the domain and region
  fields should be hidden when the selected auth_type is NOT
  "Keystone Credentials".

  Steps to Reproduce
  --
  1. Edit the local_settings.py to have the following

  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

  WEBSSO_ENABLED = True
  WEBSSO_CHOICES = (
   ("credentials", "Keystone Credentials"),
   ("saml2", "ADFS Credentials"),
   )

  * In addition to Keystone V3 settings.

  2. Login into Horizon
  3. Select 'Keystone Credentials' from the dropdown control.
  4. Observe that the domain field is present along with the name and password 
fields.
  5. Select 'ADFS Credentials'
  6. Observe that the domain field is present.

  Expected Behavior
  
  Selecting 'ADFS Credentials' or anything other than 'Keystone Credentials' 
should hide all the input fields including domain and region.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1111192] Re: quantumclient unit test: Declare less variables when possible

2016-02-01 Thread Launchpad Bug Tracker
[Expired for python-neutronclient because there has been no activity for
60 days.]

** Changed in: python-neutronclient
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1111192

Title:
  quantumclient unit test: Declare less variables when possible

Status in neutron:
  Won't Fix
Status in python-neutronclient:
  Expired

Bug description:
  In quantumclient/tests/unit/lb/test_cli20_pool.py (as an example) we have the following code:

      def test_create_pool_with_mandatory_params(self):
          """lb-pool-create with mandatory params only"""
          resource = 'pool'
          cmd = pool.CreatePool(test_cli20.MyApp(sys.stdout), None)
          name = 'my-name'
          lb_method = 'round-robin'
          protocol = 'http'
          subnet_id = 'subnet-id'
          tenant_id = 'my-tenant'
          my_id = 'my-id'
          args = ['--lb-method', lb_method,
                  '--name', name,
                  '--protocol', protocol,
                  '--subnet-id', subnet_id,
                  '--tenant-id', tenant_id]
          position_names = ['admin_state_up', 'lb_method', 'name',
                            'protocol', 'subnet_id', 'tenant_id']
          position_values = [True, lb_method, name,
                             protocol, subnet_id, tenant_id]
          self._test_create_resource(resource, cmd, name, my_id, args,
                                     position_names, position_values)

   -- This is a pattern in the load balancing tests in quantumclient. --
  The proposal below covers the 'simple' cases and might need some
  modifications in other cases.


  We are able to implement this code in a more "economic" way. The main
  point is to define 'args' as a dict and derive the rest of the data we
  need from it.

  A working example below:

      def create_args(args):
          position_names = args.keys()
          position_values = args.values()
          args_list = []
          for k, v in args.iteritems():
              args_list.append("--" + k)
              args_list.append(v)
          return position_names, position_values, args_list

      def main():
          args = {}
          args["name"] = "my-name"
          args["description"] = "my-description"
          args["address"] = "10.0.0.2"
          args["admin_state"] = False
          args["connection_limit"] = 1000
          args["port"] = 80
          args["subnet_id"] = "subnet_id"
          args["tenant_id"] = "my-tenant"
          position_names, position_values, args_list = create_args(args)
          print position_names
          print position_values
          print args_list

      if __name__ == "__main__":
          main()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1111192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467471] Re: RFE - Support Distributed SNAT with DVR

2016-02-01 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467471

Title:
  RFE - Support Distributed SNAT with DVR

Status in neutron:
  Expired

Bug description:
  In the Juno release, DVR was implemented in Neutron, so a virtual
  router can run on each compute node.
  However, SNAT still runs on the network node and is not distributed yet.

  Our proposal is to distribute SNAT to each compute node.
  With distributed SNAT, packets would no longer need to traverse the
  network node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493512] Re: n-cpu misreports shared storage when attempting to block migrate

2016-02-01 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493512

Title:
  n-cpu misreports shared storage when attempting to block migrate

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Nova is not allowing any block-migrate transactions to occur due to
  false positives when testing for shared storage. This occurs even
  with a simple two-node setup.

  
  --Original Entry--

  If using an environment where there are both nodes that share the nova
  instances directory and nodes that do NOT share the directory, any
  attempt to block-migrate to non-shared nodes from the controller node
  will fail.

  Example:
  1. Create an environment with one controller node and two cpu nodes.
  2. Have the controller node act as an NFS server by adding the instances 
directory to /etc/exports/
  3. Add this new share to the fstab of just one of the CPU nodes.
  4. Mount the new share after stacking the appropriate CPU node.
  5. Stack the unshared CPU node.

  The next step applied to my scenario, but may not be necessary.
  6. Create a bootable volume in Cinder.
  7. Launch an instance from that volume, using a flavor that does NOT create a 
local disk. (This landed on the controller node in my scenario)
  8. Once the instance is running, attempt to block-migrate to the unshared 
node. (nova live-migration --block-migrate  )

  In the past it was possible to block-migrate to and from the unshared
  node, then migrate without block between the shared nodes. Now in
  Liberty (master) the following error appears:

  2015-09-08 12:09:17.052 ERROR oslo_messaging.rpc.dispatcher [req-2f320d08-ce60-4f5c-bfa3-c044246a3a18 admin admin] Exception during message handling: cld5b12 is not on local storage: Block migration can not be used with shared storage.
  Traceback (most recent call last):

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
      executor_callback))

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
      executor_callback)

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
      result = func(ctxt, **new_args)

    File "/opt/stack/nova/nova/exception.py", line 89, in wrapped
      payload)

    File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
      six.reraise(self.type_, self.value, self.tb)

    File "/opt/stack/nova/nova/exception.py", line 72, in wrapped
      return f(self, context, *args, **kw)

    File "/opt/stack/nova/nova/compute/manager.py", line 399, in decorated_function
      return function(self, context, *args, **kwargs)

    File "/opt/stack/nova/nova/compute/manager.py", line 377, in decorated_function
      kwargs['instance'], e, sys.exc_info())

    File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
      six.reraise(self.type_, self.value, self.tb)

    File "/opt/stack/nova/nova/compute/manager.py", line 365, in decorated_function
      return function(self, context, *args, **kwargs)

    File "/opt/stack/nova/nova/compute/manager.py", line 4929, in check_can_live_migrate_source
      block_device_info)

    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5169, in check_can_live_migrate_source
      raise exception.InvalidLocalStorage(reason=reason, path=source)

  InvalidLocalStorage: cld5b12 is not on local storage: Block migration
  can not be used with shared storage.

  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 89, in wrapped
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher     payload)
  2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   

[Yahoo-eng-team] [Bug 1497396] Re: Network creation and Router Creation times degrade with large number of instances

2016-02-01 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497396

Title:
  Network creation and Router Creation times degrade with large number
  of instances

Status in neutron:
  Expired

Bug description:
  We are trying to analyze why creation of routers and networks degrades
  when there is a large number of instances of these. Running cprofile
  on the L3 agent indicated that ensure_namespace function seems to
  degrade when a large number of namespaces are present.

  Looking through the code, all the namespaces are listed with the "ip
  netns list" command and then compared against the one that is of
  interest. This scales badly, since the number of comparisons grows
  with the number of instances.

  An alternate way to achieve the same result could be to check for the
  desired namespace ("ls /var/run/netns/qrouter-") or to run a
  command (maybe the date commands?)  and check the response.

  Either method described above would have a constant time for
  execution, rather than the linear time as seen presently.
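
  A minimal sketch of the constant-time check suggested above, assuming
  the default iproute2 namespace path /var/run/netns:

      import os

      def namespace_exists(ns_name):
          # Look up the single namespace entry directly instead of listing
          # every namespace with "ip netns list" and scanning the output.
          return os.path.exists(os.path.join('/var/run/netns', ns_name))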

  Thanks,
  -Uday

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513144] Re: [RFE] Allow admin to mark agents down

2016-02-01 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513144

Title:
  [RFE] Allow admin to mark agents down

Status in neutron:
  Expired

Bug description:
  Cloud administrators have monitoring systems externally placed
  watching different types of resources of their cloud infrastructures.
  A cloud infrastructure is comprehended not exclusively by an OpenStack
  instance but also other components not managed by and possibly not
  visible to OpenStack such as SDN controller, physical network
  elements, etc.

  External systems may detect a fault on one or more infrastructure
  resources that may subsequently affect services provided by OpenStack.
  From a network perspective, an example of a fault can be the crashing
  of openvswitch on a compute node.

  When using the reference implementation (ovs + neutron-l2-agent),
  neutron-l2-agent will keep reporting its state to the Neutron server
  as alive (there's a heartbeat; the service is up), although there's an
  internal error caused by the virtual bridge (br-int) being
  unreachable. By means of monitoring tools external to OpenStack
  watching openvswitch, the administrator knows there's something wrong,
  and as a fault management action he may want to explicitly set the
  agent state down.

  Such action requires a new API exposed by Neutron allowing admins to
  set (true/false) the aliveness state of Neutron agents.

  This feature request goes in line with the work proposed to Nova [1]
  and implemented in Liberty. The same is also being currently proposed
  to Cinder [2]

  [1] https://blueprints.launchpad.net/nova/+spec/mark-host-down
  [2] https://blueprints.launchpad.net/cinder/+spec/mark-services-down
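
  For reference, the Nova API from [1] (compute API microversion 2.11)
  lets an admin force a service down; a Neutron analogue for agents
  might take a similar shape. The endpoint, host, and token below are
  placeholders:

      import requests

      TOKEN = '...'  # placeholder auth token
      resp = requests.put(
          'http://controller:8774/v2.1/os-services/force-down',
          headers={'X-Auth-Token': TOKEN,
                   'X-OpenStack-Nova-API-Version': '2.11'},
          json={'binary': 'nova-compute', 'host': 'compute-1',
                'forced_down': True})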

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498522] Re: Some VPN services stay in "pending" status after the vpn connection status is "active"

2016-02-01 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498522

Title:
  Some VPN services stay in "pending" status after the vpn connection
  status is "active"

Status in neutron:
  Expired

Bug description:
  Steps:
  1. Controller + 2 compute setup; VPN running on the compute nodes.
  2. Create tenant/network/subnet/router/vpn settings.
  3. Create multiple VPN sessions and corresponding VPN connections. Checking
  the dashboard, some VPN services remain in "pending" status even after
  their connections are "active".

  Juno version

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540719] [NEW] Integration tests browser maximise makes working on tests painful

2016-02-01 Thread Richard Jones
Public bug reported:

Currently the webdriver for the integration test suite maximises the
browser, making it difficult to work with other windows during a test
run. We should investigate whether the maximisation is even necessary,
at least, with a view to turning it off permanently.
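
A sketch of making the maximisation opt-in; the environment variable name
is hypothetical, not an existing horizon setting:

    import os
    from selenium import webdriver

    driver = webdriver.Firefox()
    if os.environ.get('INTEGRATION_TESTS_MAXIMIZE', '0') == '1':
        # Only take over the screen when explicitly requested.
        driver.maximize_window()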

** Affects: horizon
 Importance: Undecided
 Assignee: Richard Jones (r1chardj0n3s)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540719

Title:
  Integration tests browser maximise makes working on tests painful

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently the webdriver for the integration test suite maximises the
  browser, making it difficult to work with other windows during a test
  run. We should investigate whether the maximisation is even necessary,
  at least, with a view to turning it off permanently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540254] Re: "# flake8: noqa" is used incorrectly

2016-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274548
Committed: 
https://git.openstack.org/cgit/openstack/python-heatclient/commit/?id=3298ad98c88309c0aab972284b2a403573224c94
Submitter: Jenkins
Branch:master

commit 3298ad98c88309c0aab972284b2a403573224c94
Author: Bo Wang 
Date:   Mon Feb 1 16:32:12 2016 +0800

Remove incorrectly used "# flake8: noqa"

"# flake8: noqa" option disables all checks for the whole file.
To disable one line we should use "# noqa".
Remove unused "# flake8: noqa" and fix hidden hacking errors.

Change-Id: I624e96784ae7a22b5a5766f38bdf1fb8ade2d0a2
Closes-Bug: #1540254


** Changed in: python-heatclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540254

Title:
  "#flake8: noqa" is using incorrectly

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-novaclient:
  In Progress

Bug description:
  "# flake8: noqa" option disables all checks for the whole file. To
  disable one line we should use "# noqa".

  Refer to: https://pypi.python.org/pypi/flake8
  
https://github.com/openstack/python-keystoneclient/commit/3b766c51438396a0ab0032de309c9d56e275e0cb
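
  For illustration, the difference in scope:

      import os  # noqa  (suppresses checks, e.g. unused import, on this line only)

      # flake8: noqa
      # The comment above, anywhere in a module, disables ALL flake8
      # checks for the entire file, which can hide real errors.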

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1017606] Re: Mixing references to 'Tenants' and 'Projects' is confusing

2016-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/273757
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=18e926410214149a3655543aae97e193c4a4b528
Submitter: Jenkins
Branch:master

commit 18e926410214149a3655543aae97e193c4a4b528
Author: Steve Martinelli 
Date:   Thu Jan 28 16:10:04 2016 -0500

replace tenant with project in cli.py

Change-Id: Ic14efc179f61f310191ff3e5639f06fc1f2fe4aa
Closes-Bug: 1017606


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1017606

Title:
  Mixing references to 'Tenants' and 'Projects' is confusing

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In regards to this :  https://bugs.launchpad.net/horizon/+bug/909495

  Contrary to what must have been thought at the time, users do care
  what their tenant is.  As it is a necessary component of env data and
  any auth request to the API, they tend to look first and foremost in
  Horizon for information about their tenant.  However, because Horizon
  habitually refers to the tenant as a 'project', they are left wondering
  where the heck their tenant name is.

  As the concept of projects died with Essex, can we please update the
  Horizon interface to reflect the realities of our users' needs as we
  have forced them upon them, and maybe provide the information they
  need to use the stack without confusion?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1017606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540682] [NEW] InstanceNotFound during instance usage audit

2016-02-01 Thread Corey Wright
Public bug reported:

tl;dr set read_deleted to "yes" on the admin context used by the
periodic _instance_usage_audit() so info can be retrieved on deleted
instances.

Explanation:

nova.compute.manager.Manager._instance_usage_audit() has a bug:
although get_active_by_window_joined() properly returns a list of
instances including deleted ones (it is implicit that read_deleted must
be set to "yes" for that function to operate correctly),
notify_usage_exists() indirectly uses the more generic get_flavor()
(which cannot assume read_deleted should be "yes" and therefore
defaults to "no") to report on each instance, without setting
read_deleted to "yes". This was masked until commit 1337890a was
merged, which removed compatibility code that retrieved the flavor when
InstanceNotFound was encountered and the instance was deleted. A sketch
of the fix follows.
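
The tl;dr fix as a minimal sketch, assuming nova's admin-context helper
keeps its read_deleted keyword:

    from nova import context

    # Build the audit context so deleted instances stay readable for the
    # whole notification path, not only get_active_by_window_joined().
    ctxt = context.get_admin_context(read_deleted='yes')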

Nova version: git HEAD since commit 1337890a

Log excerpt:

2016-01-14 00:01:51.112 18707 DEBUG nova.objects.instance [req-66e4a380-ed24-4ca1-a971-ee1d80878e2e - - - - -] Lazy-loading `flavor' on Instance uuid 85388355-6f83-4bca-9a11-a0914a2e3a3a obj_load_attr /opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/objects/instance.py:843
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [req-66e4a380-ed24-4ca1-a971-ee1d80878e2e - - - - -] [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a] Failed to generate usage audit for instance on host c-10-13-135-245
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a] Traceback (most recent call last):
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/compute/manager.py", line 6036, in _instance_usage_audit
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     include_bandwidth=True)
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/compute/utils.py", line 297, in notify_usage_exists
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     system_metadata=system_metadata, extra_usage_info=extra_info)
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/compute/utils.py", line 317, in notify_about_instance_usage
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     network_info, system_metadata, **extra_usage_info)
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/notifications.py", line 386, in info_from_instance
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     instance_type = instance.get_flavor()
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/objects/instance.py", line 871, in get_flavor
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     return getattr(self, attr)
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 67, in getter
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     self.obj_load_attr(name)
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/objects/instance.py", line 861, in obj_load_attr
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     self._load_flavor()
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/nova/objects/instance.py", line 754, in _load_flavor
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]     expected_attrs=['flavor', 'system_metadata'])
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a11-a0914a2e3a3a]   File "/opt/rackstack/rackstack.497.5/nova/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 179, in wrapper
2016-01-14 00:01:51.181 18707 ERROR nova.compute.manager [instance: 85388355-6f83-4bca-9a1

[Yahoo-eng-team] [Bug 1524576] Re: Enhance ML2 Type Driver interface to include network ID

2016-02-01 Thread Mitchell Jameson
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524576

Title:
  Enhance ML2 Type Driver interface to include network ID

Status in neutron:
  Invalid

Bug description:
  An enhancement is needed to the ML2 Type Driver interface to include
  network ID. This allows the vendors the flexibility on their back-end
  systems to group or associate resources based upon the network ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540633] Re: Release networking-infoblox 2.0.0

2016-02-01 Thread John Belamaric
** Changed in: networking-infoblox
   Status: New => Fix Released

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540633

Title:
  Release networking-infoblox 2.0.0

Status in networking-infoblox:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  networking-infoblox is ready for release of the 2.0.0 version of the
  driver

  tag 2.0.0

  master @ 22dd683e1aa9b9a43de9af87517694c9783807e5

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-infoblox/+bug/1540633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540555] Re: Nobody listens to network delete notifications

2016-02-01 Thread Eugene Nikanorov
That's a correct observation, Nate.
The Linux bridge agent uses this notification to delete the network's bridge,
unlike the OVS agent, which only needs port delete notifications.
I think it's not a big deal if notifications are simply lost when nobody
consumes them.

Closing as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540555

Title:
  Nobody listens to network delete notifications

Status in neutron:
  Invalid

Bug description:
  Here it can be seen that agents are notified of network delete event:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/rpc.py#L304

  But on the agent side only network update events are being listened:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L374

  That was uncovered during testing of Pika driver, because it does not
  allow to send messages to queues which do not exist, unlike current
  Rabbit (Kombu) driver. That behaviour will probably be changed in Pika
  driver, but still it is worthy to get rid of unnecessary notifications
  on Neutron side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540633] [NEW] Release networking-infoblox 2.0.0

2016-02-01 Thread John Belamaric
Public bug reported:

networking-infoblox is ready for release of the 2.0.0 version of the
driver

tag 2.0.0

master @ 61919945158f980eb11d82a6c26893ff04d4e869

** Affects: networking-infoblox
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540633

Title:
  Release networking-infoblox 2.0.0

Status in networking-infoblox:
  New
Status in neutron:
  New

Bug description:
  networking-infoblox is ready for release of the 2.0.0 version of the
  driver

  tag 2.0.0

  master @ 61919945158f980eb11d82a6c26893ff04d4e869

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-infoblox/+bug/1540633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540565] [NEW] lbaas v2 dashboard uses unclear "Admin State Up" terminology

2016-02-01 Thread Doug Fish
Public bug reported:

The lbaas v2 Horizon plugin at https://github.com/openstack/neutron-
lbaas-dashboard/ uses the phrase "Admin State Up" Yes/No. It seems
clearer to change this terminology to "Admin State: Up (or Down)" as
suggested in this code review:
https://review.openstack.org/#/c/259142/6/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540565

Title:
  lbaas v2 dashboard uses unclear "Admin State Up" terminology

Status in neutron:
  New

Bug description:
  The lbaas v2 Horizon plugin at https://github.com/openstack/neutron-
  lbaas-dashboard/ uses the phrase "Admin State Up" Yes/No. It seems
  clearer to change this terminology to "Admin State: Up (or Down)" as
  suggested in this code review:
  https://review.openstack.org/#/c/259142/6/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419825] Re: sample_data script fails when run against ssl enabled keystone

2016-02-01 Thread Morgan Fainberg
With Eventlet and PKI tokens disappearing this bug is being marked as
"wont fix". There is nothing left to do.

Thanks for looking at it and working on it, sorry for letting it linger
as long as it has.

** Changed in: keystone
   Status: In Progress => Won't Fix

** Changed in: keystone
 Assignee: Sean Perry (sean-perry-a) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1419825

Title:
  sample_data script fails when run against ssl enabled keystone

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  When running the sample data script against SSL-enabled keystone,
  warnings concerning the subjectAltName are returned by the service
  (something like https://github.com/shazow/urllib3/issues/497); a test
  call outside of the script succeeds but returns the following:

  ...urllib3/connection.py:251: SecurityWarning: Certificate has no 
`subjectAltName`, falling back to check for a `commonName` for now. This 
feature is being removed by major browsers and deprecated by RFC 2818. (See 
https://github.com/shazow/urllib3/issues/497 for details.)
    SecurityWarning
  +-+--+
  |   Property  |  Value   |
  +-+--+
  | description |Dummy tenant test |
  |   enabled   |   True   |
  |  id | f1c91620ddd44b1aa19514ff199b44e3 |
  | name|extra_demo|
  |  parent_id  |  |
  +-+--+

  This makes it impossible for the id to be ascertained on the first
  request, and subsequent calls hang or fail.

  This is using Python 2.7.6 in a virtual env on Xubuntu Trusty.  The
  virtual env initialised with pip requirements.txt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1419825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540555] [NEW] Nobody listens to network delete notifications

2016-02-01 Thread Dmitry Mescheryakov
Public bug reported:

Here it can be seen that agents are notified of network delete event:
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/rpc.py#L304

But on the agent side only network update events are being listened:
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L374

That was uncovered during testing of the Pika driver, because it does not
allow sending messages to queues which do not exist, unlike the current
Rabbit (Kombu) driver. That behaviour will probably be changed in the Pika
driver, but it is still worthwhile to get rid of unnecessary notifications
on the Neutron side. A sketch of the missing consumer side follows.
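
For comparison, a sketch of the consumer side modeled on the Linux bridge
agent, which does handle this event; the OVS agent registers no such
endpoint today:

    from neutron.common import topics

    consumers = [[topics.PORT, topics.UPDATE],
                 [topics.NETWORK, topics.UPDATE],
                 [topics.NETWORK, topics.DELETE]]  # absent in the OVS agent

    def network_delete(self, context, **kwargs):
        # Invoked via the fanout cast in rpc.py; tear down any
        # per-network resources (e.g. the network's bridge) here.
        network_id = kwargs.get('network_id')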

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540555

Title:
  Nobody listens to network delete notifications

Status in neutron:
  New

Bug description:
  Here it can be seen that agents are notified of network delete event:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/rpc.py#L304

  But on the agent side only network update events are being listened:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L374

  That was uncovered during testing of the Pika driver, because it does
  not allow sending messages to queues which do not exist, unlike the
  current Rabbit (Kombu) driver. That behaviour will probably be changed
  in the Pika driver, but it is still worthwhile to get rid of
  unnecessary notifications on the Neutron side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538746] Re: tempest v2 scenario tests not passing config to serviceclient and support netcat using cirros 0.3.3 image

2016-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/273262
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=e99b46953ade6d32ce660ae42f6b8d1e13f51624
Submitter: Jenkins
Branch:master

commit e99b46953ade6d32ce660ae42f6b8d1e13f51624
Author: madhusudhan-kandadai 
Date:   Wed Jan 27 14:16:56 2016 -0800

Set config values and fix casual failing of nc service in scenario tests

1. Set the config values to make it work against a non-devstack environment.
2. Tweak the webserver script: use nc's "-l" option for listening, to avoid
   clients hanging while waiting for the connection to close after the
   content has been served when using the cirros 0.3.3 image. In cirros
   0.3.3, nc doesn't support the "-ll" option for persistent listening.

Change-Id: Iecf70014f9444b226a0b08971e1c2ac26f653257
Closes-Bug: #1538746


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1538746

Title:
  tempest v2 scenario tests not passing config to serviceclient and
  support netcat using cirros 0.3.3 image

Status in neutron:
  Fix Released

Bug description:
  1. The neutron-lbaas tempest v2 scenario tests don't pick up config
  values from the standard tempest.conf. Instead, the config values are
  hardcoded in scenario/base.py. For example,

self.client_args = [auth_provider, 'network',
  'regionOne']

  The above values don't work in a non-devstack environment. Setting and
  picking up the config values from tempest.conf works in both devstack
  and non-devstack environments (see the sketch after this list).

  2. netcat doesn't support the persistent listening '-ll' option on the
  cirros 0.3.3 image, hence change it to the '-l' listening option so the
  cloud can use either cirros image, 3.3 or 3.4. Also, change the
  behavior of the netcat script to make it work against 3.3.
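
  A minimal sketch of the config-driven variant, assuming tempest's
  standard CONF object with its stock 'network.catalog_type' and
  'identity.region' options:

      from tempest import config

      CONF = config.CONF

      def client_args(auth_provider):
          # read the service type and region from tempest.conf instead
          # of hardcoding 'network' and 'regionOne'
          return [auth_provider, CONF.network.catalog_type,
                  CONF.identity.region]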

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1538746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540525] [NEW] DB diagram is not displayed in the documentation

2016-02-01 Thread Victor Morales
Public bug reported:

Having the current database diagram in the documentation is something
necessary for developers, operators and testers.  Given the nature of
the project and its constant changes, it's difficult to track changes
and keep them in the repository; therefore the process to create this
schema must be automated.

Neutron is not the only project that lacks this documentation, but
it's a good candidate to start this initiative.

** Affects: neutron
 Importance: Undecided
 Assignee: Victor Morales (electrocucaracha)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Victor Morales (electrocucaracha)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540525

Title:
  DB diagram is not displayed in the documentation

Status in neutron:
  New

Bug description:
  Having the current database diagram in the documentation is something
  necessary for developers, operators and testers.  Given the nature of
  the project and its constant changes, it's difficult to track changes
  and keep them in the repository; therefore the process to create this
  schema must be automated.

  Neutron is not the only project that lacks this documentation, but
  it's a good candidate to start this initiative.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540526] [NEW] Too many lazy-loads in predictable situations

2016-02-01 Thread Dan Smith
Public bug reported:

During a normal tempest run, way (way) too many object lazy-loads are
being triggered, which causes extra RPC and database traffic. In a given
tempest run, we should be able to pretty much prevent any lazy-loads in
that predictable situation. The only case where we might want to have
some is where we are iterating objects and conditionally taking action
that needs to load extra information.

On a random devstack-tempest job run sampled on 1-Feb-2016, a lot of
lazy loads were seen:

  grep 'Lazy-loading' screen-n-cpu.txt.gz  -c
  624

We should be able to vastly reduce this number without much work.
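
A minimal sketch of the usual countermeasure, assuming nova's versioned
object API: name the fields a code path will touch up front, so they are
joined into the initial fetch instead of each costing a lazy-load round
trip on access:

    from nova import objects

    def load_instance(ctxt, uuid):
        # every attr listed here comes back with the first query rather
        # than triggering its own RPC/DB call later
        return objects.Instance.get_by_uuid(
            ctxt, uuid,
            expected_attrs=['flavor', 'metadata', 'system_metadata'])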

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540526

Title:
  Too many lazy-loads in predictable situations

Status in OpenStack Compute (nova):
  New

Bug description:
  During a normal tempest run, way (way) too many object lazy-loads are
  being triggered, which causes extra RPC and database traffic. In a
  given tempest run, we should be able to pretty much prevent any lazy-
  loads in that predictable situation. The only case where we might want
  to have some is where we are iterating objects and conditionally
  taking action that needs to load extra information.

  On a random devstack-tempest job run sampled on 1-Feb-2016, a lot of
  lazy loads were seen:

grep 'Lazy-loading' screen-n-cpu.txt.gz  -c
624

  We should be able to vastly reduce this number without much work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1540526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540512] [NEW] Host dependent IPAM

2016-02-01 Thread PSargent
Public bug reported:

Many organizations use complex IP address conventions, which are managed
via spreadsheets or documents in order to track IP addresses and prevent
duplicate IP addresses. IP address conflicts are among the most common
issues that can happen on an enterprise network. Currently, Neutron
provides a pluggable IP Address Management (IPAM) subsystem which allows
flexible control over the lifecycle of network resources.

In Calico, where data is routed between compute hosts, it's desirable for
the IP addresses that are used on a given host (or rack) to be clustered
within a small IP prefix. In this case, the routes to those IP addresses
can be aggregated.  This RFE proposes pluggable IPAM support for host-
dependent IP address allocation.  This would allow the user to specify
that Neutron allocates the IP address of the VM based on the host.
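
A purely hypothetical illustration of the clustering idea (no Neutron API
involved): carve a fixed-size slice of the subnet per host, so all routes
to a host's VMs can be advertised as one aggregate prefix:

    import ipaddress

    def pool_for_host(subnet_cidr, host_index, new_prefix=26):
        # host N gets the Nth /26 slice of the subnet
        subnet = ipaddress.ip_network(subnet_cidr)
        return list(subnet.subnets(new_prefix=new_prefix))[host_index]

    print(pool_for_host('10.1.0.0/24', 2))  # -> 10.1.0.128/26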

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540512

Title:
  Host dependent IPAM

Status in neutron:
  New

Bug description:
  Many organizations use complex IP address conventions, which are
  managed via spreadsheets or documents in order to track IP addresses
  and prevent duplicate IP addresses. IP address conflicts are among the
  most common issues that can happen on an enterprise network.
  Currently, Neutron provides a pluggable IP Address Management (IPAM)
  subsystem which allows flexible control over the lifecycle of network
  resources.

  In Calico, where data is routed between compute hosts, it's desirable
  for the IP addresses that are used on a given host (or rack) to be
  clustered within a small IP prefix. In this case, the routes to those
  IP addresses can be aggregated.  This RFE proposes pluggable IPAM
  support for host-dependent IP address allocation.  This would allow
  the user to specify that Neutron allocates the IP address of the VM
  based on the host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540495] [NEW] Version 0.2.8 of xvfbwrapper breaks integration tests

2016-02-01 Thread Timur Sufiev
Public bug reported:

Version 0.2.8 (released on Jan 31) breaks all integration tests with this
stacktrace:

2016-02-01 16:58:10.961 | 2016-02-01 16:58:10.904 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/helpers.py", 
line 160, in setUp
2016-02-01 16:58:10.961 | 2016-02-01 16:58:10.905 | super(TestCase, 
self).setUp()
2016-02-01 16:58:10.961 | 2016-02-01 16:58:10.907 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/helpers.py", 
line 60, in setUp
2016-02-01 16:58:10.962 | 2016-02-01 16:58:10.909 | 
self.vdisplay.xvfb_cmd.append("-noreset")
2016-02-01 16:58:10.962 | 2016-02-01 16:58:10.910 | AttributeError: Xvfb 
instance has no attribute 'xvfb_cmd'

The solution is to adapt to the new version by passing additional arguments
when initializing the Xvfb instance (instead of appending them to the
xvfb_cmd list, which no longer exists before .start() is invoked).
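
A minimal sketch of a version-tolerant workaround, assuming xvfbwrapper
0.2.8 exposes the extra_xvfb_args list that start() later folds into the
command line (older releases exposed xvfb_cmd directly):

    from xvfbwrapper import Xvfb

    vdisplay = Xvfb(width=1920, height=1080)
    if hasattr(vdisplay, 'xvfb_cmd'):          # pre-0.2.8 releases
        vdisplay.xvfb_cmd.append('-noreset')
    else:                                      # 0.2.8 and newer
        vdisplay.extra_xvfb_args.append('-noreset')
    vdisplay.start()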

** Affects: horizon
 Importance: Critical
 Assignee: Timur Sufiev (tsufiev-x)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Timur Sufiev (tsufiev-x)

** Changed in: horizon
   Importance: Undecided => Critical

** Changed in: horizon
Milestone: None => mitaka-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540495

Title:
  Version 0.2.8 of xvfbwrapper breaks integration tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Version 0.2.8 (released on Jan 31) breaks all integration tests with
  this stacktrace:

  2016-02-01 16:58:10.961 | 2016-02-01 16:58:10.904 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/helpers.py", 
line 160, in setUp
  2016-02-01 16:58:10.961 | 2016-02-01 16:58:10.905 | super(TestCase, 
self).setUp()
  2016-02-01 16:58:10.961 | 2016-02-01 16:58:10.907 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/helpers.py", 
line 60, in setUp
  2016-02-01 16:58:10.962 | 2016-02-01 16:58:10.909 | 
self.vdisplay.xvfb_cmd.append("-noreset")
  2016-02-01 16:58:10.962 | 2016-02-01 16:58:10.910 | AttributeError: Xvfb 
instance has no attribute 'xvfb_cmd'

  The solution is to adapt to the new version by passing additional
  arguments when initializing the Xvfb instance (instead of appending
  them to the xvfb_cmd list, which no longer exists before .start() is
  invoked).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540484] [NEW] Network topology should fit the screen

2016-02-01 Thread Ido Ovadia
Public bug reported:

Description of problem:
===
The network topology should fit the screen.
The user can move the network topology beyond the screen border.

Version-Release number of selected component:
=
python-django-horizon-8.0.0-10.el7ost.noarch

How reproducible:
=
100%

Steps to Reproduce:
===
1. Install RHOS (test on Director and packstack)
2. Create a Network
3. Create an instance and connect it to the network you have created. 
4. Browse to network topology
5. Move the network to screen borders

Actual results:
===============
The network topology can be moved beyond the screen border, and the
topology can disappear.

Expected results:
=================
The network topology should not be movable beyond the screen border.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540484

Title:
   Network topology should fit the screen

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  ===
  The network topology should fit the screen.
  The user can move the network topology beyond the screen border.

  Version-Release number of selected component:
  =
  python-django-horizon-8.0.0-10.el7ost.noarch

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. Install RHOS (test on Director and packstack)
  2. Create a Network
  3. Create an instance and connect it to the network you have created. 
  4. Browse to network topology
  5. Move the network to screen borders

  Actual results:
  ===============
  The network topology can be moved beyond the screen border, and the
  topology can disappear.

  Expected results:
  =================
  The network topology should not be movable beyond the screen border.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401057] Re: Direct mapping in mapping rules don't work with keywords

2016-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/175980
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c8682888bc5a9661771bf1017392156562b7084d
Submitter: Jenkins
Branch:master

commit c8682888bc5a9661771bf1017392156562b7084d
Author: Marek Denis 
Date:   Tue Apr 21 18:32:34 2015 +0200

Raise more precise exception on keyword mapping errors

Currently mapping rules cannot be built in a way where a remote
keyword such as 'any_one_of' or 'not_any_of' is used for direct
mapping ('{0}', '{1}' etc in local rules). However, there is also
no good way of informing user of faulty mapping rules. The original
code will raise an unhelpful IndexError.

This patch:

- introduces new exception class (resulting in HTTP 500 error code)
- changes the logic of mapping rules evaluation, that new exception
  class is being raised under appropriate circumstances.

Change-Id: I0f9e7a6949ff8bfbd147e2d3cac59fd5197934f1
Closes-Bug: #1401057


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1401057

Title:
  Direct mapping in mapping rules don't work with keywords

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Federation mapping engine doesn't work correctly when a rule to be directly 
mapped has special keywords (any_one_of or not_any_of). 
  For instance: 

  rules = [  
  {
  "local": [
  {
  "user": {
  "name": "{0}"
  }
  },
  {
  "group": {
  "id": "abc"
  }
  }
  ],
  "remote": [
  {
  "type": "openstack_user",
  "any_one_of": [
  "user1",
  "admin"
  ]
  }
  ]
  }
  ]

  user['name'] will not map the "openstack_user" value, as the keyword
  "any_one_of" is present in that remote rule. No validation error will
  be raised and the direct value "{0}" will be used.

  The mapping engine should validate such cases and preferably allow
  direct mapping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1401057/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528676] Re: OpenLDAP password policy not enforced for password changes

2016-02-01 Thread Tristan Cacqueray
Agreed on class D, I closed the OSSA task, this could be re-opened
whenever the situation changes.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1528676

Title:
  OpenLDAP password policy not enforced for password changes

Status in OpenStack Identity (keystone):
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Hello there,
  I'm on Ubuntu 14.04.3, Openstack Juno and OpenLDAP v2.4.31 releases.
  I configured OpenLDAP as an identity backend for Keystone and configured it 
according to official documentation from:
  http://docs.openstack.org/developer/keystone/configuration.html
  I'd like my users to be able to change their own passwords, but at the same 
time have the OpenLDAP password policy enforced upon password changes. I've set 
all the allow_create, allow_update and allow_delete options to true so that 
keystone does not restrict these operations in any way.
  The problem is the following: the RootDN account is used for binding when a 
user changes his/her password. The OpenLDAP password policy is not enforced 
when the RootDN performs the password change. As a result, no password policy 
is enforced during password changes.
  If I don't set the LDAP user/password in keystone.conf, then users cannot 
change their own passwords at all.
  Please recommend how I can allow the users to change their own passwords and 
at the same time enforce the OpenLDAP password policy.
  Thank you,
  Nodir

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1528676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537754] Re: Unable to associate floating IP 172.24.5.4 to fixed IP 10.1.0.5 for instance ...

2016-02-01 Thread Dmitry Tantsur
** Changed in: ironic
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537754

Title:
  Unable to associate floating IP 172.24.5.4 to fixed IP 10.1.0.5 for
  instance ...

Status in Ironic:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  Gate seems to be failing because of:

  2016-01-25 12:27:22.139 ERROR nova.api.openstack.compute.floating_ips 
[req-ebc3999e-b054-400f-a85e-bc3446a49f44 tempest-BaremetalBasicOps-1761846040 
tempest-BaremetalBasicOps-1606508311] Unable to associate floating IP 
172.24.5.4 to fixed IP 10.1.0.5 for instance 
213b10ac-06eb-4ea0-b638-263f2b75a329. Error: Router 
a771a17f-0224-4293-b757-59fcc6d21410 could not be found
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips 
Traceback (most recent call last):
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 
238, in _add_floating_ip
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 fixed_address=fixed_address)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/opt/stack/new/nova/nova/network/base_api.py", line 77, in wrapper
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 res = f(self, context, *args, **kwargs)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 1203, in 
associate_floating_ip
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 client.update_floatingip(fip['id'], {'floatingip': param})
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 100, in with_params
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 ret = self.function(instance, *args, **kwargs)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 714, in update_floatingip
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 return self.put(self.floatingip_path % (floatingip), body=body)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 275, in put
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 headers=headers, params=params)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 243, in retry_request
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 headers=headers, params=params)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 206, in do_request
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 self._handle_fault_response(status_code, replybody)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 182, in _handle_fault_response
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 exception_handler_v20(status_code, des_error_body)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips   
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 69, in exception_handler_v20
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips
 status_code=status_code)
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips 
NotFound: Router a771a17f-0224-4293-b757-59fcc6d21410 could not be found
  2016-01-25 12:27:22.139 2247 ERROR nova.api.openstack.compute.floating_ips 

  http://logs.openstack.org/10/255310/10/check/gate-tempest-dsvm-ironic-
  pxe_ssh/67c7e80/logs/screen-n-api.txt.gz#_2016-01-25_12_27_22_139

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1537754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539029] Re: rollback_live_migration_at_destination fails in libvirt - expects migrate_data object, gets dictionary

2016-02-01 Thread Drew Thorstensen
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1539029

Title:
  rollback_live_migration_at_destination fails in libvirt - expects
  migrate_data object, gets dictionary

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The rollback_live_migration_at_destination in the libvirt nova driver
  is currently expecting the migrate_data as an object.  The object
  model for the migrate data was introduced here:

  
https://github.com/openstack/nova/commit/69e01758076d0e89eddfe6945c8c7e423c862a49

  Subsequently, a change set was added to provide transitional support
  for the migrate_data object.  This currently forces all of the
  migrate_data objects that are sent to the manager to be converted to
  dictionaries:

  
https://github.com/openstack/nova/commit/038dfd672f5b2be5ebe30d85bd00d09bae2993fc

  
  It looks like the rollback_live_migration_at_destination method still expects 
the migrate_data in object form. However, the manager passes it down as a 
dictionary, which results in this error message upon a rollback:

File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
204, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 373, in 
decorated_function
  return function(self, context, *args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 5554, in 
rollback_live_migration_at_destination
  destroy_disks=destroy_disks, migrate_data=migrate_data)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6391, in 
rollback_live_migration_at_destination
  is_shared_instance_path = migrate_data.is_shared_instance_path
AttributeError: 'dict' object has no attribute 'is_shared_instance_path'
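
  A minimal sketch of the transitional pattern, assuming nova's
  objects.migrate_data module of that era, where detect_implementation()
  rebuilds the appropriate LiveMigrateData subclass from a legacy dict:

      from nova.objects import migrate_data as migrate_data_obj

      def ensure_object(migrate_data):
          # the manager hands over a legacy dict; convert it back to an
          # object before touching attrs like is_shared_instance_path
          if isinstance(migrate_data, dict):
              migrate_data = (migrate_data_obj.LiveMigrateData
                              .detect_implementation(migrate_data))
          return migrate_data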

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1539029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540411] [NEW] kilo: ValueError: git history requires a target version of pbr.version.SemanticVersion(2015.1.4), but target version is pbr.version.SemanticVersion(2015.1.3)

2016-02-01 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/57/266957/1/gate/gate-tempest-dsvm-neutron-
linuxbridge/ed15bbf/logs/devstacklog.txt.gz#_2016-01-31_05_55_23_989

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22error%20in%20setup%20command%3A%20Error%20parsing%5C%22%20AND%20message%3A%5C%22ValueError%3A%20git%20history%20requires%20a%20target%20version%20of%20pbr.version.SemanticVersion(2015.1.4)%2C%20but%20target%20version%20is%20pbr.version.SemanticVersion(2015.1.3)%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

This hits multiple projects; it's a known issue. This is just a bug for
tracking the failures in elastic-recheck.

** Affects: neutron
 Importance: Critical
 Assignee: Dave Walker (davewalker)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Dave Walker (davewalker)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540411

Title:
  kilo: ValueError: git history requires a target version of
  pbr.version.SemanticVersion(2015.1.4), but target version is
  pbr.version.SemanticVersion(2015.1.3)

Status in neutron:
  In Progress

Bug description:
  http://logs.openstack.org/57/266957/1/gate/gate-tempest-dsvm-neutron-
  linuxbridge/ed15bbf/logs/devstacklog.txt.gz#_2016-01-31_05_55_23_989

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22error%20in%20setup%20command%3A%20Error%20parsing%5C%22%20AND%20message%3A%5C%22ValueError%3A%20git%20history%20requires%20a%20target%20version%20of%20pbr.version.SemanticVersion(2015.1.4)%2C%20but%20target%20version%20is%20pbr.version.SemanticVersion(2015.1.3)%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

  This hits multiple projects; it's a known issue. This is just a bug
  for tracking the failures in elastic-recheck.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458934] Re: Ironic: heavy polling

2016-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/221848
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=26c69a6a62a138ccdb88ecc5bf75330a04e5de0b
Submitter: Jenkins
Branch:master

commit 26c69a6a62a138ccdb88ecc5bf75330a04e5de0b
Author: Lucas Alvares Gomes 
Date:   Wed Jan 13 16:15:12 2016 +

Ironic: Lightweight fetching of nodes

The Ironic API version >= 1.8 supports fetching only a small number of
fields of a resource instead of its full representation. This patch pins
the API version for Ironic at 1.8 so the nova driver can benefit from
this feature by fetching only the required fields for the deployment,
making it more lightweight and improving the readability of the logs.

The configuration option "api_version" was marked as deprecated and
should be removed in the future. The only possible value for that
configuration was "1" (because Ironic only has one API version) and the
ironic team came to an agreement that setting the API version via a
configuration option should not be supported anymore.

Closes-Bug: #1458934
Change-Id: Iebfa06da6811889e11fd4edf15d47ca4e0feea6f


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458934

Title:
  Ironic: heavy polling

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Ironic does poll the Ironic API to check the state of a Node when
  provisioning it, and currently we poll for the whole node object
  representation every time. This can be very expensive, since the node
  might contain a lot of details and even a base64 string of the
  configdrive registered in it. Plus, this full node representation also
  gets logged when we run n-cpu with debug mode enabled.

  We need a lightweight way to fetch the states of Nodes from Ironic
  that can be used by the Nova Ironic driver.
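
  A minimal sketch of the lightweight fetch, assuming python-ironicclient
  against Ironic API version 1.8, where node.get() accepts a 'fields'
  argument (the token, endpoint and node uuid below are placeholders):

      from ironicclient import client as ir_client

      ironic = ir_client.get_client(1,
                                    os_auth_token='ADMIN_TOKEN',
                                    ironic_url='http://ironic-host:6385',
                                    os_ironic_api_version='1.8')
      # only the listed attributes come back -- no driver_info or
      # configdrive blobs on the wire or in the debug logs
      node = ironic.node.get('NODE_UUID',
                             fields=['uuid', 'power_state',
                                     'provision_state',
                                     'target_provision_state',
                                     'last_error'])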

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1458934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470931] Re: Disable router advertisement for qbr devices

2016-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/198054
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5ab1b1b1c456b8b43edbd1bddd74b96b56ab80e6
Submitter: Jenkins
Branch:master

commit 5ab1b1b1c456b8b43edbd1bddd74b96b56ab80e6
Author: Adam Kacmarsky 
Date:   Thu Jul 2 10:13:16 2015 -0600

Disable IPv6 on bridge devices

The qbr bridge should not have any IPv6 addresses, either
link-local, or on the tenant's private network due to the
bridge processing Router Advertisements from Neutron and
auto-configuring addresses, since it will allow access to
the hypervisor from a tenant VM.

The bridge only exists to allow the Neutron security group
code to work with OVS, so we can safely disable IPv6 on it.

Closes-bug: 1470931
Partial-bug: 1302080

Change-Id: Ideecab1c21b240bcca71973ed74b0374afb20e5e


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470931

Title:
  Disable router advertisement for qbr devices

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In a multi-node Neutron DVR configuration, nova automatically
  configures the qbr bridge with an IPv6 address. There is no need for
  this to occur, as the bridge is only used by the Neutron security
  group code.

  qbrdac2483c-fc Link encap:Ethernet HWaddr d2:c8:37:c0:86:74 
  inet6 addr: fe80::c0f2:59ff:fe9e:e309/64 Scope:Link
  inet6 addr: fd42:197f:cf11:0:d0c8:37ff:fec0:8674/64 Scope:Global
  inet6 addr: fd42:197f:cf11:0:3504:7d5e:5274:41fb/64 Scope:Global
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  
  RX packets:190 errors:0 dropped:0 overruns:0 frame:0
  TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:28856 (28.8 KB) TX bytes:9147 (9.1 KB)
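
  A minimal sketch of the mechanism, assuming the standard per-device
  sysctl knob (the bridge name is taken from the report above; nova's
  actual change lives in its Linux networking driver):

      def disable_ipv6(bridge):
          # 1 = the kernel stops all IPv6 processing on this device, so
          # the bridge never autoconfigures addresses from Neutron's RAs
          path = '/proc/sys/net/ipv6/conf/%s/disable_ipv6' % bridge
          with open(path, 'w') as f:
              f.write('1')

      disable_ipv6('qbrdac2483c-fc')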

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540386] [NEW] Unexpected API Error

2016-02-01 Thread Ralet
Public bug reported:

I tried to launch, via the web interface, an Ubuntu image generated with qemu.
The error message I received is the following:

Error: Unexpected API Error. Please report this at
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
(HTTP 500) (Request-ID: req-ce828cb7-6cca-45c1-9c3b-6703110e03a6)

I do not know where to locate the nova log. Sorry.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540386

Title:
  Unexpected  API Error

Status in OpenStack Compute (nova):
  New

Bug description:
  I tried to launch, via the web interface, an Ubuntu image generated with qemu.
  The error message I received is the following:

  Error: Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible. (HTTP 500) (Request-ID: req-ce828cb7-6cca-45c1-9c3b-6703110e03a6)

  I do not know where to locate the nova log. Sorry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1540386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540342] Re: Duplicated code

2016-02-01 Thread Erno Kuvaja
The v1 code is in deprecation plans and should not be refactored.

** Changed in: glance
   Importance: Undecided => Wishlist

** Changed in: glance
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1540342

Title:
  Duplicated code

Status in Glance:
  Won't Fix

Bug description:
  There is duplicated code in glance/common/artifacts/definitions.py
  and glance/registry/api/v1/members.py:
  http://openqa.sed.hu/dashboard/index/137222?did=1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1540342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504725] Re: rabbitmq-server restart twice, log is crazy increasing until service restart

2016-02-01 Thread Mike Merinov
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504725

Title:
  rabbitmq-server restart twice, log is crazy increasing until service
  restart

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in oslo.messaging:
  Confirmed

Bug description:
  After I restart the rabbitmq-server for the second time, the service logs 
(such as nova, neutron and so on) grow like crazy, with entries such as 
"TypeError: 'NoneType' object has no attribute '__getitem__'".
  It seems that the channel is set to None.

  trace log:

  2015-10-10 15:20:59.413 29515 TRACE root Traceback (most recent call last):
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 95, in 
inner_func
  2015-10-10 15:20:59.413 29515 TRACE root return infunc(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_executors/impl_eventlet.py", 
line 96, in _executor_thread
  2015-10-10 15:20:59.413 29515 TRACE root incoming = self.listener.poll()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
122, in poll
  2015-10-10 15:20:59.413 29515 TRACE root self.conn.consume(limit=1, 
timeout=timeout)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1202, in consume
  2015-10-10 15:20:59.413 29515 TRACE root six.next(it)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1100, in iterconsume
  2015-10-10 15:20:59.413 29515 TRACE root error_callback=_error_callback)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
868, in ensure
  2015-10-10 15:20:59.413 29515 TRACE root ret, channel = autoretry_method()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 458, in _ensured
  2015-10-10 15:20:59.413 29515 TRACE root return fun(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 545, in __call__
  2015-10-10 15:20:59.413 29515 TRACE root self.revive(create_channel())
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 251, in channel
  2015-10-10 15:20:59.413 29515 TRACE root chan = 
self.transport.create_channel(self.connection)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 91, in 
create_channel
  2015-10-10 15:20:59.413 29515 TRACE root return connection.channel()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 289, in channel
  2015-10-10 15:20:59.413 29515 TRACE root return self.channels[channel_id]
  2015-10-10 15:20:59.413 29515 TRACE root TypeError: 'NoneType' object has no 
attribute '__getitem__'
  2015-10-10 15:20:59.413 29515 TRACE root

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540254] Re: "#flake8: noqa" is using incorrectly

2016-02-01 Thread Wang Bo
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540254

Title:
  "#flake8: noqa" is using incorrectly

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in python-heatclient:
  In Progress
Status in python-novaclient:
  New

Bug description:
  "# flake8: noqa" option disables all checks for the whole file. To
  disable one line we should use "# noqa".

  Refer to: https://pypi.python.org/pypi/flake8
  
https://github.com/openstack/python-keystoneclient/commit/3b766c51438396a0ab0032de309c9d56e275e0cb

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540342] [NEW] Duplicated code

2016-02-01 Thread Béla Vancsics
Public bug reported:

There is duplicated code in glance/common/artifacts/definitions.py
and glance/registry/api/v1/members.py:
http://openqa.sed.hu/dashboard/index/137222?did=1

** Affects: glance
 Importance: Undecided
 Assignee: Béla Vancsics (vancsics)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Béla Vancsics (vancsics)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1540342

Title:
  Duplicated code

Status in Glance:
  In Progress

Bug description:
  There is duplicated code in glance/common/artifacts/definitions.py
  and glance/registry/api/v1/members.py:
  http://openqa.sed.hu/dashboard/index/137222?did=1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1540342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540335] [NEW] ImportError: ml2 cannot be found

2016-02-01 Thread syed
Public bug reported:

No handlers could be found for logger "neutron.common.legacy"
Traceback (most recent call last):
  File "/usr/bin/neutron-db-manage", line 10, in 
sys.exit(main())
  File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 
169, in main
CONF.command.func(config, CONF.command.name)
  File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 82, 
in do_upgrade_downgrade
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 60, 
in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/alembic/command.py", line 124, in 
upgrade
script.run_env()
  File "/usr/lib/python2.6/site-packages/alembic/script.py", line 191, in 
run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.6/site-packages/alembic/util.py", line 186, in 
load_python_file
module = imp.load_source(module_id, path, open(path, 'rb'))
  File 
"/usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 42, in 
importutils.import_class(class_path)
  File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", 
line 33, in import_class
traceback.format_exception(*sys.exc_info())
ImportError: Class ml2 cannot be found (['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", 
line 28, in import_class\n__import__(mod_str)\n', 'ValueError: Empty module 
name\n'])

Even after trying all the troubleshooting links, I am getting the same
error. Please help me out!
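
A plausible cause, offered as an assumption: on this neutron release the
core_plugin option must be the full class path rather than the 'ml2'
shorthand (importing the literal string 'ml2' yields exactly the "Empty
module name" error above), so neutron.conf would need something like:

    [DEFAULT]
    core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin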

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540335

Title:
  ImportError: ml2 cannot be found

Status in neutron:
  New

Bug description:
  No handlers could be found for logger "neutron.common.legacy"
  Traceback (most recent call last):
File "/usr/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 
169, in main
  CONF.command.func(config, CONF.command.name)
File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 
82, in do_upgrade_downgrade
  do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 
60, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/alembic/command.py", line 124, in 
upgrade
  script.run_env()
File "/usr/lib/python2.6/site-packages/alembic/script.py", line 191, in 
run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.6/site-packages/alembic/util.py", line 186, in 
load_python_file
  module = imp.load_source(module_id, path, open(path, 'rb'))
File 
"/usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 42, in 
  importutils.import_class(class_path)
File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", 
line 33, in import_class
  traceback.format_exception(*sys.exc_info())
  ImportError: Class ml2 cannot be found (['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", 
line 28, in import_class\n__import__(mod_str)\n', 'ValueError: Empty module 
name\n'])
  

  Even after trying all the troubleshooting links, I am getting the same 
error. Please help me out!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522008] Re: Nova delete instance fails if cleaning is not async

2016-02-01 Thread Yuriy Zveryanskyy
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522008

Title:
  Nova delete instance fails if cleaning is not async

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  When node cleaning is not async (the conductor starts cleaning
  immediately in do_node_tear_down()), Nova's instance deletion fails.
  Nova tries to update the Ironic port (remove the vif id), but the
  conductor releases the lock only after the thread running the cleaning
  has completed. Because cleaning is usually not a short action, Nova
  fails after its retries. This bug can affect any out-of-tree driver;
  reproduced on the Ansible deploy driver PoC (
  https://review.openstack.org/#/c/238183/ ).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1522008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540271] [NEW] ha interface cleanup can leave orphans

2016-02-01 Thread Kevin Benton
Public bug reported:

The HA delete_router routine may leave behind stranded interfaces if it
fails to delete the HA ports because it deletes the router object before
removing the ports. This failure could be an issue in the loaded ML2
drivers, or it could simply be that the neutron server process was
killed before completing the deletion.

These orphaned interfaces are then difficult to delete via the API due
to their device_owner field.

https://github.com/openstack/neutron/blob/6bcacc114281b1017acae7e436a56b2da09f69f1/neutron/db/l3_hamode_db.py#L504-L512
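
A minimal sketch of the safer ordering, with hypothetical helper names; the
point is only that the HA ports go away while the router row still exists,
so an interruption cannot strand interfaces without an owning router:

    def delete_router(self, context, router_id):
        # hypothetical helpers: drop the HA ports (and, if unused, the
        # HA network) BEFORE the router row itself is deleted
        self._delete_ha_interfaces(context, router_id)
        self._cleanup_ha_network_if_unused(context, router_id)
        super(L3HAPlugin, self).delete_router(context, router_id)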

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540271

Title:
  ha interface cleanup can leave orphans

Status in neutron:
  New

Bug description:
  The HA delete_router routine may leave behind stranded interfaces if
  it fails to delete the HA ports because it deletes the router object
  before removing the ports. This failure could be an issue in the
  loaded ML2 drivers, or it could simply be that the neutron server
  process was killed before completing the deletion.

  These orphaned interfaces are then difficult to delete via the API due
  to their device_owner field.

  
https://github.com/openstack/neutron/blob/6bcacc114281b1017acae7e436a56b2da09f69f1/neutron/db/l3_hamode_db.py#L504-L512

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540267] [NEW] UpdateMetadata action doesn't work right after image creation

2016-02-01 Thread Yury Tregubov
Public bug reported:

In version 9.0.0 deployed with devstack the following behaviour was found.
After image creation by any user (Project -> Compute -> Images -> Create
Image), the 'Update Metadata' action doesn't work: nothing happens after the
click, the form doesn't appear.
However, after a page refresh the 'Update Metadata' action works fine.
And the rest of the actions (Edit Image, Delete, Create Volume, Launch
Instance) work fine right after the creation.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540267

Title:
  UpdateMetadata action doesn't work right after image creation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In version 9.0.0 deployed with devstack the following behaviour was
  found. After image creation by any user (Project -> Compute -> Images
  -> Create Image), the 'Update Metadata' action doesn't work: nothing
  happens after the click, the form doesn't appear.
  However, after a page refresh the 'Update Metadata' action works fine.
  And the rest of the actions (Edit Image, Delete, Create Volume, Launch
  Instance) work fine right after the creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516226] Re: Keystone V2 User API can access users outside of the default domain

2016-02-01 Thread Morgan Fainberg
This is the same basic issue as the behavior where you can auth with a
user outside the default domain if you know the id.

This is not something we can "fix" or "correct" without breaking past
behavior... deprecation and finally removal of V2 will be the solution
here.

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1516226

Title:
  Keystone V2 User API can access users outside of the default domain

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  The Keystone V2 API is not meant to be able to "see" any users, groups
  or projects outside of the default domain.  APIs that list these
  entities are careful to filter out any that are in non-default
  domains.  However, if you know your entity ID, we don't prevent you
  from doing a direct lookup - i.e. GET /users/<user_id> will work via
  the V2 API even if the user is outside of the default domain.  The
  same is true for projects.  Since the V2 API does not have the concept
  of groups, there is no issue in that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1516226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540259] [NEW] uselist should be True to DVRPortBinding orm.relationship

2016-02-01 Thread ZongKai LI
Public bug reported:

In a DVR scenario, after a router interface has been bound to multiple hosts, 
when we remove this interface from the router, an SQL warning is raised in the 
neutron server log:
  SAWarning: Multiple rows returned with uselist=False for eagerly-loaded 
attribute 'Port.dvr_port_binding'

It's caused by
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/models.py#L130,
where uselist is set to False. But the ml2_dvr_port_bindings table stores
all bindings for router_interface_distributed ports, and a port of that
kind can have multiple bindings. So it is not a one-to-one
relationship; we should remove "uselist=False" from the DVRPortBinding
orm.relationship.
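
A minimal, self-contained SQLAlchemy sketch of the distinction (the model
names are illustrative, not Neutron's): with one parent row mapping to many
child rows the relationship must stay a list; uselist=False makes
SQLAlchemy expect a scalar and emit exactly the warning above when several
rows come back:

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Port(Base):
        __tablename__ = 'ports'
        id = Column(String(36), primary_key=True)
        # list-valued (uselist defaults to True): one binding per host
        dvr_bindings = relationship('DVRBinding', backref='port')

    class DVRBinding(Base):
        __tablename__ = 'dvr_bindings'
        port_id = Column(String(36), ForeignKey('ports.id'),
                         primary_key=True)
        host = Column(String(255), primary_key=True)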

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540259

Title:
  uselist should be True to DVRPortBinding orm.relationship

Status in neutron:
  In Progress

Bug description:
  In a DVR scenario, after a router interface has been bound to multiple 
hosts, when we remove this interface from the router, an SQL warning is raised 
in the neutron server log:
    SAWarning: Multiple rows returned with uselist=False for eagerly-loaded 
attribute 'Port.dvr_port_binding'

  It's caused by
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/models.py#L130,
  where uselist is set to False. But the ml2_dvr_port_bindings table
  stores all bindings for router_interface_distributed ports, and a port
  of that kind can have multiple bindings. So it is not a one-to-one
  relationship; we should remove "uselist=False" from the DVRPortBinding
  orm.relationship.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540254] Re: "#flake8: noqa" is using incorrectly

2016-02-01 Thread Wang Bo
** Also affects: python-heatclient
   Importance: Undecided
   Status: New

** Changed in: python-heatclient
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540254

Title:
  "#flake8: noqa" is using incorrectly

Status in OpenStack Dashboard (Horizon):
  New
Status in python-heatclient:
  New

Bug description:
  "# flake8: noqa" option disables all checks for the whole file. To
  disable one line we should use "# noqa".

  Refer to: https://pypi.python.org/pypi/flake8
  
https://github.com/openstack/python-keystoneclient/commit/3b766c51438396a0ab0032de309c9d56e275e0cb

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540254] [NEW] "#flake8: noqa" is using incorrectly

2016-02-01 Thread Wang Bo
Public bug reported:

"# flake8: noqa" option disables all checks for the whole file. To
disable one line we should use "# noqa".

Refer to: https://pypi.python.org/pypi/flake8
https://github.com/openstack/python-keystoneclient/commit/3b766c51438396a0ab0032de309c9d56e275e0cb
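
A minimal illustration of the difference (the module content is
hypothetical):

    # Putting the file-level pragma anywhere in a module, i.e. a line
    # reading "# flake8: noqa", makes flake8 skip EVERY check in the
    # whole file -- usually far more than intended.

    import os, sys  # noqa  <- line-level: silences only this line

    print(os.name, sys.platform)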

** Affects: horizon
 Importance: Undecided
 Assignee: Wang Bo (chestack)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540254

Title:
  "#flake8: noqa" is using incorrectly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  "# flake8: noqa" option disables all checks for the whole file. To
  disable one line we should use "# noqa".

  Refer to: https://pypi.python.org/pypi/flake8
  
https://github.com/openstack/python-keystoneclient/commit/3b766c51438396a0ab0032de309c9d56e275e0cb

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp