[Yahoo-eng-team] [Bug 1278203] Re: live migration attempts block device and fs resize

2014-03-12 Thread Michael Still
I think this bug is invalid. The code path looks to me like we're
resizing the backing file for the disk that we fetched from glance. For
better or for worse, when we know that the filesystem inside that
backing file is an ext2 filesystem we also resize the file system after
resizing the disk. Presumably this machine didn't have libguestfs, so it
fell back to nbd.

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova
   Importance: High => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278203

Title:
  live migration attempts block device and fs resize

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I noticed this when some qemu-nbd processes were hung, and we had file
  injection off. I was like WAT.

  Here is a backtrace (I added an exception in the nbd code to find out what was calling it):

  Traceback (most recent call last):
    File "/lib/python2.7/site-packages/oslo/messaging/_executors/base.py", line 36, in _dispatch
      incoming.reply(self.callback(incoming.ctxt, incoming.message))
    File "/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in __call__
      return self._dispatch(endpoint, method, ctxt, args)
    File "/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 92, in _dispatch
      result = getattr(endpoint, method)(ctxt, **new_args)
    File "/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
      payload)
    File "/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
      return f(self, context, *args, **kw)
    File "/lib/python2.7/site-packages/nova/compute/manager.py", line 266, in decorated_function
      e, sys.exc_info())
    File "/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/lib/python2.7/site-packages/nova/compute/manager.py", line 253, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/lib/python2.7/site-packages/nova/compute/manager.py", line 4169, in pre_live_migration
      migrate_data)
    File "/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4349, in pre_live_migration
      disk_info)
    File "/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4446, in _create_images_and_backing
      size=info['virt_disk_size'])
    File "/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 180, in cache
      *args, **kwargs)
    File "/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 330, in create_image
      copy_qcow2_image(base, self.path, size)
    File "/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
      return f(*args, **kwargs)
    File "/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 296, in copy_qcow2_image
      disk.extend(target, size, use_cow=True)
    File "/lib/python2.7/site-packages/nova/virt/disk/api.py", line 155, in extend
      if not is_image_partitionless(image, use_cow):
    File "/lib/python2.7/site-packages/nova/virt/disk/api.py", line 205, in is_image_partitionless
      fs.setup()
    File "/lib/python2.7/site-packages/nova/virt/disk/vfs/localfs.py", line 82, in setup
      self.teardown()
    File "/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/lib/python2.7/site-packages/nova/virt/disk/vfs/localfs.py", line 76, in setup
      if not mount.do_mount():
    File "/lib/python2.7/site-packages/nova/virt/disk/mount/api.py", line 218, in do_mount
      status = self.get_dev() and self.map_dev() and self.mnt_dev()
    File "/lib/python2.7/site-packages/nova/virt/disk/mount/nbd.py", line 127, in get_dev
      return self._get_dev_retry_helper()
    File "/lib/python2.7/site-packages/nova/virt/disk/mount/api.py", line 118, in _get_dev_retry_helper
      device = self._inner_get_dev()
    File "/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
      return f(*args, **kwargs)
    File "/lib/python2.7/site-packages/nova/virt/disk/mount/nbd.py", line 86, in _inner_get_dev
      device = self._allocate_nbd()
    File "/lib/python2.7/site-packages/nova/virt/disk/mount/nbd.py", line 63, in _allocate_nbd
      raise Exception("FOAD")
  Exception: FOAD

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1291805] [NEW] Don't change list to tuple when getting info from libvirt

2014-03-12 Thread Shuangtai Tian
Public bug reported:

In the libvirt driver, we currently use code like this:
(state, _max_mem, _mem, _cpus, _t) = virt_dom.info()

If libvirt adds new fields to the domain info, this code will fail with
an error like this:
 File "/opt/stack/nova/nova/service.py", line 180, in start
self.manager.init_host()
  File "/opt/stack/nova/nova/compute/manager.py", line 974, in init_host
self._init_instance(context, instance)
  File "/opt/stack/nova/nova/compute/manager.py", line 882, in _init_instance
drv_state = self._get_power_state(context, instance)
  File "/opt/stack/nova/nova/compute/manager.py", line 990, in _get_power_state
return self.driver.get_info(instance)["state"]
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3462, in get_info
(state, max_mem, mem, num_cpu, cpu_time) = virt_dom.info()
ValueError: too many values to unpack
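
A defensive pattern (a sketch only, not necessarily the submitted fix) is
to slice the returned sequence before unpacking, so that trailing fields
added by newer libvirt versions are simply ignored:

    def _parse_dom_info(dom_info):
        # virt_dom.info() returns a sequence that begins with
        # [state, maxMem, memory, nrVirtCpu, cpuTime]; take only the
        # first five entries so a libvirt that appends new fields does
        # not raise "ValueError: too many values to unpack".
        state, max_mem, mem, num_cpus, cpu_time = dom_info[:5]
        return state, max_mem, mem, num_cpus, cpu_time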

** Affects: ceilometer
 Importance: Undecided
 Assignee: Shuangtai Tian (shuangtai-tian)
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Shuangtai Tian (shuangtai-tian)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Shuangtai Tian (shuangtai-tian)

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
 Assignee: (unassigned) => Shuangtai Tian (shuangtai-tian)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291805

Title:
  Don't change list to tuple when getting info from libvirt

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  In the libvirt driver, we currently use code like this:
  (state, _max_mem, _mem, _cpus, _t) = virt_dom.info()

  If libvirt adds new fields to the domain info, this code will fail with
  an error like this:
   File "/opt/stack/nova/nova/service.py", line 180, in start
  self.manager.init_host()
File "/opt/stack/nova/nova/compute/manager.py", line 974, in init_host
  self._init_instance(context, instance)
File "/opt/stack/nova/nova/compute/manager.py", line 882, in _init_instance
  drv_state = self._get_power_state(context, instance)
File "/opt/stack/nova/nova/compute/manager.py", line 990, in 
_get_power_state
  return self.driver.get_info(instance)["state"]
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3462, in get_info
  (state, max_mem, mem, num_cpu, cpu_time) = virt_dom.info()
  ValueError: too many values to unpack

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1291805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291791] [NEW] nova-manage agent create should do param check

2014-03-12 Thread jichencom
Public bug reported:

[root@xxx ~]# nova-manage agent create --os linux --architecture x86
--version 1.0 --url a...@sina.com --md5hash
abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc

/usr/lib64/python2.6/site-packages/sqlalchemy/engine/default.py:331: Warning: Data truncated for column 'md5hash' at row 1
  cursor.execute(statement, parameters)
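
A simple guard in the agent-create handler would reject malformed values
before they reach the database. This is a sketch: the function name is an
assumption, and the 32-character limit matches a hex MD5 digest rather
than the actual column width, which should be read from the model.

    def validate_md5hash(md5hash):
        # An MD5 digest rendered as hex is exactly 32 characters.
        if len(md5hash) != 32:
            raise ValueError("md5hash must be a 32-character hex digest")
        # int(..., 16) raises ValueError for non-hexadecimal input.
        int(md5hash, 16)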

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichencom (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291791

Title:
  nova-manage agent create should do param check

Status in OpenStack Compute (Nova):
  New

Bug description:
  [root@xxx ~]# nova-manage agent create --os linux --architecture
  x86 --version 1.0 --url a...@sina.com --md5hash
  
abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc

  /usr/lib64/python2.6/site-packages/sqlalchemy/engine/default.py:331: Warning: Data truncated for column 'md5hash' at row 1
    cursor.execute(statement, parameters)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291174] Re: Calling patch.stop not needed in individual test cases

2014-03-12 Thread Henry Gessau
Even though addCleanup(mock.patch.stopall) is in the base test class, I
find that removing some of the existing individual stops causes test
failures during tox full suite runs. Abandoning this change since major
refactoring of neutron unit tests is planned.

** Changed in: neutron
   Status: In Progress => New

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: Henry Gessau (gessau) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291174

Title:
  Calling patch.stop not needed in individual test cases

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  https://bugs.launchpad.net/neutron/+bug/1290550 adds mock.patch.stopall
  to BaseTestCase.
  https://bugs.launchpad.net/neutron/+bug/1291130 removes stopall from
  individual test cases.

  We can now go one step further, and remove individual patch stops from
  cleanups.
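
  For context, the base-class pattern these changes introduced looks
  roughly like this (a sketch of the idea, not neutron's exact
  BaseTestCase):

      import mock
      import testtools

      class BaseTestCase(testtools.TestCase):
          def setUp(self):
              super(BaseTestCase, self).setUp()
              # Stop every patcher started via mock.patch during the
              # test, so individual tests no longer need their own
              # patch.stop() calls in cleanups.
              self.addCleanup(mock.patch.stopall)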

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290234] Re: do not use __builtin__ in Python3

2014-03-12 Thread lvdongbing
** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290234

Title:
  do not use __builtin__ in Python3

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Trove - Database as a Service:
  New
Status in Tuskar:
  Fix Committed

Bug description:
  __builtin__ does not exist in Python 3, use six.moves.builtins
  instead.
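
  For example, a test that patches a builtin portably (the mock usage
  here is illustrative):

      import mock
      from six.moves import builtins  # __builtin__ on Py2, builtins on Py3

      # Patch the built-in open() the same way on both interpreters.
      with mock.patch.object(builtins, 'open',
                             mock.mock_open(read_data='data')):
          with open('anything.txt') as f:
              assert f.read() == 'data'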

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291730] Re: hyper-V: resize failed

2014-03-12 Thread Jay Lau
My bad: there is no need to resize the VHD if the size was not changed.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291730

Title:
  hyper-V: resize failed

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I resize a VM with Hyper-V to a different host, the nova-compute
  service on the target host will exit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291741] [NEW] VMWare: Resize action does not change disk

2014-03-12 Thread Feng Xi Yan
Public bug reported:

In "nova/virt/vmwareapi/vmops.py"

    def finish_migration(self, context, migration, instance, disk_info,
                         network_info, image_meta, resize_instance=False,
                         block_device_info=None, power_on=True):
        """Completes a resize, turning on the migrated instance."""
        if resize_instance:
            client_factory = self._session._get_vim().client.factory
            vm_ref = vm_util.get_vm_ref(self._session, instance)
            vm_resize_spec = vm_util.get_vm_resize_spec(client_factory,
                                                        instance)
            reconfig_task = self._session._call_method(
                self._session._get_vim(),
                "ReconfigVM_Task", vm_ref,
                spec=vm_resize_spec)
            ...

finish_migration uses vm_util.get_vm_resize_spec() to get resize
parameters.

But in "nova/virt/vmwareapi/vm_util.py"

    def get_vm_resize_spec(client_factory, instance):
        """Provides updates for a VM spec."""
        resize_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
        resize_spec.numCPUs = int(instance['vcpus'])
        resize_spec.memoryMB = int(instance['memory_mb'])
        return resize_spec

However, get_vm_resize_spec() only sets CPU count and memory; it never
sets a disk size, so a resize leaves the disk unchanged.
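
A plausible direction for a fix (a sketch only: locating the instance's
VirtualDisk device is elided, and the disk_device argument is an
assumption) is to add a deviceChange entry that edits the disk's
capacityInKB, which is how ReconfigVM_Task grows a disk:

    def get_vm_resize_spec(client_factory, instance, disk_device=None):
        """Sketch: include the root disk in the resize spec."""
        resize_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
        resize_spec.numCPUs = int(instance['vcpus'])
        resize_spec.memoryMB = int(instance['memory_mb'])
        if disk_device is not None:
            disk_spec = client_factory.create('ns0:VirtualDeviceConfigSpec')
            disk_spec.operation = 'edit'
            # Grow the existing VirtualDisk to the flavor's root size.
            disk_device.capacityInKB = int(instance['root_gb']) * 1024 * 1024
            disk_spec.device = disk_device
            resize_spec.deviceChange = [disk_spec]
        return resize_spec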

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: resize vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291741

Title:
  VMWare: Resize action does not change disk

Status in OpenStack Compute (Nova):
  New

Bug description:
  In "nova/virt/vmwareapi/vmops.py"

    def finish_migration(self, context, migration, instance, disk_info,
                         network_info, image_meta, resize_instance=False,
                         block_device_info=None, power_on=True):
        """Completes a resize, turning on the migrated instance."""
        if resize_instance:
            client_factory = self._session._get_vim().client.factory
            vm_ref = vm_util.get_vm_ref(self._session, instance)
            vm_resize_spec = vm_util.get_vm_resize_spec(client_factory,
                                                        instance)
            reconfig_task = self._session._call_method(
                self._session._get_vim(),
                "ReconfigVM_Task", vm_ref,
                spec=vm_resize_spec)
            ...

  finish_migration uses vm_util.get_vm_resize_spec() to get resize
  parameters.

  But in "nova/virt/vmwareapi/vm_util.py"

    def get_vm_resize_spec(client_factory, instance):
        """Provides updates for a VM spec."""
        resize_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
        resize_spec.numCPUs = int(instance['vcpus'])
        resize_spec.memoryMB = int(instance['memory_mb'])
        return resize_spec

  However, get_vm_resize_spec() only sets CPU count and memory; it never
  sets a disk size, so a resize leaves the disk unchanged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291730] [NEW] hyper-V: resize failed

2014-03-12 Thread Jay Lau
Public bug reported:

When I resize a VM with Hyper-V to a different host, the nova-compute
service on the target host will exit.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291730

Title:
  hyper-V: resize failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I resize a VM with Hyper-V to a different host, the nova-compute
  service on the target host will exit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290234] Re: do not use __builtin__ in Python3

2014-03-12 Thread Xurong Yang
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Xurong Yang (idopra)

** Changed in: glance
   Status: New => In Progress

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Xurong Yang (idopra)

** Changed in: cinder
   Status: New => In Progress

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
 Assignee: (unassigned) => Xurong Yang (idopra)

** Changed in: heat
   Status: New => In Progress

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Xurong Yang (idopra)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1290234

Title:
  do not use __builtin__ in Python3

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Tuskar:
  Fix Committed

Bug description:
  __builtin__ does not exist in Python 3, use six.moves.builtins
  instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291726] [NEW] The error message can't be returned to client when HTTPUnprocessableEntity happens

2014-03-12 Thread Haiwei Xu
Public bug reported:

When an HTTPUnprocessableEntity error occurs, no error message is passed
as a parameter, so the client can't receive the details of the exception;
it just gets the default error message defined in the
HTTPUnprocessableEntity class:

    class HTTPUnprocessableEntity(HTTPClientError):
        """
        subclass of :class:`~HTTPClientError`

        This indicates that the server is unable to process the contained
        instructions. Only for WebDAV.

        code: 422, title: Unprocessable Entity
        """
        ## Note: from WebDAV
        code = 422
        title = 'Unprocessable Entity'
        explanation = 'Unable to process the contained instructions'

That's not good, the error messages should be returned to the client.
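
The fix pattern is to pass the detail through webob's explanation
parameter when raising (the message text below is invented for
illustration):

    import webob.exc

    def refuse(reason):
        # An explicit explanation overrides the class-level default, so
        # the client sees the real cause instead of the generic text.
        raise webob.exc.HTTPUnprocessableEntity(explanation=reason)

    refuse("instance is locked and cannot be updated")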

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291726

Title:
  The error message can't be returned to client when
  HTTPUnprocessableEntity happens

Status in OpenStack Compute (Nova):
  New

Bug description:
  When an HTTPUnprocessableEntity error occurs, no error message is
  passed as a parameter, so the client can't receive the details of the
  exception; it just gets the default error message defined in the
  HTTPUnprocessableEntity class:

      class HTTPUnprocessableEntity(HTTPClientError):
          """
          subclass of :class:`~HTTPClientError`

          This indicates that the server is unable to process the contained
          instructions. Only for WebDAV.

          code: 422, title: Unprocessable Entity
          """
          ## Note: from WebDAV
          code = 422
          title = 'Unprocessable Entity'
          explanation = 'Unable to process the contained instructions'

  That's not good, the error messages should be returned to the client.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291705] [NEW] Oauth manager created multiple times

2014-03-12 Thread Brant Knudson
Public bug reported:

The oauth1.Manager class, which is @dependency.provider('oauth_api'), is
created multiple times when it should only be created once. A 'provider'
should only be created once because otherwise the new instance replaces
the one that's stored in the dependency map. Luckily, the oauth1.Manager
doesn't store any state so it's safe to do in this case, but it makes
for a bad example that others are copying and it might not be safe in
those cases.
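
The intended shape is one shared instance; a sketch (the lazy accessor is
an assumed illustration of "create the provider once", not Keystone's
actual wiring):

    from keystone.contrib import oauth1  # import path is an assumption

    _OAUTH_MANAGER = None

    def get_oauth_manager():
        # Create the @dependency.provider('oauth_api') object exactly
        # once; a second instantiation would silently replace the first
        # in the dependency map.
        global _OAUTH_MANAGER
        if _OAUTH_MANAGER is None:
            _OAUTH_MANAGER = oauth1.Manager()
        return _OAUTH_MANAGER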

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291705

Title:
  Oauth manager created multiple times

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The oauth1.Manager class, which is @dependency.provider('oauth_api'),
  is created multiple times when it should only be created once. A
  'provider' should only be created once because otherwise the new
  instance replaces the one that's stored in the dependency map.
  Luckily, the oauth1.Manager doesn't store any state so it's safe to do
  in this case, but it makes for a bad example that others are copying
  and it might not be safe in those cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280494] Re: VMware datastore backend should support storage policies

2014-03-12 Thread Arnaud Legendre
Marking this bug as invalid. It fits more in a blueprint.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1280494

Title:
  VMware datastore backend should support storage policies

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  The VMware storage backend should be able to place the images based on 
storage policies.
  Currently, it is only possible to select one datastore to place the images.
  The user can provide a storage policy defined in vCenter server instead of 
specifying directly a datastore.

  Among the subset of datastores matching the policy, the datastore with
  most free space and accessible will be selected.

  If the policy provided is not found in VC or no matching datastore if
  found, the datastore specified in the configuration will be used.

  SPBM should be used with vCenter server 5.5+.

  More information about SPBM:
  
http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc%2FGUID-C8E919D0-9D80-4AE1-826B-D180632775F3.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1280494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291695] [NEW] BigSwitch: should call eventlet sleep in watchdog

2014-03-12 Thread Kevin Benton
Public bug reported:

The consistency watchdog greenthread currently calls time.sleep, which
blocks other greenthreads that are members of the same pool.

https://github.com/openstack/neutron/blob/288e3127440158f177beaae1972236def4916251/neutron/plugins/bigswitch/servermanager.py#L554

It should use eventlet.sleep so it yields to other members of the same
pool.
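
The change amounts to one line in the watchdog loop; a condensed sketch
(the polling body is invented for illustration):

    import eventlet

    def consistency_watchdog(polling_interval):
        while True:
            # eventlet.sleep() yields to the event hub so other
            # greenthreads in the pool can run; time.sleep() would block
            # the whole OS thread.
            eventlet.sleep(polling_interval)
            # ... poll the backend controllers for consistency here ...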

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress


** Tags: bigswitch

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Tags added: bigswitch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291695

Title:
  BigSwitch: should call eventlet sleep in watchdog

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The consistency watchdog greenthread currently calls time.sleep, which
  blocks other greenthreads that are members of the same pool.

  
https://github.com/openstack/neutron/blob/288e3127440158f177beaae1972236def4916251/neutron/plugins/bigswitch/servermanager.py#L554

  It should use eventlet.sleep so it yields to other members of the same
  pool.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291690] [NEW] delete router interface fail if neutron and nvp out of sync

2014-03-12 Thread Bhuvan Arumugam
Public bug reported:

It's similar to https://bugs.launchpad.net/neutron/+bug/1251422, but for
routers: if we delete a router from neutron that is already deleted in
NVP, it throws a 404 error. The correct behavior is to delete it from
neutron if it is already deleted in NVP.

rainbow:~ bhuvan$ neutron router-interface-delete tempest-router 67056b2d-a924-4456-9050-ed0baa0eaf1a
404-{u'NeutronError': {u'message': u'Router d6f3c0c6-6884-467f-9a84-5a64b88f8936 has no interface on subnet 67056b2d-a924-4456-9050-ed0baa0eaf1a', u'type': u'RouterInterfaceNotFoundForSubnet', u'detail': u''}}

The neutron server log is below. Note: the 404 error from NVP is logged
at INFO level; it should be a WARNING.

2014-03-12 22:42:26,149 (keystoneclient.middleware.auth_token): DEBUG auth_token _build_user_headers Received request from user: tempest-admin with project_id : csi-tenant-tempest and roles: csi-tenant-admin,csi-role-admin
2014-03-12 22:42:26,151 (routes.middleware): DEBUG middleware __call__ Matched PUT /routers/d6f3c0c6-6884-467f-9a84-5a64b88f8936/remove_router_interface.json
2014-03-12 22:42:26,151 (routes.middleware): DEBUG middleware __call__ Route path: '/routers/:(id)/remove_router_interface.:(format)', defaults: {'action': u'remove_router_interface', 'controller': >}
2014-03-12 22:42:26,151 (routes.middleware): DEBUG middleware __call__ Match dict: {'action': u'remove_router_interface', 'controller': >, 'id': u'd6f3c0c6-6884-467f-9a84-5a64b88f8936', 'format': u'json'}
2014-03-12 22:42:26,208 (neutron.api.v2.resource): INFO resource resource remove_router_interface failed (client error): Router d6f3c0c6-6884-467f-9a84-5a64b88f8936 has no interface on subnet 67056b2d-a924-4456-9050-ed0baa0eaf1a
2014-03-12 22:42:26,210 (neutron.wsgi): INFO log write 17.199.81.86 - - [12/Mar/2014 22:42:26] "PUT /v2.0/routers/d6f3c0c6-6884-467f-9a84-5a64b88f8936/remove_router_interface.json HTTP/1.1" 404 329 0.066915

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: nicira

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291690

Title:
  delete router interface fail if neutron and nvp out of sync

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It's similar to https://bugs.launchpad.net/neutron/+bug/1251422, but
  for routers: if we delete a router from neutron that is already deleted
  in NVP, it throws a 404 error. The correct behavior is to delete it
  from neutron if it is already deleted in NVP.

  rainbow:~ bhuvan$ neutron router-interface-delete tempest-router 67056b2d-a924-4456-9050-ed0baa0eaf1a
  404-{u'NeutronError': {u'message': u'Router d6f3c0c6-6884-467f-9a84-5a64b88f8936 has no interface on subnet 67056b2d-a924-4456-9050-ed0baa0eaf1a', u'type': u'RouterInterfaceNotFoundForSubnet', u'detail': u''}}

  The neutron server log is below. Note: the 404 error from NVP is logged
  at INFO level; it should be a WARNING.

  2014-03-12 22:42:26,149 (keystoneclient.middleware.auth_token): DEBUG auth_token _build_user_headers Received request from user: tempest-admin with project_id : csi-tenant-tempest and roles: csi-tenant-admin,csi-role-admin
  2014-03-12 22:42:26,151 (routes.middleware): DEBUG middleware __call__ Matched PUT /routers/d6f3c0c6-6884-467f-9a84-5a64b88f8936/remove_router_interface.json
  2014-03-12 22:42:26,151 (routes.middleware): DEBUG middleware __call__ Route path: '/routers/:(id)/remove_router_interface.:(format)', defaults: {'action': u'remove_router_interface', 'controller': >}
  2014-03-12 22:42:26,151 (routes.middleware): DEBUG middleware __call__ Match dict: {'action': u'remove_router_interface', 'controller': >, 'id': u'd6f3c0c6-6884-467f-9a84-5a64b88f8936', 'format': u'json'}
  2014-03-12 22:42:26,208 (neutron.api.v2.resource): INFO resource resource remove_router_interface failed (client error): Router d6f3c0c6-6884-467f-9a84-5a64b88f8936 has no interface on subnet 67056b2d-a924-4456-9050-ed0baa0eaf1a
  2014-03-12 22:42:26,210 (neutron.wsgi): INFO log write 17.199.81.86 - - [12/Mar/2014 22:42:26] "PUT /v2.0/routers/d6f3c0c6-6884-467f-9a84-5a64b88f8936/remove_router_interface.json HTTP/1.1" 404 329 0.066915

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291684] [NEW] After deleting a volume snapshot, user is automatically switched over to Volume tab

2014-03-12 Thread mariam john
Public bug reported:

Click Project -> Volumes. Create a new volume and create a snapshot from
it. This will take you to the snapshot view. Now, from this view, click
the 'Delete Volume Snapshot' link: the user is automatically switched
back to the Volumes view. You can also create multiple snapshots and
test this: the switch happens only for the first deletion, and deleting
the remaining snapshots keeps us in the snapshot view. The user would
expect to stay in the snapshot view after deleting any snapshot.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291684

Title:
  After deleting a volume snapshot, user is automatically switched over
  to Volume tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Click Project -> Volumes. Create a new volume and create a snapshot
  from it. This will take you to the snapshot view. Now, from this view,
  click the 'Delete Volume Snapshot' link: the user is automatically
  switched back to the Volumes view. You can also create multiple
  snapshots and test this: the switch happens only for the first
  deletion, and deleting the remaining snapshots keeps us in the snapshot
  view. The user would expect to stay in the snapshot view after deleting
  any snapshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291681] [NEW] Success message shown when deleting a volume /volume snapshot

2014-03-12 Thread mariam john
Public bug reported:

Click on Project -> Volumes. Create a new volume and volume snapshot.
Click on the Delete Volume/Delete Snapshot button and it will display
the following message:

Success: Schedule deletion of volume/volume snapshot {name}

Since the deletion is not complete at that point, isn't it more
appropriate to show this as an 'Info' message instead of 'Success' (like
the message shown when a volume/volume snapshot is created)?

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291681

Title:
  Success message shown when deleting a volume /volume snapshot

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Click on Project -> Volumes. Create a new volume and volume snapshot.
  Click on the Delete Volume/Delete Snapshot button and it will display
  the following message:

  Success: Schedule deletion of volume/volume snapshot {name}

  Since the deletion is not complete at that point, isn't it more
  appropriate to show this as an 'Info' message instead of 'Success'
  (like the message shown when a volume/volume snapshot is created)?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291676] [NEW] Able to create multiple volumes and volume snapshots with the same name

2014-03-12 Thread mariam john
Public bug reported:

Under Project -> Volumes, create a new volume, say 'vol1'. Now create
one or more snapshots from this volume and give them all the same name.
Then create another new volume using the same name 'vol1' and create
snapshots with the same name. All of these operations succeed, so
nothing prevents duplicate volume or snapshot names.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291676

Title:
  Able to create multiple volumes and volume snapshots with the same
  name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Under Project -> Volumes, create a new volume, say 'vol1'. Now create
  one or more snapshots from this volume and give them all the same name.
  Then create another new volume using the same name 'vol1' and create
  snapshots with the same name. All of these operations succeed, so
  nothing prevents duplicate volume or snapshot names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291677] [NEW] delete interface times out in check-tempest-dsvm-neutron

2014-03-12 Thread Brant Knudson
Public bug reported:


This occurred during a tempest run on https://review.openstack.org/#/c/78459/ 
(in Keystone). check-tempest-dsvm-neutron failed with a single test failure: 
tempest.api.compute.v3.servers.test_attach_interfaces.AttachInterfacesV3Test.test_create_list_show_delete_interfaces

Looks like the test gets a token, starts booting an instance:

POST http://127.0.0.1:5000/v2.0/tokens - Status: 200
POST http://127.0.0.1:8774/v3/servers - Status: 202

... this is all probably expected... eventually it does a DELETE:

2014-03-12 20:59:32,797 
Request: DELETE 
http://127.0.0.1:8774/v3/servers/f4433c4f-4d27-492d-8794-77674a634c3f/os-attach-interfaces/f145579d-aaa5-4d44-941c-9856500f65b5
Response Status: 202

Then it starts requesting status... and eventually it gives up:

2014-03-12 21:02:48,016
GET 
http://127.0.0.1:8774/v3/servers/f4433c4f-4d27-492d-8794-77674a634c3f/os-attach-interfaces
Response Status: 200
Response Body: ... "port_state": "ACTIVE",  ... "port_state": "ACTIVE" ... 
"port_state": "ACTIVE"

The test failed in _test_delete_interface:

File "tempest/api/compute/v3/servers/test_attach_interfaces.py", line 127, in 
test_create_list_show_delete_interfaces
File "tempest/api/compute/v3/servers/test_attach_interfaces.py", line 96, in 
_test_delete_interface
Details: Failed to delete interface within the required time: 196 sec.

The timings at the end show the slowpoke:

tempest.api.compute.v3.servers.test_attach_interfaces.AttachInterfacesV3Test.test_create_list_show_delete_interfaces[gate,smoke] 218.999



Neutron's last entry for the instance in q-svc.txt is

[12/Mar/2014 21:02:52] "GET /v2.0/ports.json?device_id=f4433c4f-
4d27-492d-8794-77674a634c3f HTTP/1.1" 200 1912 0.027403

I grepped through the q-svc.txt log file and I don't see anyplace where
the DELETE is forwarded on.



Did the test not wait long enough for the operation to complete? Was the
shutdown request ignored?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291677

Title:
  delete interface times out in check-tempest-dsvm-neutron

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  This occurred during a tempest run on https://review.openstack.org/#/c/78459/ 
(in Keystone). check-tempest-dsvm-neutron failed with a single test failure: 
tempest.api.compute.v3.servers.test_attach_interfaces.AttachInterfacesV3Test.test_create_list_show_delete_interfaces

  Looks like the test gets a token, starts booting an instance:

  POST http://127.0.0.1:5000/v2.0/tokens - Status: 200
  POST http://127.0.0.1:8774/v3/servers - Status: 202

  ... this is all probably expected... eventually it does a DELETE:

  2014-03-12 20:59:32,797 
  Request: DELETE 
http://127.0.0.1:8774/v3/servers/f4433c4f-4d27-492d-8794-77674a634c3f/os-attach-interfaces/f145579d-aaa5-4d44-941c-9856500f65b5
  Response Status: 202

  Then it starts requesting status... and eventually it gives up:

  2014-03-12 21:02:48,016
  GET 
http://127.0.0.1:8774/v3/servers/f4433c4f-4d27-492d-8794-77674a634c3f/os-attach-interfaces
  Response Status: 200
  Response Body: ... "port_state": "ACTIVE",  ... "port_state": "ACTIVE" ... 
"port_state": "ACTIVE"

  The test failed in _test_delete_interface:

  File "tempest/api/compute/v3/servers/test_attach_interfaces.py", line 127, in 
test_create_list_show_delete_interfaces
  File "tempest/api/compute/v3/servers/test_attach_interfaces.py", line 96, in 
_test_delete_interface
  Details: Failed to delete interface within the required time: 196 sec.

  The timings at the end show the slowpoke:

  
  tempest.api.compute.v3.servers.test_attach_interfaces.AttachInterfacesV3Test.test_create_list_show_delete_interfaces[gate,smoke] 218.999

  

  Neutron's last entry for the instance in q-svc.txt is

  [12/Mar/2014 21:02:52] "GET /v2.0/ports.json?device_id=f4433c4f-
  4d27-492d-8794-77674a634c3f HTTP/1.1" 200 1912 0.027403

  I grepped through the q-svc.txt log file and I don't see anyplace
  where the DELETE is forwarded on.

  

  Did the test not wait long enough for the operation to complete? Was
  the shutdown request ignored?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291669] [NEW] Trove - Database Backup Quota Error Message Masked

2014-03-12 Thread Auston McReynolds
Public bug reported:

If the database-backup quota for a tenant is exhausted, attempting to
create a backup will result in the following red balloon "Error: Error
creating database backup.".

This error message is not helpful, and is actually masking the true
error message of "OverLimit: Quota exceeded for resources: ['backups']
(HTTP 413)".

** Affects: horizon
 Importance: Undecided
 Assignee: Auston McReynolds (amcrn)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Auston McReynolds (amcrn)

** Description changed:

  If the database-backup quota for a tenant is exhausted, attempting to
- create a backup with result in the following red balloon "Error: Error
+ create a backup will result in the following red balloon "Error: Error
  creating database backup.".
  
- This error message is not helpful, and is actualy masking the true error
- message of "OverLimit: Quota exceeded for resources: ['backups'] (HTTP
- 413)".
+ This error message is not helpful, and is actually masking the true
+ error message of "OverLimit: Quota exceeded for resources: ['backups']
+ (HTTP 413)".

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291669

Title:
  Trove - Database Backup Quota Error Message Masked

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If the database-backup quota for a tenant is exhausted, attempting to
  create a backup will result in the following red balloon "Error: Error
  creating database backup.".

  This error message is not helpful, and is actually masking the true
  error message of "OverLimit: Quota exceeded for resources: ['backups']
  (HTTP 413)".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291646] [NEW] Make the VMware datastore backend more robust

2014-03-12 Thread Arnaud Legendre
Public bug reported:

Several issues to address:

- need better error handling for the add() operation: we need to catch
the exception when httplib fails, and also log when the response is not
OK or CREATED.
- need to handle cases where store_image_dir contains unexpected
characters. It should support the following use cases:
/openstack_glance
openstack_glance
openstack_glance/
openstack glance  -> this one should fail, with logging
openstack+glance
etc.
- need to quote special characters
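
For the quoting item, the standard library already covers it; a sketch:

    from six.moves import urllib

    def normalized_store_dir(store_image_dir):
        # Strip stray slashes, then percent-encode anything unsafe in a
        # URL path segment, e.g. "openstack glance" -> "openstack%20glance".
        folder = store_image_dir.strip('/')
        return urllib.parse.quote(folder)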

** Affects: glance
 Importance: Undecided
 Assignee: Arnaud Legendre (arnaudleg)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Arnaud Legendre (arnaudleg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1291646

Title:
  Make the VMware datastore backend more robust

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  Several issues to address:

  - need better error handling for the add() operation: we need to catch
  the exception when httplib fails, and also log when the response is not
  OK or CREATED.
  - need to handle cases where store_image_dir contains unexpected
  characters. It should support the following use cases:
  /openstack_glance
  openstack_glance
  openstack_glance/
  openstack glance  -> this one should fail, with logging
  openstack+glance
  etc.
  - need to quote special characters

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1291646/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291639] [NEW] tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_hotplug_nic failed in CI

2014-03-12 Thread Ben Nemec
Public bug reported:

I don't think this is related to my change and it doesn't reproduce
locally for me, so I'm going to recheck it as a bug.  My best guess as
to the cause of the failure is
http://logs.openstack.org/37/61037/22/check/check-tempest-dsvm-neutron/9aee88a/logs/screen-q-svc.txt.gz#_2014-03-12_18_15_04_904

OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded;
try restarting transaction') 'UPDATE ports SET status=%s WHERE ports.id
= %s' ('DOWN', '398c88d4-5ca7-4225-94a2-ec3b2e11300d')

The change that failed is https://review.openstack.org/#/c/61037/22

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291639

Title:
  tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_hotplug_nic
  failed in CI

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I don't think this is related to my change and it doesn't reproduce
  locally for me, so I'm going to recheck it as a bug.  My best guess as
  to the cause of the failure is
  http://logs.openstack.org/37/61037/22/check/check-tempest-dsvm-neutron/9aee88a/logs/screen-q-svc.txt.gz#_2014-03-12_18_15_04_904

  OperationalError: (OperationalError) (1205, 'Lock wait timeout
  exceeded; try restarting transaction') 'UPDATE ports SET status=%s
  WHERE ports.id = %s' ('DOWN', '398c88d4-5ca7-4225-94a2-ec3b2e11300d')

  The change that failed is https://review.openstack.org/#/c/61037/22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269246] Re: some variable names are not friendly

2014-03-12 Thread Mark McClain
You are correct that it is bad to name variables with the same name as
builtins. We are planning work in Juno to significantly revamp the
plugins, and this is one of the items to fix.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269246

Title:
  some variable names are not friendly

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  In neutron/db/db_base_plugin_v2.py, the code is:

    def _generate_ip(context, subnets):
        """Generate an IP address.

        The IP address will be generated from one of the subnets defined on
        the network.
        """
        range_qry = context.session.query(
            models_v2.IPAvailabilityRange).join(
                models_v2.IPAllocationPool).with_lockmode('update')
        for subnet in subnets:
            range = range_qry.filter_by(subnet_id=subnet['id']).first()
            if not range:
                LOG.debug(_("All IPs from subnet %(subnet_id)s (%(cidr)s) "
                            "allocated"),
                          {'subnet_id': subnet['id'], 'cidr': subnet['cidr']})
                continue
            ip_address = range['first_ip']
            LOG.debug(_("Allocated IP - %(ip_address)s from %(first_ip)s "
                        "to %(last_ip)s"),
                      {'ip_address': ip_address,
                       'first_ip': range['first_ip'],
                       'last_ip': range['last_ip']})
            if range['first_ip'] == range['last_ip']:
                # No more free indices on subnet => delete
                LOG.debug(_("No more free IP's in slice. Deleting allocation "
                            "pool."))
                context.session.delete(range)
            else:
                # increment the first free
                range['first_ip'] = str(netaddr.IPAddress(ip_address) + 1)
            return {'ip_address': ip_address, 'subnet_id': subnet['id']}
        raise q_exc.IpAddressGenerationFailure(net_id=subnets[0]['network_id'])

    @staticmethod
    def _allocate_specific_ip(context, subnet_id, ip_address):
        """Allocate a specific IP address on the subnet."""
        ip = int(netaddr.IPAddress(ip_address))
        range_qry = context.session.query(
            models_v2.IPAvailabilityRange).join(
                models_v2.IPAllocationPool).with_lockmode('update')
        results = range_qry.filter_by(subnet_id=subnet_id)
        for range in results:
            first = int(netaddr.IPAddress(range['first_ip']))
            last = int(netaddr.IPAddress(range['last_ip']))
            if first <= ip <= last:
                if first == last:
                    context.session.delete(range)
                    return
                elif first == ip:
                    range['first_ip'] = str(netaddr.IPAddress(ip_address) + 1)
                    return
                elif last == ip:
                    range['last_ip'] = str(netaddr.IPAddress(ip_address) - 1)
                    return
                else:
                    # Split into two ranges
                    new_first = str(netaddr.IPAddress(ip_address) + 1)
                    new_last = range['last_ip']
                    range['last_ip'] = str(netaddr.IPAddress(ip_address) - 1)
                    ip_range = models_v2.IPAvailabilityRange(
                        allocation_pool_id=range['allocation_pool_id'],
                        first_ip=new_first,
                        last_ip=new_last)
                    context.session.add(ip_range)
                    return

  Both functions use range as a variable name, which is not friendly: the
  name shadows the builtin range() and could be replaced by ip_range or
  another name.
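
  A minimal illustration of why the shadowing matters:

      results = [object(), object()]  # stand-ins for the range rows
      for range in results:           # rebinds the name 'range'
          pass
      range(10)  # TypeError: 'object' object is not callable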

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276587] Re: neutron not working on Havana Debian wheezy

2014-03-12 Thread Mark McClain
That is not an official installation guide, so you will need to contact
the author of that guide for assistance.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276587

Title:
  neutron not working on Havana Debian wheezy

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I followed this guide to install Havana, but found that neutron is not
  working: the service starts and has a PID file, but there is no process
  and nothing is listening on port 9696.

  I ran apt-get update & dist-upgrade to bring everything up to date.
  root@ops-whz-ctl:~# uname -an
  Linux ops-whz-ctl 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux

  Please give me a solution. Thanks a lot.

  https://github.com/reusserl/OpenStack-Install-Guide/blob/master/OpenStack_Havana_Debian_Wheezy_Install_Guide.rst

  
  root@ops-whz-ctl:~# keystone service-list
  +----------------------------------+----------+----------+------------------------------+
  |                id                |   name   |   type   |         description          |
  +----------------------------------+----------+----------+------------------------------+
  | 4557b26cfafe4808963d3eccae4684aa |  cinder  |  volume  |   OpenStack Volume Service   |
  | b904a4f7eadc40ddbff16f84556f201e |   ec2    |   ec2    |    OpenStack EC2 service     |
  | 03349e78b51b4647b4449c90bf27e7b1 |  glance  |  image   |   OpenStack Image Service    |
  | 8fe41661d319454185e324344df34efb | keystone | identity |      OpenStack Identity      |
  | 44d4feabf3e745fd9c56d6969489d058 | neutron  | network  | OpenStack Networking service |
  | 6c035052679143f187e87d9ec1486ad9 |   nova   | compute  |  OpenStack Compute Service   |
  +----------------------------------+----------+----------+------------------------------+

  root@ops-whz-ctl:~# grep -r -i "neutron" /etc/nova
  /etc/nova/nova.conf:#nova.network.neutronv2.api.API (if you want to use Neutron)
  /etc/nova/nova.conf:network_api_class=nova.network.neutronv2.api.API
  /etc/nova/nova.conf:#  neutron (if you use neutron)
  /etc/nova/nova.conf:security_group_api = neutron
  /etc/nova/nova.conf:# When using Neutron and OVS, use: nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver
  /etc/nova/nova.conf:# for Neutron, use: nova.network.linux_net.LinuxOVSInterfaceDriver
  /etc/nova/nova.conf:# For Neutron and OVS, use: nova.virt.firewall.NoopFirewallDriver (since this is handled by Neutron)
  /etc/nova/nova.conf:# Neutron #
  /etc/nova/nova.conf:# This is the URL of your neutron server:
  /etc/nova/nova.conf:neutron_url=http://10.10.10.51:9696
  /etc/nova/nova.conf:neutron_auth_strategy=keystone
  /etc/nova/nova.conf:neutron_admin_tenant_name=service
  /etc/nova/nova.conf:neutron_admin_username=neutron
  /etc/nova/nova.conf:neutron_admin_password=servicePass123
  /etc/nova/nova.conf:neutron_admin_auth_url=http://10.10.10.51:35357/v2.0
  /etc/nova/nova.conf:# Set flag to indicate Neutron will proxy metadata requests
  /etc/nova/nova.conf:# and resolve instance ids. This is needed to use neutron-metadata-agent
  /etc/nova/nova.conf:# which doesn't work with neutron) (boolean value)
  /etc/nova/nova.conf:service_neutron_metadata_proxy=True
  /etc/nova/nova.conf:# Shared secret to validate proxies Neutron metadata requests
  /etc/nova/nova.conf:# This password should match what is in /etc/neutron/metadata_agent.ini
  /etc/nova/nova.conf:#neutron_metadata_proxy_shared_secret=
  /etc/nova/nova.conf:neutron_metadata_proxy_shared_secret = helloOpenStack123


  root@ops-whz-ctl:~# grep -v ^$ /etc/neutron/neutron.conf |grep -v ^#
  [DEFAULT]
   verbose = True
   state_path = /var/lib/neutron
  lock_path = $state_path/lock
  core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
  service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
  rabbit_host = 10.10.10.51
  rabbit_password = guest
  rabbit_userid = guest
  notification_driver = neutron.openstack.common.notifier.rpc_notifier
  [quotas]
  [agent]
  root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
  [keystone_authtoken]
  auth_host = 10.10.10.51
  auth_port = 35357
  auth_protocol = http
  admin_tenant_name = service
  admin_user = neutron
  admin_password = servicePass123
  signing_dir = $state_path/keystone-signing
  [database]
  connection = mysql://neutronUser:neutronPass357@10.10.10.51/neutron
  [service_providers]
  service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  
  root@ops-whz-ctl:~# grep -v ^$ /etc/neutron/ |grep -v ^#
  api-paste.ini neutron.conf  policy.json   rootwrap.d/
  fwaas_driver.ini  plugins/  rootwrap.conf
  root@ops-whz-ctl:~# grep -v ^$ /etc/neutron/api-paste.ini |grep -v ^#
  [composite:neutron]
  use = egg:Paste#urlmap
  /: n

[Yahoo-eng-team] [Bug 1291637] [NEW] memcache client race

2014-03-12 Thread Peter Feiner
Public bug reported:

Nova uses thread-unsafe memcache client objects in multiple threads. For
instance, nova-api's metadata WSGI server uses the same
nova.api.metadata.handler.MetadataRequestHandler._cache object for every
request. A memcache client object is thread unsafe because it has a
single open socket connection to memcached. Thus the multiple threads
will read from & write to the same socket fd.

Keystoneclient has the same bug. See https://bugs.launchpad.net/python-
keystoneclient/+bug/1289074 for a patch to fix the problem.
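
One common remedy (a sketch in the spirit of the keystoneclient patch:
never share one client across threads) is a thread-local client:

    import threading

    import memcache

    _local = threading.local()

    def get_memcache_client(servers):
        # Each thread lazily builds and reuses its own client, so two
        # threads never read from or write to the same memcached socket.
        if not hasattr(_local, 'client'):
            _local.client = memcache.Client(servers)
        return _local.client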

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291637

Title:
  memcache client race

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova uses thread-unsafe memcache client objects in multiple threads.
  For instance, nova-api's metadata WSGI server uses the same
  nova.api.metadata.handler.MetadataRequestHandler._cache object for
  every request. A memcache client object is thread-unsafe because it
  has a single open socket connection to memcached, so multiple threads
  end up reading from and writing to the same socket fd.

  Keystoneclient has the same bug. See
  https://bugs.launchpad.net/python-keystoneclient/+bug/1289074 for a
  patch to fix the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291617] [NEW] guest vm hostname is not set to the instance name

2014-03-12 Thread Meena Pitchiah
Public bug reported:

Create a new VM with a name like "testvm1" using the default cirros image.
Wait for it to be active and SSH-able.
SSH to the guest VM just created.
Check the hostname - it is "cirros" instead of the expected hostname "testvm1".
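For reference, a quick way to compare the two values from inside the
guest (the EC2-style metadata endpoint is standard; this snippet is
illustrative, not from the report):

$ curl -s http://169.254.169.254/latest/meta-data/hostname   # what the metadata service advertises
$ hostname                                                   # what the image actually set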

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291617

Title:
  guest vm hostname is not set to the instance name

Status in OpenStack Compute (Nova):
  New

Bug description:
  Create a new VM with a name like "testvm1" using the default cirros image.
  Wait for it to be active and SSH-able.
  SSH to the guest VM just created.
  Check the hostname - it is "cirros" instead of the expected hostname "testvm1".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291620] [NEW] test_create_image_with_reboot fails with UnexpectedTaskStateError in gate-nova-python*

2014-03-12 Thread Matt Riedemann
Public bug reported:

Similar to bug 1266611 but with a different failure:

http://logs.openstack.org/63/71063/15/gate/gate-nova-python27/9ecf426/console.html.gz

http://logs.openstack.org/47/69047/15/check/gate-nova-python26/38dcf86/console.html#_2014-03-09_23_37_24_801

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5leHBlY3RlZFRhc2tTdGF0ZUVycm9yOiBVbmV4cGVjdGVkIHRhc2sgc3RhdGU6IGV4cGVjdGluZyBbTm9uZV0gYnV0IHRoZSBhY3R1YWwgc3RhdGUgaXMgcG93ZXJpbmctb2ZmXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIgQU5EIChidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI3XCIgT1IgYnVpbGRfbmFtZTpcImdhdGUtbm92YS1weXRob24yNlwiKSIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDY1OTA0MTkwMX0=

4 hits in 7 days, and this shows up on the unclassified e-r bugs page.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291620

Title:
  test_create_image_with_reboot fails with UnexpectedTaskStateError in
  gate-nova-python*

Status in OpenStack Compute (Nova):
  New

Bug description:
  Similar to bug 1266611 but with a different failure:

  http://logs.openstack.org/63/71063/15/gate/gate-nova-python27/9ecf426/console.html.gz

  http://logs.openstack.org/47/69047/15/check/gate-nova-python26/38dcf86/console.html#_2014-03-09_23_37_24_801

  http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5leHBlY3RlZFRhc2tTdGF0ZUVycm9yOiBVbmV4cGVjdGVkIHRhc2sgc3RhdGU6IGV4cGVjdGluZyBbTm9uZV0gYnV0IHRoZSBhY3R1YWwgc3RhdGUgaXMgcG93ZXJpbmctb2ZmXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIgQU5EIChidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI3XCIgT1IgYnVpbGRfbmFtZTpcImdhdGUtbm92YS1weXRob24yNlwiKSIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDY1OTA0MTkwMX0=

  4 hits in 7 days, and this shows up on the unclassified e-r bugs page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291620/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274107] Re: Converting resources from plural to single is manual

2014-03-12 Thread Mark McClain
Extensions are being redesigned in Juno, so pluralization will be
handled differently.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274107

Title:
  Converting resources from plural to single is manual

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  In the Neutron extensions resource-building process, plural and single resource 
names are converted manually by adding or removing an 's' ending. For special 
cases where the 's' ending is wrong, for instance policy/policies, the 
conversion is hard-coded.
  Each extension does this, and does it a bit differently.
  This plural<->single conversion is done in several places.

  The proposal is to add two functions (plural2single and single2plural) to the 
neutron.common.utils code,
  so these functions can be used for conversions wherever they are needed.

  Those new functions should consider some common (or all, which is
  maybe unnecessary) special English-language exceptions for single-
  plural conversion - http://en.wikipedia.org/wiki/English_plurals


  Once this bug is approved, all occurrences of plural<->single 
conversions should be replaced with the common.utils functions.
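  A minimal sketch of what those helpers might look like (hypothetical
  code; only the function names come from this proposal):

  _IRREGULAR = {'policy': 'policies'}  # extend with other known exceptions

  def single2plural(noun):
      if noun in _IRREGULAR:
          return _IRREGULAR[noun]
      if noun.endswith('y') and noun[-2:-1] not in 'aeiou':
          return noun[:-1] + 'ies'
      return noun + 's'

  def plural2single(noun):
      reverse = dict((v, k) for k, v in _IRREGULAR.items())
      if noun in reverse:
          return reverse[noun]
      if noun.endswith('ies'):
          return noun[:-3] + 'y'
      return noun[:-1] if noun.endswith('s') else noun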

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239891] Re: tempest.api.object_storage.test_account_services.AccountTest fails under neutron-pg-isolated

2014-03-12 Thread Mark McClain
This is passing in the current gate tests.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239891

Title:
  tempest.api.object_storage.test_account_services.AccountTest fails
  under neutron-pg-isolated

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  http://logs.openstack.org/38/51738/1/check/check-tempest-devstack-vm-neutron-pg-isolated/73aad7a/console.html

  2013-10-15 00:19:04.556 | Error in atexit._run_exitfuncs:
  2013-10-15 00:19:04.556 | Traceback (most recent call last):
  2013-10-15 00:19:04.556 |   File "/usr/lib/python2.7/atexit.py", line 24, in 
_run_exitfuncs
  2013-10-15 00:19:04.557 | func(*targs, **kargs)
  2013-10-15 00:19:04.558 |   File "tempest/test.py", line 167, in 
validate_tearDownClass
  2013-10-15 00:19:04.558 | + str(at_exit_set))
  2013-10-15 00:19:04.558 | RuntimeError: tearDownClass does not calls the 
super's tearDownClass in these classes: set([])
  2013-10-15 00:19:04.559 | Error in sys.exitfunc:
  2013-10-15 00:19:04.663 | 
  2013-10-15 00:19:04.664 | process-returncode
  2013-10-15 00:19:04.664 | process-returncode ... FAIL
  2013-10-15 00:19:04.980 | 
  2013-10-15 00:19:04.981 | 
==
  2013-10-15 00:19:04.981 | FAIL: tearDownClass 
(tempest.api.object_storage.test_account_services.AccountTest)
  2013-10-15 00:19:04.981 | tearDownClass 
(tempest.api.object_storage.test_account_services.AccountTest)
  2013-10-15 00:19:04.982 | 
--
  2013-10-15 00:19:04.982 | _StringException: Traceback (most recent call last):
  2013-10-15 00:19:04.982 |   File 
"tempest/api/object_storage/test_account_services.py", line 41, in tearDownClass
  2013-10-15 00:19:04.983 | super(AccountTest, cls).tearDownClass()
  2013-10-15 00:19:04.983 |   File "tempest/api/object_storage/base.py", line 
77, in tearDownClass
  2013-10-15 00:19:04.983 | cls.isolated_creds.clear_isolated_creds()
  2013-10-15 00:19:04.984 |   File "tempest/common/isolated_creds.py", line 
453, in clear_isolated_creds
  2013-10-15 00:19:04.984 | self._clear_isolated_net_resources()
  2013-10-15 00:19:04.984 |   File "tempest/common/isolated_creds.py", line 
445, in _clear_isolated_net_resources
  2013-10-15 00:19:04.985 | self._clear_isolated_network(network['id'], 
network['name'])
  2013-10-15 00:19:04.985 |   File "tempest/common/isolated_creds.py", line 
399, in _clear_isolated_network
  2013-10-15 00:19:04.985 | net_client.delete_network(network_id)
  2013-10-15 00:19:04.985 |   File 
"tempest/services/network/json/network_client.py", line 76, in delete_network
  2013-10-15 00:19:04.986 | resp, body = self.delete(uri, self.headers)
  2013-10-15 00:19:04.986 |   File "tempest/common/rest_client.py", line 308, 
in delete
  2013-10-15 00:19:04.986 | return self.request('DELETE', url, headers)
  2013-10-15 00:19:04.987 |   File "tempest/common/rest_client.py", line 436, 
in request
  2013-10-15 00:19:04.987 | resp, resp_body)
  2013-10-15 00:19:04.987 |   File "tempest/common/rest_client.py", line 522, 
in _error_checker
  2013-10-15 00:19:04.988 | raise exceptions.ComputeFault(message)
  2013-10-15 00:19:04.988 | ComputeFault: Got compute fault
  2013-10-15 00:19:04.988 | Details: {"NeutronError": "Request Failed: internal 
server error while processing your request."}
  2013-10-15 00:19:04.988 | 
  2013-10-15 00:19:04.989 | 
  2013-10-15 00:19:04.989 | 
==
  2013-10-15 00:19:04.989 | FAIL: process-returncode
  2013-10-15 00:19:04.990 | process-returncode
  2013-10-15 00:19:04.990 | 
--
  2013-10-15 00:19:04.990 | _StringException: Binary content:
  2013-10-15 00:19:04.991 |   traceback (test/plain; charset="utf8")
  2013-10-15 00:19:04.991 | 
  2013-10-15 00:19:04.991 | 
  2013-10-15 00:19:04.991 | 
--

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1239891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291619] [NEW] Cisco VPN device drivers admin state not reported correctly

2014-03-12 Thread Paul Michali
Public bug reported:

Currently, this driver supports updating the VPN service, through which
one can change the admin state to up or down.

In addition, even though IPSec site-to-site connection update is not
currently supported (one can do a delete/create), the user could create
the connection with admin state down.

When the service admin state is changed to down, the change does not
happen in the device driver, and the status is not reported correctly.
This is due to an issue with the plugin (bug 1291609 created). If later,
another change occurs that causes a sync of the config, the connections
on the VPN service will be deleted (the CSR REST API doesn't yet have
support for admin down), but the status still will not be updated
correctly. The configuration in OpenStack can get out of sync with the
configuration on the CSR.

If the IPSec site-to-site connection is created in admin down state, the
underlying tunnel is not created (correct), but the status still shows
PENDING_CREATE.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291619

Title:
  Cisco VPN device drivers admin state not reported correctly

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Currently, this driver supports updating the VPN service, through
  which one can change the admin state to up or down.

  In addition, even though IPSec site-to-site connection update is not
  currently supported (one can do a delete/create), the user could
  create the connection with admin state down.

  When the service admin state is changed to down, the change does not
  happen in the device driver, and the status is not reported correctly.
  This is due to an issue with the plugin (bug 1291609 created). If
  later, another change occurs that causes a sync of the config, the
  connections on the VPN service will be deleted (the CSR REST API
  doesn't yet have support for admin down), but the status still will
  not be updated correctly. The configuration in OpenStack can get out
  of sync with the configuration on the CSR.

  If the IPSec site-to-site connection is created in admin down state,
  the underlying tunnel is not created (correct), but the status still
  shows PENDING_CREATE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212462] Re: arp fail/martian source

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1212462

Title:
  arp fail/martian source

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  openstack grizzly quantum from RDO on redhat 6.4.

  quantum net-list
  +------+-----------+---------------------+
  | id   | name      | subnets             |
  +------+-----------+---------------------+
  | xxx3 | demo-net1 | xxxf 10.0.0.0/24    |
  | xxx5 | ext       | xxx9 192.168.0.0/24 |
  | xxx0 | main      | xxxd 10.0.2.0/24    |
  | xxxa | main      | xxx1 10.0.1.0/24    |
  +------+-----------+---------------------+

  If I fire up a VM on net xxxa, it gets address 10.0.1.4. I then give
  it a floating IP from the external network, which is 192.168.0.13.

  If I ping 192.168.0.13 it doesn't work. If I leave the ping running,
  then restart l3-agent, after a minute or two the ping starts working
  and keeps working.

  But after that, if I ctrl-c and restart a new ping, it doesn't work. Digging 
into it further, the following shows up in dmesg for every failed ping:
  martian source 10.0.1.4 from 10.0.1.1, on dev qbrce632e01-80
  ll header: ff:ff:ff:ff:ff:ff:fa:16:3e:02:8a:c3:08:06

  It looks like the arp responses are getting clobbered.

  part of plugin.ini:
  [OVS]
  enable_tunneling=False
  integration_bridge=br-int
  tenant_network_type=vlan
  bridge_mappings=os1:br-os1,ext:br-ext
  network_vlan_ranges=os1:1000:2000,external1

  ext is a provider type flat network attached to br-ext1

  Any ideas what would cause arp to work for a moment during l3-agent
  restart then fail again?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1212462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1157771] Re: Use auto-deleted queues to prevent rpc flood

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1157771

Title:
  Use auto-deleted queues to prevent rpc flood

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Currently we use 'call' instead of 'cast' to prevent an RPC flood. But
  'call' doesn't scale
  (https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit).

  The flood happens because all the agents send messages to the queue;
  without the quantum server running, all the messages are buffered in
  the queue. After the quantum server starts up, all the messages that
  were buffered in the queue flood it.

  We can set the queue as auto_delete. After quantum-server stops, the
  queue will be deleted automatically and messages sent by agents will
  be dropped. Then the agents won't flood the quantum-server when it
  starts up.
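  For illustration, this is what the auto_delete flag means at the AMQP
  level, as a kombu sketch (queue and exchange names are assumptions,
  not the actual quantum RPC code):

  from kombu import Exchange, Queue

  exchange = Exchange('quantum', type='topic')
  # auto_delete=True: the broker drops the queue when its last consumer
  # (the quantum-server) disconnects, so agent messages are discarded
  # instead of piling up and flooding the server when it restarts.
  queue = Queue('q-plugin', exchange=exchange, routing_key='q-plugin',
                auto_delete=True)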

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1157771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210150] Re: linuxbridge unit tests are unstable when are run alone

2014-03-12 Thread Mark McClain
Works OK for me.  I'm going to close this since this section of code is
scheduled for removal during Juno.

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210150

Title:
  linuxbridge unit tests are unstable when are run alone

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  When linuxbridge tests are run alone (either with 'tox  -epy27
  linuxbridge' or '.venv/bin/python run_tests.py linuxbridge') they fail
  with plenty of different errors from time to time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1214215] Re: multiple quantum metadata agent issue

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1214215

Title:
  multiple quantum metadata agent issue

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  (Grizzly) On the network node, I am running 1 dhcp agent, 1 l3-agent
  and 2 metadata agents. One metadata agent points to the nova metadata
  service in one OpenStack cloud and the other metadata agent points to
  the nova metadata service in another OpenStack cloud. Both use and
  connect to the same unix domain socket /var/lib/quantum/metadata_proxy
  (the two clouds share the same network node). With two metadata agents
  started, requests always go through the last metadata agent started,
  so any instance from either cluster always tries to reach the metadata
  server that is configured in that agent. (So launching is successful
  from one cloud and metadata access fails on the other.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1214215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1220021] Re: need xml format for some unittest which inherits from WebTestCase

2014-03-12 Thread Mark McClain
The API tests will undergo a significant refactoring during Juno.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1220021

Title:
  need xml format for some unittest which inherits from WebTestCase

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  While refactoring test_agent_scheduler, I found that some unit test
  cases which inherit from WebTestCase lack an XML-format variant. After
  reading the WebTestCase source code, I think it is better to add
  XML-format tests, but I still have two questions about this:

  - it will add many unit tests, because it duplicates the JSON tests and so 
definitely takes more test time; since the JSON tests already ensure the code is 
correct, does this XML coverage make sense given the test-time cost?
  - if the answer to the first question is no, then how about removing the 
current XML-format tests from the unit tests?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1220021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223956] Re: Router deletion took more than 169 sec

2014-03-12 Thread Mark McClain
This is a duplicate of several other db locking bugs.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223956

Title:
  Router deletion took more than 169 sec

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  http://logs.openstack.org/50/46050/3/check/gate-tempest-devstack-vm-neutron/e813b67/console.html

  2013-09-11 15:39:43.296 | 2013-09-11 15:33:33,460 Request: DELETE 
http://127.0.0.1:9696//v2.0/routers/0003c937-e0d8-4306-906e-3e8b6e2aa849
  2013-09-11 15:39:43.296 | 2013-09-11 15:36:22,533 Response Status: 204

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1223956/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1227237] Re: quantum-dhcp-agent + quantum-dhcp-agent-dnsmasq-lease-update deadlock

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1227237

Title:
  quantum-dhcp-agent + quantum-dhcp-agent-dnsmasq-lease-update deadlock

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  NB This isn't an issue in Havana since neutron doesn't use a DHCP
  lease update script (i.e., neutron runs dnsmasq without the --dhcp-
  script=xxx option).

  Every week or so on our OpenStack cluster of modest size (about a
  dozen nodes, dozens to hundreds of VMs being booted per day),
  instances stop receiving replies to their DHCP requests. When we
  observe the instances failing to communicate with the DHCP server,
  traces of network traffic show that the DHCP requests are making it to
  the tap device that dnsmasq is listening on but that dnsmasq isn't
  sending any replies. Furthermore, dnsmasq seems to be configured to
  send the replies (i.e., the appropriate entries are in its DHCP hosts
  file and dnsmasq has been sent SIGHUP). Once the dnsmasq process stops
  sending replies, it seems to have stopped indefinitely. Killing the
  dnsmasq process and restarting quantum-dhcp-agent, which starts a new
  dnsmasq process, gets replies going again.

  The dnsmasq process seems to have stopped sending replies because of
  its DHCP script quantum-dhcp-agent-dnsmasq-lease-update (i.e., from
  the option --dhcp-script=quantum-...). According to man dnsmasq(8),
  dnsmasq runs one copy of the lease update script at a time (i.e.,
  executes it serially) and some request processing blocks on execution
  of the lease update script. Assuming the blocked request processing
  includes replying to DHCP requests, then a single deadlocked instance
  of quantum-dhcp-agent-dnsmasq-lease-update explains why no DHCP
  responses are being made.
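
  One defensive tweak this failure mode suggests is to bound the
  helper's connect with a timeout, so a wedged agent cannot wedge
  dnsmasq's serialized script queue. A hypothetical sketch (not the
  actual quantum script):

  import socket
  import sys

  s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  s.settimeout(5.0)  # fail fast instead of blocking dnsmasq forever
  try:
      s.connect('/var/lib/quantum/dhcp/lease_relay')
  except (socket.timeout, socket.error):
      sys.exit(1)  # drop this lease update rather than deadlock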

  Indeed, the script has deadlocked. Here's a snippet of output from 'ps
  auxf' showing the relevant dnsmasq and quantum-dhcp-agent-dnsmasq-
  lease-update and quantum-dhcp-agent processes:

  quantum  16347  0.6  0.0 198756 56084 ?Ss   Sep10  71:50 python 
/usr/bin/quantum-dhcp-agent --config-file=/etc/quantum/quantum.conf 
--config-file=/etc/quantum/dhcp_agent.ini 
--log-file=/var/log/quantum/dhcp-agent.log
  ...
  nobody   19945  0.0  0.0  28820  1156 ?SSep10   6:47 dnsmasq 
--no-hosts --no-resolv --strict-order --bind-interfaces 
--interface=tap8591954d-46 --except-interface=lo 
--pid-file=/var/lib/quantum/dhcp/773075d2-3df6-45e6-8c62-251a49fb8e1f/pid 
--dhcp-hostsfile=/var/lib/quantum/dhcp/773075d2-3df6-45e6-8c62-251a49fb8e1f/host
 
--dhcp-optsfile=/var/lib/quantum/dhcp/773075d2-3df6-45e6-8c62-251a49fb8e1f/opts 
--dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro 
--dhcp-range=set:tag0,172.16.0.0,static,120s --conf-file= 
--domain=openstacklocal
  root 19948  0.0  0.0  28792   472 ?SSep10   3:02  \_ dnsmasq 
--no-hosts --no-resolv --strict-order --bind-interfaces 
--interface=tap8591954d-46 --except-interface=lo 
--pid-file=/var/lib/quantum/dhcp/773075d2-3df6-45e6-8c62-251a49fb8e1f/pid 
--dhcp-hostsfile=/var/lib/quantum/dhcp/773075d2-3df6-45e6-8c62-251a49fb8e1f/host
 
--dhcp-optsfile=/var/lib/quantum/dhcp/773075d2-3df6-45e6-8c62-251a49fb8e1f/opts 
--dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro 
--dhcp-range=set:tag0,172.16.0.0,static,120s --conf-file= 
--domain=openstacklocal
  root 24722  0.0  0.0   4292   352 ?SSep17   0:00  \_ 
quantum-dhcp-agent-dnsmasq-lease-update old fa:16:3e:17:5f:46 172.16.0.129 
172-16-0-129

  Strace shows that 19945 is continuing to do stuff, but 19948 is
  blocked in a wait4 system call waiting for its child, 24722, to exit:

  peter@gremlin ~/ssst % sudo strace -p 19948
  Process 19948 attached - interrupt to quit
  wait4(-1, ^C 

  But 24722 is blocked waiting to connect to the UDS that it uses to
  communicate with quantum-dhcp-agent:

  peter@gremlin ~/ssst % sudo strace -p 24722
  Process 24722 attached - interrupt to quit
  connect(3, {sa_family=AF_FILE, path="/var/lib/quantum/dhcp/lease_relay"}, 110

  But quantum-dhcp-agent, pid 16347, is spinning epoll-ing on some epoll
  instance:

  peter@gremlin ~/ssst % sudo strace -p 16347
  Process 16347 attached - interrupt to quit
  epoll_wait(5, {}, 1023, 1549)   = 0
  read(4, "2+\214~\346\235,\6\264\320\27\"\\\17?\354", 16) = 16
  gettid()= 16347
  epoll_wait(5, {}, 1023, 826)= 0
  epoll_wait(5, {}, 1023, 3999)   = 0
  epoll_wait(5, {}, 1023, 0)  = 0
  epoll_wait(5, {}, 1023, 3999)   = 0
  epoll_wait(5, {}, 1023, 3999)   = 0
  epoll_wait(5, 

  Clearly that epoll instance doesn't include the UDS that the lease
  update script is trying to connect to in spite of having the file
  open:

  peter@gremlin ~/ssst % sudo lso

[Yahoo-eng-team] [Bug 1242898] Re: tearDownClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML): tearDownClass does not call the super's tearDownClass

2014-03-12 Thread Mark McClain
This is currently passing tests now that several items have been
changed since this report was filed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1242898

Title:
  tearDownClass
  (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML):
  tearDownClass does not call the super's tearDownClass

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  From http://logs.openstack.org/32/47432/16/check/check-tempest-devstack-vm-neutron-pg-isolated/c2a0dd3/console.html

  ...

  tearDownClass
  (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  ... FAIL

  ...

  2013-10-21 19:17:53.068 | Error in atexit._run_exitfuncs:
  2013-10-21 19:17:53.068 | Traceback (most recent call last):
  2013-10-21 19:17:53.068 |   File "/usr/lib/python2.7/atexit.py", line 24, in 
_run_exitfuncs
  2013-10-21 19:17:53.069 | func(*targs, **kargs)
  2013-10-21 19:17:53.069 |   File "tempest/test.py", line 167, in 
validate_tearDownClass
  2013-10-21 19:17:53.069 | + str(at_exit_set))
  2013-10-21 19:17:53.069 | RuntimeError: tearDownClass does not calls the 
super's tearDownClass in these classes: set([])
  2013-10-21 19:17:53.070 | Error in sys.exitfunc:
  2013-10-21 19:17:53.221 | 
  2013-10-21 19:17:53.221 | process-returncode
  2013-10-21 19:17:53.221 | process-returncode ... FAIL
  2013-10-21 19:17:53.614 | 
  2013-10-21 19:17:53.614 | 
==
  2013-10-21 19:17:53.614 | FAIL: tearDownClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-10-21 19:17:53.614 | tearDownClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-10-21 19:17:53.614 | 
--
  2013-10-21 19:17:53.614 | _StringException: Traceback (most recent call last):
  2013-10-21 19:17:53.615 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 95, in tearDownClass
  2013-10-21 19:17:53.615 | super(ServerRescueTestJSON, cls).tearDownClass()
  2013-10-21 19:17:53.615 |   File "tempest/api/compute/base.py", line 132, in 
tearDownClass
  2013-10-21 19:17:53.615 | cls.isolated_creds.clear_isolated_creds()
  2013-10-21 19:17:53.615 |   File "tempest/common/isolated_creds.py", line 
453, in clear_isolated_creds
  2013-10-21 19:17:53.615 | self._clear_isolated_net_resources()
  2013-10-21 19:17:53.615 |   File "tempest/common/isolated_creds.py", line 
445, in _clear_isolated_net_resources
  2013-10-21 19:17:53.616 | self._clear_isolated_network(network['id'], 
network['name'])
  2013-10-21 19:17:53.616 |   File "tempest/common/isolated_creds.py", line 
399, in _clear_isolated_network
  2013-10-21 19:17:53.616 | net_client.delete_network(network_id)
  2013-10-21 19:17:53.616 |   File 
"tempest/services/network/json/network_client.py", line 76, in delete_network
  2013-10-21 19:17:53.616 | resp, body = self.delete(uri, self.headers)
  2013-10-21 19:17:53.616 |   File "tempest/common/rest_client.py", line 308, 
in delete
  2013-10-21 19:17:53.617 | return self.request('DELETE', url, headers)
  2013-10-21 19:17:53.617 |   File "tempest/common/rest_client.py", line 436, 
in request
  2013-10-21 19:17:53.617 | resp, resp_body)
  2013-10-21 19:17:53.617 |   File "tempest/common/rest_client.py", line 522, 
in _error_checker
  2013-10-21 19:17:53.617 | raise exceptions.ServerFault(message)
  2013-10-21 19:17:53.617 | ServerFault: Got server fault
  2013-10-21 19:17:53.617 | Details: {"NeutronError": "Request Failed: internal 
server error while processing your request."}
  2013-10-21 19:17:53.618 | 
  2013-10-21 19:17:53.618 | 
  2013-10-21 19:17:53.618 | 
==
  2013-10-21 19:17:53.618 | FAIL: process-returncode
  2013-10-21 19:17:53.619 | process-returncode
  2013-10-21 19:17:53.619 | 
--
  2013-10-21 19:17:53.619 | _StringException: Binary content:
  2013-10-21 19:17:53.619 |   traceback (test/plain; charset="utf8")
  2013-10-21 19:17:53.619 | 
  2013-10-21 19:17:53.619 | 
  2013-10-21 19:17:53.620 | 
--
  2013-10-21 19:17:53.620 | Ran 237 tests in 914.828s
  2013-10-21 19:17:53.637 | 
  2013-10-21 19:17:53.638 | FAILED (failures=2, skipped=8)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1242898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237711] Re: Creating instance on network with no subnet: no error message

2014-03-12 Thread Mark McClain
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237711

Title:
  Creating instance on network with no subnet: no error message

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  When trying to launch an instance on a network without any subnet, the
  creation fails. No error message is provided even though it is clear
  the issue is due to the lack of a subnet. No entry is visible in the
  log for that instance.

  nova scheduler log:
  --
  l2013-10-09 15:14:35.249 INFO nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Attempting to build 1 
instance(s) uuids: [u'0d2a3866-23b0-4f85-9689-f4b37877e950']
  2013-10-09 15:14:35.279 INFO nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Choosing host 
WeighedHost [host: kraken-vc1-ubuntu1, weight: 252733.0] for instance 
0d2a3866-23b0-4f85-9689-f4b37877e950
  2013-10-09 15:14:38.028 INFO nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Attempting to build 1 
instance(s) uuids: [u'0d2a3866-23b0-4f85-9689-f4b37877e950']
  2013-10-09 15:14:38.030 ERROR nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] [instance: 
0d2a3866-23b0-4f85-9689-f4b37877e950] Error from last host: kraken-vc1-ubuntu1 
(node domain-c21(kraken-vc1)): [u'Traceback (most recent call last):\n', u'  
File "/opt/stack/nova/nova/compute/manager.py", line 1039, in _build_instance\n 
   set_access_ip=set_access_ip)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1412, in _spawn\n
LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1409, in _spawn\n
block_device_info)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 623, in spawn\n
admin_password, network_info, block_device_info)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 243, in spawn\n
vif_infos = _get_vif_infos()\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 227, in _get_vif_infos\n   
 for vif 
 in network_info:\n', u'  File "/opt/stack/nova/nova/network/model.py", line 
375, in __iter__\nreturn self._sync_wrapper(fn, *args, **kwargs)\n', u'  
File "/opt/stack/nova/nova/network/model.py", line 366, in _sync_wrapper\n
self.wait()\n', u'  File "/opt/stack/nova/nova/network/model.py", line 398, in 
wait\nself[:] = self._gt.wait()\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in 
wait\nreturn self._exit_event.wait()\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 120, in wait\n 
   current.throw(*self._exc)\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in 
main\nresult = function(*args, **kwargs)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1230, in 
_allocate_network_async\ndhcp_options=dhcp_options)\n', u'  File 
"/opt/stack/nova/nova/network/api.py", line 49, in wrapper\nres = f(self, 
context, *args, **kwargs)\n', u'  File "/o
 pt/stack/nova/nova/network/neutronv2/api.py", line 315, in 
allocate_for_instance\nraise exception.SecurityGroupCannotBeApplied()\n', 
u'SecurityGroupCannotBeApplied: Network requires port_security_enabled and 
subnet associated in order to apply security groups.\n']
  2013-10-09 15:14:38.055 WARNING nova.scheduler.driver 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] [instance: 
0d2a3866-23b0-4f85-9689-f4b37877e950] Setting instance to ERROR state

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1237711/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291609] [NEW] VPNaaS admin up/down on service/connection broken

2014-03-12 Thread Paul Michali
Public bug reported:

With an active IPSec site-to-site connection, if the VPN service state
is changed to admin down, the connection and service states still show
as ACTIVE and traffic still flows over the connection.


If the IPsec site-to-site connection state is changed to admin down, the 
connection still shows as ACTIVE and traffic passes as well.

It does not look like the update_vpnservice API call is forwarded to
the device driver.
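
A sketch of the missing hop (class, method, and attribute names here
are assumptions, not actual neutron VPNaaS code): after persisting the
update, the service plugin should notify the device driver.

class VPNDriverPlugin(VPNPlugin):  # hypothetical names

    def update_vpnservice(self, context, vpnservice_id, vpnservice):
        updated = super(VPNDriverPlugin, self).update_vpnservice(
            context, vpnservice_id, vpnservice)
        # Without a notification like this, an admin_state_up change is
        # only recorded in the DB and never reaches the backend.
        self.agent_rpc.vpnservice_updated(context, updated['router_id'])
        return updated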

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291609

Title:
  VPNaaS admin up/down on service/connection broken

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  With an active IPSec site-to-site connection, if the VPN service state
  is changed to admin down, the connection and service states still show
  as ACTIVE and traffic still flows over the connection.

  
  If the IPsec site-to-site connection state is changed to admin down, the 
connection still shows as ACTIVE and traffic passes as well.

  It does not look like the update_vpnservice API call is forwarded to
  the device driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268274] Re: KeyError in _get_server_ip

2014-03-12 Thread Joe Gordon
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268274

Title:
  KeyError in _get_server_ip

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Incomplete
Status in Tempest:
  Fix Committed

Bug description:
  In gate-tempest-dsvm-neutron test:

  2014-01-11 16:39:43.311 | Traceback (most recent call last):
  2014-01-11 16:39:43.311 |   File 
"tempest/scenario/test_cross_tenant_connectivity.py", line 482, in 
test_cross_tenant_traffic
  2014-01-11 16:39:43.311 | self._test_in_tenant_block(self.demo_tenant)
  2014-01-11 16:39:43.311 |   File 
"tempest/scenario/test_cross_tenant_connectivity.py", line 380, in 
_test_in_tenant_block
  2014-01-11 16:39:43.311 | ip=self._get_server_ip(server),
  2014-01-11 16:39:43.311 |   File 
"tempest/scenario/test_cross_tenant_connectivity.py", line 326, in 
_get_server_ip
  2014-01-11 16:39:43.311 | return server.networks[network_name][0]
  2014-01-11 16:39:43.312 | KeyError: u'network-smoke--tempest-1504528870'

  http://logs.openstack.org/39/65039/4/gate/gate-tempest-dsvm-neutron/cb3457d/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1268274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231930] Re: Rules dissapear after 300 seconds of inactivity

2014-03-12 Thread Mark McClain
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1231930

Title:
  Rules dissapear after 300 seconds of inactivity

Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Using Fedora 19 for TripleO

  In the TripleO overcloud I can successfully connect to VMs but soon
  after lose the SSH connections.

  I think this happens after at least 5 minutes of inactivity. I'm observing
  the learned rule in table 20 vanish once the hard_age hits 300; see the
  two snippets from "ovs-ofctl dump-flows br-tun" with the learned rule
  present and then gone the next second.

  
  Fri 27 Sep 09:49:14 UTC 2013

   cookie=0x0, duration=1930.786s, table=10, n_packets=247, n_bytes=32673, 
idle_age=300, priority=1 
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:
  
0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
   cookie=0x0, duration=800.717s, table=20, n_packets=231, n_bytes=23336, 
hard_timeout=300, idle_age=300, hard_age=300, 
priority=1,vlan_tci=0x0002/0x0fff,dl_dst=fa:16:3e:e4:de:d6 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
   

  
  Fri 27 Sep 09:49:15 UTC 2013

   cookie=0x0, duration=1931.798s, table=10, n_packets=247,
  n_bytes=32673, idle_age=301, priority=1
  
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1

  Is the correct thing to do just to remove the hard_timeout=300? This
  seems to work for me:

  diff --git a/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py 
b/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py
  index eefe384..62c87e3 100644
  --- a/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py
  +++ b/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py
  @@ -715,7 +715,6 @@ class 
OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
   # adresses (assumes that lvid has already been set by a previous 
flow)
   learned_flow = ("table=%s,"
   "priority=1,"
  -"hard_timeout=300,"
   "NXM_OF_VLAN_TCI[0..11],"
   "NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],"
   "load:0->NXM_OF_VLAN_TCI[],"

  
  Or should this rule just reappear when I try to reconnect? I've also observed 
the rule returning when I try to connect, and I can then connect, so the loss of 
connectivity doesn't always happen after 5 minutes. But the rule not 
reappearing seems to line up with the times I can't ssh to the VM.

  Also, it's worth noting that hard_timeout is what's being set, but it
  appears to be acting more like an idle_timeout, although I may be
  misunderstanding something here; OVS newbie...
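
  For comparison, a learn() spec that keeps a bound but lets active
  traffic refresh the entry, by swapping hard_timeout for idle_timeout
  (illustrative flow syntax, untested; not a proposed patch):

   actions=learn(table=20,idle_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],
       NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],
       load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1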

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1231930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233293] Re: tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops

2014-03-12 Thread Mark McClain
This has been addressed via other work.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1233293

Title:
  
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  2013-09-30 17:10:30.041 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops Traceback (most recent call last):
  2013-09-30 17:10:30.041 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops   File 
"tempest/scenario/test_network_basic_ops.py", line 254, in 
_check_public_network_connectivity
  2013-09-30 17:10:30.041 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops private_key)
  2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops   File "tempest/scenario/manager.py", 
line 622, in _check_vm_connectivity
  2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops "reachable" % ip_address)
  2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops   File 
"/usr/lib/python2.7/unittest/case.py", line 420, in assertTrue
  2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops raise self.failureException(msg)
  2013-09-30 17:10:30.042 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops AssertionError: Timed out waiting for 
172.24.4.232 to become reachable
  2013-09-30 17:10:30.043 | 2013-09-30 17:08:12.589 19033 TRACE 
tempest.scenario.test_network_basic_ops 
  2013-09-30 17:10:30.044 | 2013-09-30 17:08:12,677 Host Addr:
  2013-09-30 17:10:30.045 | sudo: no tty present and no askpass program 
specified
  2013-09-30 17:10:30.045 | Sorry, try again.
  2013-09-30 17:10:30.047 | sudo: no tty present and no askpass program 
specified
  2013-09-30 17:10:30.047 | Sorry, try again.
  2013-09-30 17:10:30.047 | sudo: no tty present and no askpass program 
specified
  2013-09-30 17:10:30.051 | Sorry, try again.
  2013-09-30 17:10:30.051 | sudo: 3 incorrect password attempts
  ...
  2013-09-30 17:10:30.595 | }}}
  2013-09-30 17:10:30.595 | 
  2013-09-30 17:10:30.595 | Traceback (most recent call last):
  2013-09-30 17:10:30.595 |   File 
"tempest/scenario/test_network_basic_ops.py", line 269, in 
test_network_basic_ops
  2013-09-30 17:10:30.596 | self._check_public_network_connectivity()
  2013-09-30 17:10:30.596 |   File 
"tempest/scenario/test_network_basic_ops.py", line 258, in 
_check_public_network_connectivity
  2013-09-30 17:10:30.596 | raise exc
  2013-09-30 17:10:30.596 | AssertionError: Timed out waiting for 172.24.4.232 
to become reachable
  2013-09-30 17:10:30.596 | 
  2013-09-30 17:10:30.597 | 
  2013-09-30 17:10:30.597 | 
==
  2013-09-30 17:10:30.598 | FAIL: process-returncode
  2013-09-30 17:10:30.598 | process-returncode

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1233293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231815] Re: When choosing LinuxBridge as the plugin with Quantum, the quantum-server service on the controller node didn't populate the quantum_linux_bridge database

2014-03-12 Thread Mark McClain
Unable to reproduce.  If this is still an issue with the Icehouse
branch, please refile.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1231815

Title:
  When choosing LinuxBridge as the plugin with Quantum, the quantum-
  server service on the controller node didn't populate the
  quantum_linux_bridge database

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The controller node threw the following errors after quantum-server
  started. MySQL has a database named quantum_linux_bridge, and the
  service account quantum can access the DB and has full rights to it.

  2013-09-27 04:49:05ERROR [quantum.openstack.common.rpc.amqp] Exception 
during message handling
  Traceback (most recent call last):
File 
"/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 
430, in _process_data
  rval = self.proxy.dispatch(ctxt, version, method, **args)
File "/usr/lib/python2.6/site-packages/quantum/common/rpc.py", line 43, in 
dispatch
  quantum_ctxt, version, method, **kwargs)
File 
"/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/dispatcher.py", 
line 133, in dispatch
  return getattr(proxyobj, method)(ctxt, **kwargs)
File "/usr/lib/python2.6/site-packages/quantum/db/agents_db.py", line 173, 
in report_state
  plugin.create_or_update_agent(context, agent_state)
File "/usr/lib/python2.6/site-packages/quantum/db/agents_db.py", line 145, 
in create_or_update_agent
  context, agent['agent_type'], agent['host'])
File "/usr/lib/python2.6/site-packages/quantum/db/agents_db.py", line 121, 
in _get_agent_by_type_and_host
  Agent.host == host).one()
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py",
 line 2184, in one
  ret = list(self)
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py",
 line 2227, in __iter__
  return self._execute_and_instances(context)
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py",
 line 2242, in _execute_and_instances
  result = conn.execute(querycontext.statement, self._params)
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
 line 1449, in execute
  params)
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
 line 1584, in _execute_clauseelement
  compiled_sql, distilled_params
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
 line 1698, in _execute_context
  context)
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
 line 1691, in _execute_context
  context)
File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py",
 line 331, in do_execute
  cursor.execute(statement, parameters)
  OperationalError: (OperationalError) no such table: agents u'SELECT agents.id 
AS agents_id, agents.agent_type AS agents_agent_type, agents.binary AS 
agents_binary, agents.topic AS agents_topic, agents.host AS agents_host, 
agents.admin_state_up AS agents_admin_state_up, agents.created_at AS 
agents_created_at, agents.started_at AS agents_started_at, 
agents.heartbeat_timestamp AS agents_heartbeat_timestamp, agents.description AS 
agents_description, agents.configurations AS agents_configurations \nFROM 
agents \nWHERE agents.agent_type = ? AND agents.host = ?' (u'Linux bridge 
agent', u'wpc0051.svc.cld1.eng.pdx.wd')

  # Configuration files 

  [root@wpc0051 etc]# cat /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
  [vlans]
  tenant_network_type = vlan
  network_vlan_ranges = e2vm:2048:4094,e2chef:2048:4094
  [database]
  sql_connection = mysql://quantum:@10.52.202.252/quantum_linux_bridge
  reconnect_interval = 2
  [linux_bridge]
  physical_interface_mappings = e2vm:br-e2vm,e2chef:br-e2chef
  [SECURITYGROUP]
  # Firewall driver for realizing quantum security group function
  firewall_driver = quantum.agent.linux.iptables_firewall.IptablesFirewallDriver

  
  ---

  [root@wpc0051 etc]# cat /etc/quantum/dhcp_agent.ini | grep -v ^#| grep .
  [DEFAULT]
  interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
  dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq

  
  --

  [root@wpc0051 etc]# cat /etc/quantum/quantum.conf 
  [DEFAULT]
  root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
  debug = False
  verbose = False
  bind_host = 10.52.202.51
  bind_port = 9696
  core_plugin = 
quantum.plugins.linuxbridg

[Yahoo-eng-team] [Bug 1291605] [NEW] unit test test_create_instance_with_networks_disabled race fail

2014-03-12 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/22/43822/32/check/gate-nova-python27/e33dc5b/console.html

message:"FAIL:
nova.tests.api.openstack.compute.plugins.v3.test_servers.ServersControllerCreateTest.test_create_instance_with_networks_disabled"
AND filename:"console.html" AND (build_name:"gate-nova-python27" OR
build_name:"gate-nova-python26")

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5hcGkub3BlbnN0YWNrLmNvbXB1dGUucGx1Z2lucy52My50ZXN0X3NlcnZlcnMuU2VydmVyc0NvbnRyb2xsZXJDcmVhdGVUZXN0LnRlc3RfY3JlYXRlX2luc3RhbmNlX3dpdGhfbmV0d29ya3NfZGlzYWJsZWRcIiBBTkQgZmlsZW5hbWU6XCJjb25zb2xlLmh0bWxcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLW5vdmEtcHl0aG9uMjdcIiBPUiBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI2XCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk0NjU3NTUwMjU1fQ==

12 hits in 7 days

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291605

Title:
  unit test test_create_instance_with_networks_disabled race fail

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/22/43822/32/check/gate-nova-python27/e33dc5b/console.html

  message:"FAIL:
  
nova.tests.api.openstack.compute.plugins.v3.test_servers.ServersControllerCreateTest.test_create_instance_with_networks_disabled"
  AND filename:"console.html" AND (build_name:"gate-nova-python27" OR
  build_name:"gate-nova-python26")

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5hcGkub3BlbnN0YWNrLmNvbXB1dGUucGx1Z2lucy52My50ZXN0X3NlcnZlcnMuU2VydmVyc0NvbnRyb2xsZXJDcmVhdGVUZXN0LnRlc3RfY3JlYXRlX2luc3RhbmNlX3dpdGhfbmV0d29ya3NfZGlzYWJsZWRcIiBBTkQgZmlsZW5hbWU6XCJjb25zb2xlLmh0bWxcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLW5vdmEtcHl0aG9uMjdcIiBPUiBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI2XCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk0NjU3NTUwMjU1fQ==

  12 hits in 7 days

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1228151] Re: Default security group defined by a tenant is not removed automatically on deleting the tenant

2014-03-12 Thread Mark McClain
Neutron, like other OpenStack projects, does not receive tenant-delete
events from Keystone. An outside process must handle the removal of a
deleted tenant's resources.
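
For illustration, a minimal sketch of such an outside process (endpoint
and credentials are placeholders; the client calls are from this era's
python-keystoneclient and python-neutronclient):

from keystoneclient.v2_0 import client as ks_client
from neutronclient.v2_0 import client as n_client

keystone = ks_client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:35357/v2.0')
neutron = n_client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://127.0.0.1:35357/v2.0')

live_tenants = set(t.id for t in keystone.tenants.list())
for sg in neutron.list_security_groups()['security_groups']:
    # Delete security groups whose owning tenant no longer exists.
    if sg['tenant_id'] and sg['tenant_id'] not in live_tenants:
        neutron.delete_security_group(sg['id'])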

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1228151

Title:
  Default security group defined by a tenant is not removed
  automatically on deleting the tenant

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Default security group is defined for each tenant when a tenant is created.
  After deleting the tenant the associated default security group is not 
cleaned up automatically. Eventually a cluster has scores of security group 
entities that have identical names and descriptions, which is "default" (can be 
observed by the admin user via quantum security-group-list CLI command) .

  1) Create a tenant
  $ quantum security-group-list
  +--------------------------------------+---------+-------------+
  | id                                   | name    | description |
  +--------------------------------------+---------+-------------+
  | 0ad4971b-a232-439e-959a-79cfeb2210cb | default | default     |
  | 37ecc8c3-85eb-4c16-ad71-689564324ccc | default | default     |
  | 46dfed8b-610c-49d1-9e27-d55d1d20bd66 | default | default     |
  +--------------------------------------+---------+-------------+

  $ keystone tenant-create --name another_tenant
  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | 89d7ed4d02fe44c28d4218f3d258a4b5 |
  |     name    |          another_tenant          |
  +-------------+----------------------------------+

  $ keystone tenant-list
  +----------------------------------+--------------------+---------+
  |                id                |        name        | enabled |
  +----------------------------------+--------------------+---------+
  | d6e5537a5d0245b19d4bc4dc3307e497 |       admin        |   True  |
  | e5c565d7d1f3405b8cd759ccba03b969 |      alt_demo      |   True  |
  | 89d7ed4d02fe44c28d4218f3d258a4b5 |   another_tenant   |   True  |
  | f55d9cbde6194a18a2f1ebbb2afd9457 |        demo        |   True  |
  | 64d3667ae3454c6bb7f43d8bef1179df | invisible_to_admin |   True  |
  | 12f3482c24a04a2fab177562d85f4a73 |      service       |   True  |
  +----------------------------------+--------------------+---------+

  2) Associate a user with the tenant and authenticate under the tenant
  $ keystone user-role-add --user demo --role Member --tenant 89d7ed4d02fe44c28d4218f3d258a4b5
  $ nova --os-tenant-name another_tenant --os-username demo --os-password user list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  3) Delete the tenant
  $ keystone tenant-delete 89d7ed4d02fe44c28d4218f3d258a4b5
  $ keystone tenant-list
  +----------------------------------+--------------------+---------+
  |                id                |        name        | enabled |
  +----------------------------------+--------------------+---------+
  | d6e5537a5d0245b19d4bc4dc3307e497 |       admin        |   True  |
  | e5c565d7d1f3405b8cd759ccba03b969 |      alt_demo      |   True  |
  | f55d9cbde6194a18a2f1ebbb2afd9457 |        demo        |   True  |
  | 64d3667ae3454c6bb7f43d8bef1179df | invisible_to_admin |   True  |
  | 12f3482c24a04a2fab177562d85f4a73 |      service       |   True  |
  +----------------------------------+--------------------+---------+

  4) The tenant defined default security group is not deleted
  $ quantum security-group-list
  +--------------------------------------+---------+-------------+
  | id                                   | name    | description |
  +--------------------------------------+---------+-------------+
  | 0ad4971b-a232-439e-959a-79cfeb2210cb | default | default     |
  | 37ecc8c3-85eb-4c16-ad71-689564324ccc | default | default     |
  | 46dfed8b-610c-49d1-9e27-d55d1d20bd66 | default | default     |
  | c7b5b103-69b3-4753-9370-d607a31474a7 | default | default     |
  +--------------------------------------+---------+-------------+

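  Since nothing in Neutron reacts to tenant deletion, here is a minimal
  sketch of the kind of outside cleanup process suggested above. The
  credentials, endpoints, and the policy of touching only orphaned
  "default" groups are all assumptions, not project guidance:

  from keystoneclient.v2_0 import client as ks_client
  from neutronclient.v2_0 import client as q_client

  AUTH = dict(username='admin', password='secret', tenant_name='admin',
              auth_url='http://127.0.0.1:5000/v2.0')
  keystone = ks_client.Client(**AUTH)
  quantum = q_client.Client(**AUTH)

  # Tenants that still exist; any "default" group owned by some other
  # tenant id is an orphan left behind by a tenant-delete.
  live = set(t.id for t in keystone.tenants.list())
  for sg in quantum.list_security_groups()['security_groups']:
      if sg['name'] == 'default' and sg['tenant_id'] not in live:
          quantum.delete_security_group(sg['id'])
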
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1228151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1228313] Re: Multiple tap interfaces on controller have overlapping tags

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1228313

Title:
  Multiple tap interfaces on controller have overlapping tags

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Host: Ubuntu
  Release: Grizzly w/ Quantum

  Multiple problems reported, including instances no longer receiving
  IPs via DHCP. My troubleshooting usually involves confirming
  connectivity from the namespaces and checking OVS, so I logged into
  the controller to find the following tap interfaces had overlapping
  tags:

  Port "tap78c4dd08-ad"
  tag: 1
  Interface "tap78c4dd08-ad"
  type: internal

  Port "tapa827f51e-be"
  tag: 1
  Interface "tapa827f51e-be"
  type: internal

  Port "tap5ec14dfb-56"
  tag: 1
  Interface "tap5ec14dfb-56"
  type: internal

  There were approximately 8 provider networks configured, and these
  taps corresponded to 3 different namespaces on the controller. The
  other taps had unique tags (as expected). Pinging from each namespace
  revealed only one of the three namespaces to be working properly.
  Restarting the 'openvswitch-switch' service renumbered the tags and
  restored connectivity from all namespaces. Looking back I should have
  checked the ovs flows to see what the rules looked like, but I was in
  a hurry to get things working.

  The user of the system is in the process of testing their environment,
  which includes constantly creating networks/subnets/instances,
  removing them, and recreating them via API.

  I don't have any additional information to provide, but I am curious to
  know how we might recreate a condition that would cause overlapping
  tags such as this. I plan to check the flows the next time this happens
  to confirm the theory of duplicate/overlapping rules with incorrect
  VLAN rewrites.

  Thanks- JD

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1228313/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251541] Re: neutron rabbitmq cluster is not working

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251541

Title:
  neutron rabbitmq cluster is not working

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  when i set "rabbitMQ clusters" in quantum.conf  like

  rabbit_hosts=server1:5672,server2:5672,server3:5672
  rabbit_ha_queues=True

  if i kill one of the rabbit server like server1 or server2 
  quantum api server make error message "lost connection to server1"

  and it don't progress any more.

  after i start the killed server and restart neutron API server
  it starts to interact with MQs
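
  A hedged mitigation while this failover behaviour is broken: the kombu
  driver of this era exposed reconnect knobs alongside the options above
  (option names should be verified against your release):

  rabbit_hosts=server1:5672,server2:5672,server3:5672
  rabbit_ha_queues=True
  # 0 means retry forever, cycling through the configured hosts.
  rabbit_retry_interval=1
  rabbit_retry_backoff=2
  rabbit_max_retries=0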

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258375] Re: only one subnet_id is allowed behind a router for vpnservice object

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: yong sheng gong (gongysh) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258375

Title:
  only one subnet_id is allowed behind a router for vpnservice object

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I think we should allow more than one subnet_id in a vpnservice object,
  but the model below limits it to a single subnet_id.
  https://github.com/openstack/neutron/blob/master/neutron/extensions/vpnaas.py
  RESOURCE_ATTRIBUTE_MAP = {

      'vpnservices': {
          'id': {'allow_post': False, 'allow_put': False,
                 'validate': {'type:uuid': None},
                 'is_visible': True,
                 'primary_key': True},
          'tenant_id': {'allow_post': True, 'allow_put': False,
                        'validate': {'type:string': None},
                        'required_by_policy': True,
                        'is_visible': True},
          'name': {'allow_post': True, 'allow_put': True,
                   'validate': {'type:string': None},
                   'is_visible': True, 'default': ''},
          'description': {'allow_post': True, 'allow_put': True,
                          'validate': {'type:string': None},
                          'is_visible': True, 'default': ''},
          'subnet_id': {'allow_post': True, 'allow_put': False,
                        'validate': {'type:uuid': None},
                        'is_visible': True},
          'router_id': {'allow_post': True, 'allow_put': False,
                        'validate': {'type:uuid': None},
                        'is_visible': True},
          'admin_state_up': {'allow_post': True, 'allow_put': True,
                             'default': True,
                             'convert_to': attr.convert_to_boolean,
                             'is_visible': True},
          'status': {'allow_post': False, 'allow_put': False,
                     'is_visible': True}
      },

  With such a limit, I don't think there is a way to expose the other
  subnets behind the router over VPN!
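
  For illustration only (the attribute name and validators here are
  assumptions, not a proposed upstream patch), a multi-subnet service
  could instead be modelled with a list attribute:

  'subnet_ids': {'allow_post': True, 'allow_put': True,
                 'convert_to': attr.convert_to_list,
                 'validate': {'type:uuid_list': None},
                 'is_visible': True},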

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258507] Re: Linux Bridge agent does not start in debian

2014-03-12 Thread Mark McClain
Launchpad is used to track bugs in upstream development.  You'll
need to file a bug with the Debian package maintainer.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258507

Title:
  Linux Bridge agent does not start in debian

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Right now the DAEMON_ARGS parameter in the init.d script is named
  CONFIG_FILE and is formatted incorrectly for where it is, which means
  that the daemon doesn't start at all on Debian 7 from your repositories.
  The daemon needs the config file to be prepended, and should also
  probably be provided with a log file.

  I apologise in advance if I'm filing this bug in the wrong place. :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258952] Re: tox failures due to out of memory

2014-03-12 Thread Mark McClain
This will be addressed via a refactoring of unit tests during Juno.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258952

Title:
  tox failures due to out of memory

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  I'm running tox -e py27 under Ubuntu 12.04 desktop 64-bit in a VM on
  my laptop with 4 GB of RAM allocated. Memory use during the run creeps
  up to 4 GB, drops back down to about 2 GB near the end, and in the end
  I get numerous tox failures, many indicating out of memory and too many
  open files:

  ...
  ======================================================================
  FAIL: neutron.tests.unit.test_agent_linux_utils.AgentUtilsExecuteTest.test_without_helper
  tags: worker-0
  ----------------------------------------------------------------------
  Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/opt/stack/neutron/neutron/tests/unit/test_agent_linux_utils.py", 
line 39, in test_without_helper
  result = utils.execute(["ls", self.test_file])
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 65, in execute
  addl_env=addl_env)
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 56, in 
create_process
  env=env)
File "/opt/stack/neutron/neutron/common/utils.py", line 125, in 
subprocess_popen
  close_fds=True, env=env)
File 
"/opt/stack/neutron/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/subprocess.py",
 line 44, in __init__
  subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
  errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1143, in _execute_child
  self.pid = os.fork()
  OSError: [Errno 12] Cannot allocate memory
  ======================================================================
  FAIL: process-returncode
  tags: worker-0
  ----------------------------------------------------------------------
  Binary content:
traceback (test/plain; charset="utf8")
  ======================================================================
  FAIL: neutron.tests.unit.test_agent_linux_utils.AgentUtilsExecuteTest.test_check_exit_code
  tags: worker-1
  ----------------------------------------------------------------------
  Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/opt/stack/neutron/neutron/tests/unit/test_agent_linux_utils.py", 
line 59, in test_check_exit_code
  check_exit_code=False)
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 65, in execute
  addl_env=addl_env)
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 56, in 
create_process
  env=env)
File "/opt/stack/neutron/neutron/common/utils.py", line 125, in 
subprocess_popen
  close_fds=True, env=env)
File 
"/opt/stack/neutron/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/subprocess.py",
 line 44, in __init__
  subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
File "/usr/lib/python2.7/subprocess.py", line 672, in __init__
  errread, errwrite) = self._get_handles(stdin, stdout, stderr)
File "/usr/lib/python2.7/subprocess.py", line 1038, in _get_handles
  p2cread, p2cwrite = self.pipe_cloexec()
File "/usr/lib/python2.7/subprocess.py", line 1091, in pipe_cloexec
  r, w = os.pipe()
  OSError: [Errno 24] Too many open files
  ======================================================================
  ...

  Ran 9290 (+1703) tests in 2704.305s (+299.051s)
  FAILED (id=1, failures=194 (+185), skips=324)

  Here is the version info:

  cm@vpn-a[2853] $ git log -n 1
  commit 3014e1e021b3fe59c75daae1734472c3a11582ee
  Merge: e22626e 5652e20
  Author: Jenkins 
  Date:   Sat Dec 7 23:05:30 2013 +

  Merge "Preserve floating ips when initializing l3 gateway
  interface"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258430] Re: DHCP Agent does not use /etc/resolv.conf settings when no dnsmasq_dns_server is set in configuration

2014-03-12 Thread Mark McClain
Sorry, this has already been corrected via another change.

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258430

Title:
  DHCP Agent does not use /etc/resolv.conf settings when no
  dnsmasq_dns_server is set in configuration

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi,

  The configuration file and the code say that the Neutron DHCP agent must
  use the DNS settings from /etc/resolv.conf:

  dhcp_agent.ini:
  # Use another DNS server before any in /etc/resolv.conf.
  # dnsmasq_dns_server =

  neutron/agent/linux/dhcp.py:
  cfg.StrOpt('dnsmasq_dns_server',
             help=_('Use another DNS server before any in '
                    '/etc/resolv.conf.')),

  In reality, dnsmasq is launched with the "--no-resolv" option
  (neutron/agent/linux/dhcp.py):

  ...
  def spawn_process(self):
      """Spawns a Dnsmasq process for the network."""
      env = {
          self.NEUTRON_NETWORK_ID_KEY: self.network.id,
      }

      cmd = [
          'dnsmasq',
          '--no-hosts',
          '--no-resolv',
          '--strict-order',
          '--bind-interfaces',
          '--interface=%s' % self.interface_name,
          '--except-interface=lo',
          '--pid-file=%s' % self.get_conf_file_name(
              'pid', ensure_conf_dir=True),
          '--dhcp-hostsfile=%s' % self._output_hosts_file(),
          '--dhcp-optsfile=%s' % self._output_opts_file(),
          '--leasefile-ro',
      ]
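
  A minimal sketch of the behaviour the option text implies (an
  illustration, not necessarily the fix that was merged): bypass
  /etc/resolv.conf only when an explicit upstream server is configured.

  def build_dns_args(dnsmasq_dns_server):
      # With no configured server, omit --no-resolv so dnsmasq honours
      # /etc/resolv.conf, as the option help text promises.
      if dnsmasq_dns_server:
          return ['--no-resolv', '--server=%s' % dnsmasq_dns_server]
      return []

  print(build_dns_args(None))        # [] -> /etc/resolv.conf is used
  print(build_dns_args('8.8.8.8'))   # ['--no-resolv', '--server=8.8.8.8']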

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291100] Re: UnsupportedVersion: Endpoint does not support RPC version 3.22

2014-03-12 Thread Bhuvan Arumugam
I couldn't replicate this issue. I was using the latest HEAD across all
components.

I think restarting all services helped.

I'll reopen it if it happens again.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291100

Title:
  UnsupportedVersion: Endpoint does not support RPC version 3.22

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The compute log emitted this error message. We are unable to delete
  VMs; they are stuck at task state "deleting".

  2014-03-11 22:24:32,321 (oslo.messaging.rpc.dispatcher): ERROR dispatcher _dispatch_and_reply Exception during message handling: Endpoint does not support RPC version 3.22
  Traceback (most recent call last):
    File "/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
      incoming.message))
    File "/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 185, in _dispatch
      raise UnsupportedVersion(version)
  UnsupportedVersion: Endpoint does not support RPC version 3.22
  2014-03-11 22:24:32,322 (oslo.messaging._drivers.common): ERROR common serialize_remote_exception Returning exception Endpoint does not support RPC version 3.22 to caller

  It is likely a regression caused by
  https://github.isg.apple.com/openstack/nova/commit/a7b5b975a48f132afa0fc8717c72ab3cb1f6545a#nova/compute/rpcapi.py,
  wherein the RPC version was bumped to 3.23.
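
  For mixed-version deployments, a hedged operational workaround (this
  assumes the [upgrade_levels] support present in nova of this era;
  verify against your release) is to pin the compute RPC API so newer
  services do not send 3.23-era calls to older endpoints:

  [upgrade_levels]
  # Version value is illustrative; match the oldest running compute.
  compute = 3.22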

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260902] Re: improve load distribution on rabbitmq servers

2014-03-12 Thread Mark McClain
** Also affects: oslo
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260902

Title:
  improve load distribution on rabbitmq servers

Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Currently, the neutron service reconnects to a rabbit server when it
  detects a connection failure. It always blindly picks the first rabbit
  server configured, and only tries to move on to the next one if that
  connection attempt fails. It never uses the second rabbit server if the
  first one succeeds.

  Instead, we should distribute the connection load across the available
  rabbit servers.
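
  One hedged way to get that distribution (illustrative, not oslo code):
  shuffle the configured host list before each reconnect, so each process
  starts its attempts from a random broker.

  import random

  def broker_order(rabbit_hosts):
      hosts = list(rabbit_hosts)
      random.shuffle(hosts)   # a random permutation per (re)connect
      return hosts

  print(broker_order(['server1:5672', 'server2:5672', 'server3:5672']))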

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo/+bug/1260902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261469] Re: Plugin switch issues undocumented

2014-03-12 Thread Mark McClain
Note: this bug was originally filed against Neutron.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261469

Title:
  Plugin switch issues undocumented

Status in OpenStack Manuals:
  New

Bug description:
  It seems that switching plugins may currently lead to issues even though
  the configuration is correct. The docs should therefore clearly state
  that such issues might occur when moving from one plugin to another, to
  save customers the headache of trying to figure out the issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1261469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261510] Re: Instance fails to spawn in tempest tests

2014-03-12 Thread Mark McClain
This has only shown up in the stable/havana queue and is now known to
be caused by a kernel fault.

** Changed in: nova
   Status: New => Invalid

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261510

Title:
  Instance fails to spawn in tempest tests

Status in OpenStack Neutron (virtual network service):
  Triaged
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This happened only 3 times in the past 12 hours, so nothing to worry
  about so far.

  Logstash query for the exact failure in [1] available at [2].
  I am also seeing more "Timeout waiting for thing" errors (not the same
  condition as bug 1254890, which affects the large_ops job and is due to
  the chatty nova/neutron interface). Logstash query for this at [3]
  (13 hits in the past 12 hours). I think they might have the same root
  cause.

  [1] http://logs.openstack.org/22/62322/2/check/check-tempest-dsvm-neutron-isolated/cce7146
  [2] http://logstash.openstack.org/#eyJzZWFyY2giOiJcImZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzXCIgQU5EICBcIkN1cnJlbnQgc3RhdHVzOiBCVUlMRFwiIEFORCBcIkN1cnJlbnQgdGFzayBzdGF0ZTogc3Bhd25pbmdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg3MjIzNzQ0Mjk2fQ==
  [3] http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4NzIyMzg2Mjg1MH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261674] Re: neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_daemon_loop FAIL

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261674

Title:
  
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_daemon_loop
  FAIL

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  gate-neutron-python27 fails for a patch not related to this test.
  The patch is:
  https://review.openstack.org/#/c/53609/

  The log is:
  ft1.6626: 
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_daemon_loop_StringException:
 Empty attachments:
stdout

  pythonlogging:'': {{{
  2013-12-17 06:06:10,979 INFO 
[neutron.plugins.openvswitch.agent.ovs_neutron_agent] Mapping physical network 
net1 to bridge tunnel_bridge_mapping
  2013-12-17 06:06:10,986 INFO 
[neutron.plugins.openvswitch.agent.ovs_neutron_agent] Agent out of sync with 
plugin!
  2013-12-17 06:06:10,986 INFO 
[neutron.plugins.openvswitch.agent.ovs_neutron_agent] Agent tunnel out of sync 
with plugin!
  2013-12-17 06:06:12,986 INFO 
[neutron.plugins.openvswitch.agent.ovs_neutron_agent] Agent tunnel out of sync 
with plugin!
  }}}

  stderr: {{{
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in fire_timers
  timer()
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/timer.py",
 line 56, in __call__
  cb(*args, **kw)
File "neutron/openstack/common/loopingcall.py", line 91, in _inner
  LOG.exception(_('in fixed duration looping call'))
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 955, in __call__
  return _mock_self._mock_call(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1010, in _mock_call
  raise effect
  Exception: Fake exception to get out of the loop
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in fire_timers
  timer()
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/timer.py",
 line 56, in __call__
  cb(*args, **kw)
File "neutron/openstack/common/loopingcall.py", line 91, in _inner
  LOG.exception(_('in fixed duration looping call'))
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 955, in __call__
  return _mock_self._mock_call(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1010, in _mock_call
  raise effect
  Exception: Fake exception to get out of the loop
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in fire_timers
  timer()
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/timer.py",
 line 56, in __call__
  cb(*args, **kw)
File "neutron/openstack/common/loopingcall.py", line 91, in _inner
  LOG.exception(_('in fixed duration looping call'))
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 955, in __call__
  return _mock_self._mock_call(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1010, in _mock_call
  raise effect
  Exception: Fake exception to get out of the loop
  }}}

  Traceback (most recent call last):
File "neutron/tests/unit/openvswitch/test_ovs_tunnel.py", line 534, in 
test_daemon_loop
  log_exception.assert_called_once_with("Error in agent event loop")
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 845, in assert_called_once_with
  raise AssertionError(msg)
  AssertionError: Expected to be called once. Called 7 times.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261674/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263446] Re: RPC Dispatcher implementation error

2014-03-12 Thread Mark McClain
Any logic errors should be fixed in the Oslo messaging project.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263446

Title:
  RPC Dispatcher implementation error

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  release: Havana
  file: neutron/openstack/common/rpc/dispatcher.py
  Code:
  def dispatch(self, ctxt, version, method, namespace, **kwargs):
      ..
      for proxyobj in self.callbacks:
          ..
          if is_compatible:
              kwargs = self._deserialize_args(ctxt, kwargs)
              result = getattr(proxyobj, method)(ctxt, **kwargs)
              return self.serializer.serialize_entity(ctxt, result)
      ..

  The dispatch method returns after calling the specified method on the
  FIRST proxyobj in self.callbacks. Currently this code is harmless,
  because there is only one proxy object in self.callbacks, but the logic
  is not right.
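
  A standalone toy illustrating the point (not the upstream code): the
  early return means only the first endpoint ever runs.

  class EndpointA(object):
      def ping(self, ctxt):
          return 'A'

  class EndpointB(object):
      def ping(self, ctxt):
          return 'B'

  def dispatch_first_only(callbacks, method, ctxt):
      for obj in callbacks:
          return getattr(obj, method)(ctxt)   # returns on the first match

  def dispatch_all(callbacks, method, ctxt):
      return [getattr(obj, method)(ctxt) for obj in callbacks]

  endpoints = [EndpointA(), EndpointB()]
  print(dispatch_first_only(endpoints, 'ping', {}))  # 'A'; B never called
  print(dispatch_all(endpoints, 'ping', {}))         # ['A', 'B']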

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1153280] Re: On quantum gate VM cannot get dhcp lease

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1153280

Title:
  On quantum gate VM cannot get dhcp lease

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I'm working on enabling ssh tests back in tempest
  https://review.openstack.org/#/c/22415/

  The test fails in the quantum gate. I did some digging, and it seems to
  me that the quantum setup in the devstack gate is broken.
  If I boot a VM I can see from the console log that the VM is failing to
  get a DHCP lease, which in turn causes metadata to fail and VM
  networking to be broken.

  Starting network...
  udhcpc (v1.20.1) started
  Sending discover...
  Sending discover...
  Sending discover...
  No lease, failing
  WARN: /etc/rc3.d/S40network failed
  cirros-ds 'net' up at 195.26
  checking http://169.254.169.254/2009-04-04/instance-id
  failed 1/20: up 196.16. request failed
  failed 2/20: up 198.79. request failed

  jenkins@wip-devstack-1362643791:~/workspace$ nova list
  +--------------------------------------+---------------+--------+-------------------+
  | ID                                   | Name          | Status | Networks          |
  +--------------------------------------+---------------+--------+-------------------+
  | 4741860b-afa8-4e70-935b-d3b4624796e1 | quantumvm0001 | ACTIVE | nova=172.24.4.228 |
  +--------------------------------------+---------------+--------+-------------------+

  This is the localrc generated by gate scripts:
  $ cat /opt/stack/new/devstack/localrc 
  Q_USE_DEBUG_COMMAND=True
  NETWORK_GATEWAY=10.1.0.1
  Q_USE_DEBUG_COMMAND=True
  NETWORK_GATEWAY=10.1.0.1
  DEST=/opt/stack/new
  ACTIVE_TIMEOUT=90
  BOOT_TIMEOUT=90
  ASSOCIATE_TIMEOUT=60
  TERMINATE_TIMEOUT=60
  MYSQL_PASSWORD=secret
  DATABASE_PASSWORD=secret
  RABBIT_PASSWORD=secret
  ADMIN_PASSWORD=secret
  SERVICE_PASSWORD=secret
  SERVICE_TOKEN=111222333444
  SWIFT_HASH=1234123412341234
  ROOTSLEEP=0
  ERROR_ON_CLONE=True
  ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,swift,cinder,c-api,c-vol,c-sch,n-cond,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
  SKIP_EXERCISES=boot_from_volume,client-env
  SERVICE_HOST=127.0.0.1
  # Screen console logs will capture service logs.
  SYSLOG=False
  SCREEN_LOGDIR=/opt/stack/new/screen-logs
  LOGFILE=/opt/stack/new/devstacklog.txt
  VERBOSE=True
  FIXED_RANGE=10.1.0.0/24
  FIXED_NETWORK_SIZE=256
  VIRT_DRIVER=libvirt
  SWIFT_REPLICAS=1
  SCREEN_DEV=False
  LOG_COLOR=False
  export OS_NO_CACHE=True
  CINDER_SECURE_DELETE=False
  API_RATE_LIMIT=False
  VOLUME_BACKING_FILE_SIZE=10G

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1153280/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259937] Re: RouterDNatDisabled is not used at all

2014-03-12 Thread Mark McClain
** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
Milestone: icehouse-rc1 => None

** Changed in: neutron
 Assignee: yong sheng gong (gongysh) => (unassigned)

** Changed in: neutron
   Importance: Low => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1259937

Title:
  RouterDNatDisabled is not used at all

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The class RouterDNatDisabled seems not to be used at all:
  https://github.com/openstack/neutron/blob/master/neutron/extensions/l3_ext_gw_mode.py#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1259937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 980037] Re: Service managers starting keystone-all don't know when its ready

2014-03-12 Thread Mark McClain
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/980037

Title:
  Service managers starting keystone-all don't know when its ready

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  If starting keystone-all with a service manager (systemd, for example),
  keystone has no way of reporting back to systemd that it is ready to
  serve HTTP requests, and as a result it's possible for systemd to return
  before keystone is ready.

  For example, on Fedora, where the systemd process start-up type is set
  to simple (i.e. just start the process and return):

  > /bin/systemctl stop openstack-keystone.service ; /bin/systemctl start openstack-keystone.service ; /usr/bin/keystone --token keystone_admin_token --endpoint http://127.0.0.1:35357/v2.0/ service-list
  Unable to communicate with identity service: 'NoneType' object has no attribute 'makefile'. (HTTP 400)
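
  For reference, a minimal sketch of the sd_notify(3) readiness handshake
  a service can perform once its listener is up (this assumes the unit
  uses Type=notify; it illustrates the protocol, not keystone's actual
  fix):

  import os
  import socket

  def notify_ready():
      addr = os.environ.get('NOTIFY_SOCKET')
      if not addr:
          return                      # not started by systemd
      if addr.startswith('@'):
          addr = '\0' + addr[1:]      # abstract-namespace socket
      sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
      sock.sendto(b'READY=1', addr)
      sock.close()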

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/980037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279683] Re: The problem generated by deleting the dhcp port by mistake

2014-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/74228
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=c3d73c82d38a982d2463e7ec9929dad02e2b0550
Submitter: Jenkins
Branch:master

commit c3d73c82d38a982d2463e7ec9929dad02e2b0550
Author: shihanzhang 
Date:   Tue Feb 18 10:27:02 2014 +0800

Fix problem of deleting dhcp port

when network is deleted, the dhcp port in that network will be deleted
automaticly, so '_cleanup_ports' should not clean dhcp port. I already
have a patch to fix the problem of deleting dhcp port.

Change-Id: I92f90a3dad6d76954d466e4b30ab7c46434ba7df
Closes-bug: #1279683


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279683

Title:
  The problem generated by deleting the dhcp port by mistake

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Tempest:
  Fix Released

Bug description:
  In my environment I deleted the dhcp port by mistake, and then found
  that VMs in the primary subnet could not get an IP. The reason is that
  when a dhcp port is deleted, neutron recreates the dhcp port
  automatically, but the VIF TAP device is not deleted, so you end up
  seeing one IP on two VIF TAP devices:
   
  root@shz-dev:~# ip netns exec qdhcp-ab049276-3b7c-41a2-b62c-3f587a02b0a6 ifconfig
  lo        Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:1 errors:0 dropped:0 overruns:0 frame:0
            TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:576 (576.0 B)  TX bytes:576 (576.0 B)

  tap4694b3c4-a6 Link encap:Ethernet  HWaddr fa:16:3e:41:f6:fc
            inet addr:50.50.50.2  Bcast:50.50.50.255  Mask:255.255.255.0
            inet6 addr: fe80::f816:3eff:fe41:f6fc/64 Scope:Link
            UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:796 (796.0 B)

  tapa546a666-31 Link encap:Ethernet  HWaddr fa:16:3e:98:dd:a7
            inet addr:50.50.50.2  Bcast:50.50.50.255  Mask:255.255.255.0
            inet6 addr: fe80::f816:3eff:fe98:dda7/64 Scope:Link
            UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:496 (496.0 B)

  Even if the problem is caused by an erroneous operation, I think the
  dhcp port should not be allowed to be deleted; a port on a router
  cannot be deleted either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291569] [NEW] "Default" naming for default security group can confuse end users

2014-03-12 Thread Liz Blanchard
Public bug reported:

It would be great to rename the "Default" security group to be "Default
security group" so that when users click on this name as a link in the
table it will be clear that this is a security group. Also, it will help
when they are selecting this security group from the list in the Launch
Instance modal.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291569

Title:
  "Default" naming for default security group can confuse end users

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It would be great to rename the "Default" security group to be
  "Default security group" so that when users click on this name as a
  link in the table it will be clear that this is a security group.
  Also, it will help when they are selecting this security group from
  the list in the Launch Instance modal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291569/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291570] [NEW] "Default" naming for default security group can confuse end users

2014-03-12 Thread Liz Blanchard
Public bug reported:

It would be great to rename the "Default" security group to be "Default
security group" so that when users click on this name as a link in the
table it will be clear that this is a security group. Also, it will help
when they are selecting this security group from the list in the Launch
Instance modal.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291570

Title:
  "Default" naming for default security group can confuse end users

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It would be great to rename the "Default" security group to be
  "Default security group" so that when users click on this name as a
  link in the table it will be clear that this is a security group.
  Also, it will help when they are selecting this security group from
  the list in the Launch Instance modal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291567] [NEW] TabGroups don't handle bad GET params

2014-03-12 Thread Matthew D. Wood
Public bug reported:

TabGroups and thus TabViews and TabbedTableViews don't handle bad GET
params well.

We found this during penetration testing:
project/volumes/?tab=

[Yahoo-eng-team] [Bug 1291565] [NEW] validate_networks does unnecessary querying to neutron in some cases

2014-03-12 Thread Aaron Rosen
Public bug reported:

This patch optimizes validate_networks so that it only queries neutron
when needed. Previously, this method would perform an additional net_list,
list_ports, and show_quota regardless of whether a request contains only
port_ids. If a request contains only port ids, we do not need to check
neutron for quota, as these ports are already allocated.
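
A minimal sketch of the check being described (the names here are
illustrative, not the actual nova patch):

from collections import namedtuple

RequestedNetwork = namedtuple('RequestedNetwork', ['network_id', 'port_id'])

def needs_quota_check(requested_networks):
    # If every entry names an existing port, those ports are already
    # allocated and counted against quota: no neutron calls needed.
    return not all(rn.port_id is not None for rn in requested_networks)

print(needs_quota_check([RequestedNetwork(None, 'port-1')]))  # False
print(needs_quota_check([RequestedNetwork('net-1', None)]))   # True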

** Affects: nova
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291565

Title:
  validate_networks does unnecessary querying to neutron in some cases

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  This patch optimizes validate_networks so that it only queries neutron
  when needed. Previously, this method would perform an additional net_list,
  list_ports, and show_quota regardless of whether a request contains only
  port_ids. If a request contains only port ids, we do not need to check
  neutron for quota, as these ports are already allocated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291535] Re: 'Unable to retrieve OVS kernel module version' when _not_ using DKMS openvswitch module

2014-03-12 Thread James Page
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Trusty)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Trusty)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291535

Title:
  'Unable to retrieve OVS kernel module version' when _not_ using DKMS
  openvswitch module

Status in OpenStack Neutron (virtual network service):
  New
Status in “neutron” package in Ubuntu:
  Triaged
Status in “neutron” source package in Trusty:
  Triaged

Bug description:
  If we are using openvswitch in a system with a newer kernel
  (3.13/trusty) it should have the features required for neutron and not
  require an openvswitch dkms package. Therefore we should be able to
  use the native module.

  In neutron/agent/linux/ovs_lib.py:

  def get_installed_ovs_klm_version():
      args = ["modinfo", "openvswitch"]
      try:
          cmd = utils.execute(args)
          for line in cmd.split('\n'):
              if 'version: ' in line and not 'srcversion' in line:
                  ver = re.findall("\d+\.\d+", line)
                  return ver[0]
      except Exception:
          LOG.exception(_("Unable to retrieve OVS kernel module version."))

  So if we run modinfo on a system without a DKMS package we get:
  $ modinfo openvswitch
  filename:       /lib/modules/3.13.0-16-generic/kernel/net/openvswitch/openvswitch.ko
  license:        GPL
  description:    Open vSwitch switching datapath
  srcversion:     1CEE031973F0E4024ACC848
  depends:        libcrc32c,vxlan,gre
  intree:         Y
  vermagic:       3.13.0-16-generic SMP mod_unload modversions
  signer:         Magrathea: Glacier signing key
  sig_key:        1A:EE:D8:17:C4:D5:29:55:C4:FA:C3:3A:02:37:FE:0A:93:44:6D:69
  sig_hashalgo:   sha512

  Because 'version' isn't provided we need an alternative way of
  checking if the openvswitch module has the required features.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291548] [NEW] verbose_name is unnecessary for hidden field

2014-03-12 Thread Akihiro Motoki
Public bug reported:

verbose_name is unnecessary for a hidden field.
Some verbose_name values do not have much meaning (they are just a mirror
of the form field name), but they are marked as translatable, and such
strings are difficult to translate. One I encountered is in
openstack_dashboard/dashboards/project/containers/tables.py:

metadata_loaded = tables.Column(get_metadata_loaded,
                                verbose_name=_("Metadata Loaded"),
                                status=True,
                                status_choices=METADATA_LOADED_CHOICES,
                                hidden=True)

metadata_loaded is difficult to translate, and there is no need to
translate it because it is invisible in the browser.

I would suggest deleting verbose_name from hidden fields.
According to my check, we have only four hidden fields with verbose_name,
so it should be easy to clean up.
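
A sketch of the suggested cleanup for the example above (a hidden column
never renders its label, so the translatable verbose_name can simply be
dropped):

metadata_loaded = tables.Column(get_metadata_loaded,
                                status=True,
                                status_choices=METADATA_LOADED_CHOICES,
                                hidden=True)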

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291548

Title:
  verbose_name is unnecessary for hidden field

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  verbose_name is unnecessary for a hidden field.
  Some verbose_name values do not have much meaning (they are just a mirror
  of the form field name), but they are marked as translatable, and such
  strings are difficult to translate. One I encountered is in
  openstack_dashboard/dashboards/project/containers/tables.py:

  metadata_loaded = tables.Column(get_metadata_loaded,
                                  verbose_name=_("Metadata Loaded"),
                                  status=True,
                                  status_choices=METADATA_LOADED_CHOICES,
                                  hidden=True)

  metadata_loaded is difficult to translate, and there is no need to
  translate it because it is invisible in the browser.

  I would suggest deleting verbose_name from hidden fields.
  According to my check, we have only four hidden fields with
  verbose_name, so it should be easy to clean up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291535] [NEW] 'Unable to retrieve OVS kernel module version' when _not_ using DKMS openvswitch module

2014-03-12 Thread Chris J Arges
Public bug reported:

If we are using openvswitch in a system with a newer kernel
(3.13/trusty) it should have the features required for neutron and not
require an openvswitch dkms package. Therefore we should be able to use
the native module.

In neutron/agent/linux/ovs_lib.py:

def get_installed_ovs_klm_version():
    args = ["modinfo", "openvswitch"]
    try:
        cmd = utils.execute(args)
        for line in cmd.split('\n'):
            if 'version: ' in line and not 'srcversion' in line:
                ver = re.findall("\d+\.\d+", line)
                return ver[0]
    except Exception:
        LOG.exception(_("Unable to retrieve OVS kernel module version."))

So if we run modinfo on a system without a DKMS package we get:
$ modinfo openvswitch
filename:       /lib/modules/3.13.0-16-generic/kernel/net/openvswitch/openvswitch.ko
license:        GPL
description:    Open vSwitch switching datapath
srcversion:     1CEE031973F0E4024ACC848
depends:        libcrc32c,vxlan,gre
intree:         Y
vermagic:       3.13.0-16-generic SMP mod_unload modversions
signer:         Magrathea: Glacier signing key
sig_key:        1A:EE:D8:17:C4:D5:29:55:C4:FA:C3:3A:02:37:FE:0A:93:44:6D:69
sig_hashalgo:   sha512

Because 'version' isn't provided we need an alternative way of checking
if the openvswitch module has the required features.
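
One hedged fallback (an illustration, not the upstream fix): when modinfo
reports no 'version:' field but the module is in-tree, treat the running
kernel release as the feature baseline.

import platform
import re
import subprocess

def get_ovs_klm_version():
    out = subprocess.check_output(['modinfo', 'openvswitch'],
                                  universal_newlines=True)
    intree = False
    for line in out.split('\n'):
        if line.startswith('version:'):
            ver = re.findall(r'\d+\.\d+', line)
            if ver:
                return ver[0]
        if line.startswith('intree:') and line.rstrip().endswith('Y'):
            intree = True
    if intree:
        # An in-tree module tracks the kernel, e.g. '3.13' on trusty.
        return '.'.join(platform.release().split('.')[:2])
    return None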

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291535

Title:
  'Unable to retrieve OVS kernel module version' when _not_ using DKMS
  openvswitch module

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If we are using openvswitch in a system with a newer kernel
  (3.13/trusty) it should have the features required for neutron and not
  require an openvswitch dkms package. Therefore we should be able to
  use the native module.

  In neutron/agent/linux/ovs_lib.py:

  def get_installed_ovs_klm_version():
      args = ["modinfo", "openvswitch"]
      try:
          cmd = utils.execute(args)
          for line in cmd.split('\n'):
              if 'version: ' in line and not 'srcversion' in line:
                  ver = re.findall("\d+\.\d+", line)
                  return ver[0]
      except Exception:
          LOG.exception(_("Unable to retrieve OVS kernel module version."))

  So if we run modinfo on a system without a DKMS package we get:
  $ modinfo openvswitch
  filename:       /lib/modules/3.13.0-16-generic/kernel/net/openvswitch/openvswitch.ko
  license:        GPL
  description:    Open vSwitch switching datapath
  srcversion:     1CEE031973F0E4024ACC848
  depends:        libcrc32c,vxlan,gre
  intree:         Y
  vermagic:       3.13.0-16-generic SMP mod_unload modversions
  signer:         Magrathea: Glacier signing key
  sig_key:        1A:EE:D8:17:C4:D5:29:55:C4:FA:C3:3A:02:37:FE:0A:93:44:6D:69
  sig_hashalgo:   sha512

  Because 'version' isn't provided we need an alternative way of
  checking if the openvswitch module has the required features.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291529] [NEW] Help text improvement in "Retrieve Password": mention private key rather than key pair

2014-03-12 Thread Akihiro Motoki
Public bug reported:

openstack_dashboard/dashboards/project/instances/templates/instances/_decryptpassword.html

A private key is used to decrypt a password.
It is better to update the message to mention that the "private key"
needs to be specified.

-{% trans "To decrypt your password you will need your key pair for this 
instance. Select your key pair file, or cop
y and paste the content of your private key file into the text area below, then 
click Decrypt Password."%}
+{% trans "To decrypt your password you will need the private key of 
your key pair for this instance. Select the pri
vate key file, or copy and paste the content of your private key file into the 
text area below, then click Decrypt Password
."%}

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291529

Title:
  Help text improvement in "Retrieve Password": mention private key
  rather than key pair

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
openstack_dashboard/dashboards/project/instances/templates/instances/_decryptpassword.html

  A private key is used to decrypt a password.
  It is better to update the message to mention that the "private key"
  needs to be specified.

  -{% trans "To decrypt your password you will need your key pair for 
this instance. Select your key pair file, or cop
  y and paste the content of your private key file into the text area below, 
then click Decrypt Password."%}
  +{% trans "To decrypt your password you will need the private key of 
your key pair for this instance. Select the pri
  vate key file, or copy and paste the content of your private key file into 
the text area below, then click Decrypt Password
  ."%}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291524] [NEW] "The volume size cannot be less than the volume size (%sGB)" is hard to understand

2014-03-12 Thread Akihiro Motoki
Public bug reported:

The error message "The volume size cannot be less than the volume size
(%sGB)" from the volume creation form is hard to understand because it
has two "the volume size". The former one is a volume to be created and
the latter is a source volume.

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291524

Title:
  "The volume size cannot be less than the volume size (%sGB)" is hard
  to understand

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The error message "The volume size cannot be less than the volume size
  (%sGB)" from the volume creation form is hard to understand because it
  has two "the volume size". The former one is a volume to be created
  and the latter is a source volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291522] [NEW] Some strings in routerrules template are not translatable

2014-03-12 Thread Akihiro Motoki
Public bug reported:

In 
openstack_dashboard/dashboards/project/routers/templates/routers/extensions/routerrules/grid.html,
"Conflicting Rule" and "Description" are not translatable.

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291522

Title:
  Some strings in routerrules template are not translatable

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In 
openstack_dashboard/dashboards/project/routers/templates/routers/extensions/routerrules/grid.html,
  "Conflicting Rule" and "Description" are not translatable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291515] Re: Recent Change to default state_path can silently break existing systems

2014-03-12 Thread Matt Riedemann
Adding cinder since the same change was made there:
https://review.openstack.org/#/c/52372/

** Changed in: nova
Milestone: None => icehouse-rc1

** Changed in: nova
   Importance: Undecided => High

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291515

Title:
  Recent Change to default state_path can silently  break existing
  systems

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The change to the default value of state_path introduced by
  I94502bcfac8b372271acd0dbc1710c0e3009b8e1 is problematic for the
  reasons set out in my -1 review of that change, which seems to have
  been skipped when the change was accepted.

  As implemented, the change will break any existing system that is
  using the default value of state_path, with no warning period, which
  goes beyond the scope of change for UpgradeImpact.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1291515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291512] [NEW] Swift functional test "test_delayed_delete_with_auth" is failing

2014-03-12 Thread Stuart McLaren
Public bug reported:

If the user running the test can't write to /var/lib/glance the test
fails:

======================================================================
FAIL: glance.tests.functional.store.test_swift.TestSwiftStore.test_delayed_delete_with_auth
----------------------------------------------------------------------
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "glance/tests/functional/store/test_swift.py", line 533, in 
test_delayed_delete_with_auth
image_id)
  File "glance/store/__init__.py", line 314, in 
schedule_delayed_delete_from_backend
(file_queue, _db_queue) = scrubber.get_scrub_queues()
  File "glance/scrubber.py", line 354, in get_scrub_queues
_file_queue = ScrubFileQueue()
  File "glance/scrubber.py", line 110, in __init__
utils.safe_mkdirs(self.scrubber_datadir)
  File "glance/common/utils.py", line 283, in safe_mkdirs
os.makedirs(path)
  File "/home/ubuntu/git/glance/.venv/lib/python2.7/os.py", line 150, in 
makedirs
makedirs(head, mode)
  File "/home/ubuntu/git/glance/.venv/lib/python2.7/os.py", line 157, in 
makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/var/lib/glance'
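
A hedged workaround for running the suite as an unprivileged user (the
scrubber_datadir option exists in glance's scrubber config; the path
shown is an assumption):

[DEFAULT]
# Point the scrubber at a user-writable directory instead of the
# default under /var/lib/glance.
scrubber_datadir = /tmp/glance-scrubber-test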

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1291512

Title:
  Swift functional test "test_delayed_delete_with_auth" is failing

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  If the user running the test can't write to /var/lib/glance the test
  fails:

  ======================================================================
  FAIL: glance.tests.functional.store.test_swift.TestSwiftStore.test_delayed_delete_with_auth
  ----------------------------------------------------------------------
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File "glance/tests/functional/store/test_swift.py", line 533, in 
test_delayed_delete_with_auth
  image_id)
File "glance/store/__init__.py", line 314, in 
schedule_delayed_delete_from_backend
  (file_queue, _db_queue) = scrubber.get_scrub_queues()
File "glance/scrubber.py", line 354, in get_scrub_queues
  _file_queue = ScrubFileQueue()
File "glance/scrubber.py", line 110, in __init__
  utils.safe_mkdirs(self.scrubber_datadir)
File "glance/common/utils.py", line 283, in safe_mkdirs
  os.makedirs(path)
File "/home/ubuntu/git/glance/.venv/lib/python2.7/os.py", line 150, in 
makedirs
  makedirs(head, mode)
File "/home/ubuntu/git/glance/.venv/lib/python2.7/os.py", line 157, in 
makedirs
  mkdir(name, mode)
  OSError: [Errno 13] Permission denied: '/var/lib/glance'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1291512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291513] [NEW] Translator cannot control the word order of "update the extra spec value for"

2014-03-12 Thread Akihiro Motoki
Public bug reported:

In 
openstack_dashboard/dashboards/admin/flavors/templates/flavors/extras/_edit.html,
"Update the "extra spec" value for" and "key" variable are concatenated so 
translators cannot control the word order.

We should use blocktrans.

-{% trans 'Update the "extra spec" value for' %} "{{ key }}"
+{% blocktrans %}Update the "extra spec" value for "{{ key }}"{% endblocktrans %}

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291513

Title:
  Translator cannot control the word order of "update the extra spec
  value for"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In 
openstack_dashboard/dashboards/admin/flavors/templates/flavors/extras/_edit.html,
  "Update the "extra spec" value for" and "key" variable are concatenated so 
translators cannot control the word order.

  We should use blocktrans.

  -{% trans 'Update the "extra spec" value for' %} "{{ key }}"
  +{% blocktrans %}Update the "extra spec" value for "{{ key }}"{% endblocktrans %}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291515] [NEW] Recent Change to default state_path can silently break existing systems

2014-03-12 Thread Phil Day
Public bug reported:

The change to the default value of state_path introduced by
I94502bcfac8b372271acd0dbc1710c0e3009b8e1 is problematic for the reasons
set out in my -1 review of that change, which seems to have been skipped
when the change was accepted.

As implemented, the change will break any existing system that uses the
default value of state_path, with no warning period, which goes beyond
the scope of change for UpgradeImpact.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: High
 Assignee: Phil Day (philip-day)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291515

Title:
  Recent Change to default state_path can silently break existing
  systems

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The change to the default value of state_path introduced by
  I94502bcfac8b372271acd0dbc1710c0e3009b8e1 is problematic for the reasons
  set out in my -1 review of that change, which seems to have been skipped
  when the change was accepted.

  As implemented, the change will break any existing system that uses the
  default value of state_path, with no warning period, which goes beyond
  the scope of change for UpgradeImpact.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1291515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291505] [NEW] strings in test plugin need not be translatable

2014-03-12 Thread Akihiro Motoki
Public bug reported:

test_plugin is only used for testing, so there is no need for strings in
test_plugin to be translatable.

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291505

Title:
  strings in test plugin need not be translatable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  test_plugin is only used for testing, so there is no need for strings in
  test_plugin to be translatable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291311] Re: Instance status is not updated when compute machine is disconnected

2014-03-12 Thread Leandro Ignacio Costantino
** Project changed: horizon => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291311

Title:
   Instance status is not updated when compute machine is disconnected

Status in OpenStack Compute (Nova):
  New

Bug description:
  Status of an Instance (VM) is not updated when the compute machine is
  disconnected from the setup.  Also, deletion of the VM is not possible.

  Set up :-

  2 compute nodes ( Compute-1 and Compute-2)

  root@controller:~# nova list
  
  +--------------------------------------+-----------+---------+------------+-------------+---------------------------------+
  | ID                                   | Name      | Status  | Task State | Power State | Networks                        |
  +--------------------------------------+-----------+---------+------------+-------------+---------------------------------+
  | aa73d8e6-f7e9-47d1-a590-06e3c134d33b | THIRD_VM  | SHUTOFF | deleting   | Shutdown    | INT_NET=50.50.1.2, 172.18.7.18  |
  | e809191d-fa66-45f6-84a7-6f1acfe460c8 | UBUNTU_VM | ACTIVE  | None       | Running     | INT_NET=50.50.1.4, 172.18.7.20  |
  +--------------------------------------+-----------+---------+------------+-------------+---------------------------------+

  COMPUTE-1 hosting == UBUNTU_VM
  COMPUTE-2 hosting == THIRD_VM

  Disconnected COMPUTE-2 from the setup (powered it off) and then
  tried to delete THIRD_VM, but it shows the following error:

  root@controller:~# nova delete aa73d8e6-f7e9-47d1-a590-06e3c134d33b

  The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-6931f510-033b-4c1d-a367-390a0a4421eb)
  ERROR: Unable to delete any of the specified servers.

  I have also restarted the controller machine, but there is no change in
  the status of the VM; it is still showing as deleting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291489] [NEW] list-secgroup fails if no secgroups defined for server

2014-03-12 Thread Bhuvan Arumugam
Public bug reported:

There are no issues if at least one secgroup is defined for the server.

If no secgroups are defined for the server, it fails with a 400 error.

$ nova --debug list-secgroup vp25q00cs-osfe11b124f4.isg.apple.com
.
.
.
RESP: [400] CaseInsensitiveDict({'date': 'Wed, 12 Mar 2014 17:08:11 GMT', 
'content-length': '141', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61'})
RESP BODY: {"badRequest": {"message": "The server could not comply with the 
request since it is either malformed or otherwise incorrect.", "code": 400}}

DEBUG (shell:740) The server could not comply with the request since it is 
either malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61)
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/novaclient/shell.py", line 737, in 
main
OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
  File "/Library/Python/2.7/site-packages/novaclient/shell.py", line 673, in 
main
args.func(self.cs, args)
  File "/Library/Python/2.7/site-packages/novaclient/v1_1/shell.py", line 1904, 
in do_list_secgroup
groups = server.list_security_group()
  File "/Library/Python/2.7/site-packages/novaclient/v1_1/servers.py", line 
328, in list_security_group
return self.manager.list_security_group(self)
  File "/Library/Python/2.7/site-packages/novaclient/v1_1/servers.py", line 
883, in list_security_group
base.getid(server), 'security_groups', SecurityGroup)
  File "/Library/Python/2.7/site-packages/novaclient/base.py", line 61, in _list
_resp, body = self.api.client.get(url)
  File "/Library/Python/2.7/site-packages/novaclient/client.py", line 229, in 
get
return self._cs_request(url, 'GET', **kwargs)
  File "/Library/Python/2.7/site-packages/novaclient/client.py", line 213, in 
_cs_request
**kwargs)
  File "/Library/Python/2.7/site-packages/novaclient/client.py", line 195, in 
_time_request
resp, body = self.request(url, method, **kwargs)
  File "/Library/Python/2.7/site-packages/novaclient/client.py", line 189, in 
request
raise exceptions.from_response(resp, body, url, method)
BadRequest: The server could not comply with the request since it is either 
malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61)
ERROR: The server could not comply with the request since it is either 
malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291489

Title:
  list-secgroup fails if no secgroups defined for server

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are no issues if at least one secgroup is defined for the server.

  If no secgroups are defined for the server, it fails with a 400 error.

  $ nova --debug list-secgroup vp25q00cs-osfe11b124f4.isg.apple.com
  .
  .
  .
  RESP: [400] CaseInsensitiveDict({'date': 'Wed, 12 Mar 2014 17:08:11 GMT', 
'content-length': '141', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61'})
  RESP BODY: {"badRequest": {"message": "The server could not comply with the 
request since it is either malformed or otherwise incorrect.", "code": 400}}

  DEBUG (shell:740) The server could not comply with the request since it is 
either malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61)
  Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/novaclient/shell.py", line 737, in 
main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File "/Library/Python/2.7/site-packages/novaclient/shell.py", line 673, in 
main
  args.func(self.cs, args)
File "/Library/Python/2.7/site-packages/novaclient/v1_1/shell.py", line 
1904, in do_list_secgroup
  groups = server.list_security_group()
File "/Library/Python/2.7/site-packages/novaclient/v1_1/servers.py", line 
328, in list_security_group
  return self.manager.list_security_group(self)
File "/Library/Python/2.7/site-packages/novaclient/v1_1/servers.py", line 
883, in list_security_group
  base.getid(server), 'security_groups', SecurityGroup)
File "/Library/Python/2.7/site-packages/novaclient/base.py", line 61, in 
_list
  _resp, body = self.api.client.get(url)
File "/Library/Python/2.7/site-packages/novaclient/client.py", line 229, in 
get
  return self._cs_request(url, 'GET', **kwargs)
File "/Library/Python/2.7/site-packages/novaclient/client.py", line 213, in 
_cs_request
  **kwargs)
File "/Library/Python/2.7/site-packages/novaclient/client.py", line 195, in 
_time_request
  resp, body = s

[Yahoo-eng-team] [Bug 1288230] Re: A project shouldn't be deleted when there are instances running

2014-03-12 Thread Tracy Jones
** Tags added: compute

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288230

Title:
  A project shouldn't be deleted when there are instances running

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  Currently, a project that has an instance (or instances) can be deleted in
  Horizon by a user with administrative permissions, without even a warning
  message. An active project (meaning a project that has running instances)
  should be protected from deletion. If administrators would like to delete
  it, they should delete the instances first.

  Version-Release number of selected component (if applicable):
  openstack-nova-cert-2013.2.2-2.el6ost.noarch
  python-novaclient-2.15.0-2.el6ost.noarch
  openstack-nova-common-2013.2.2-2.el6ost.noarch
  openstack-nova-api-2013.2.2-2.el6ost.noarch
  openstack-nova-compute-2013.2.2-2.el6ost.noarch
  openstack-nova-conductor-2013.2.2-2.el6ost.noarch
  openstack-nova-novncproxy-2013.2.2-2.el6ost.noarch
  openstack-nova-scheduler-2013.2.2-2.el6ost.noarch
  python-nova-2013.2.2-2.el6ost.noarch
  openstack-nova-console-2013.2.2-2.el6ost.noarch
  openstack-nova-network-2013.2.2-2.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Create an new project.
  2. Launch one (or more) instances. 
  3. Try to delete the project with the admin.

  Actual results:
  The instances are still running, but they are accessible only through the
  admin -> instances tab.

  Expected results:
  The administrator shouldn't be able to delete the project as long as there 
are instances running.
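
  One possible shape of the guard, sketched with python-novaclient (the
  check itself is a suggestion, not an agreed fix; `nova` is assumed to be
  an authenticated admin client):

      def project_has_instances(nova, project_id):
          # List instances across tenants and filter to the project being
          # deleted; a non-empty result should block the deletion.
          servers = nova.servers.list(
              search_opts={'all_tenants': 1, 'tenant_id': project_id})
          return bool(servers)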

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277168] Re: having oslo.sphinx in namespace package causes issues with devstack

2014-03-12 Thread Doug Hellmann
tripleo is still using oslo.sphinx in their new os-cloud-config
repository

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277168

Title:
  having oslo.sphinx in namespace package causes issues with devstack

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in Django OpenStack Auth:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Core Infrastructure:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Tempest:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Committed

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2014-January/023759.html

  We've decided to rename oslo.sphinx to oslosphinx. This will require
  small changes in the doc builds for a lot of the other projects.

  The problem seems to be when we pip install -e oslo.config on the
  system, then pip install oslo.sphinx in a venv. oslo.config is
  unavailable in the venv, apparently because the namespace package for
  o.s causes the egg-link for o.c to be ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290807] Re: Resize on vCenter failed because of _VM_REFS_CACHE

2014-03-12 Thread Shawn Hartsock
** Changed in: nova
Milestone: None => icehouse-rc1

** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

** Changed in: openstack-vmwareapi-team
   Status: New => In Progress

** Changed in: openstack-vmwareapi-team
   Importance: Undecided => Critical

** Changed in: nova
   Status: Confirmed => In Progress

** Changed in: nova
 Assignee: (unassigned) => Feng Xi Yan (yanfengxi)

** Changed in: nova
   Importance: Critical => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290807

Title:
  Resize on vCenter failed because of _VM_REFS_CACHE

Status in OpenStack Compute (Nova):
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  This bug is for nova/master branch.

  The resize action in a VMware environment always fails.

  The reason is that nova resized the "-orig" original VM rather than the
  newly cloned VM.

  It is caused by an outdated vm_ref in _VM_REFS_CACHE.

  In nova/virt/vmwareapi/vmops.py:

  def finish_migration(self, context, migration, instance, disk_info,
                       network_info, image_meta, resize_instance=False,
                       block_device_info=None, power_on=True):
      """Completes a resize, turning on the migrated instance."""
      if resize_instance:
          client_factory = self._session._get_vim().client.factory
          vm_ref = vm_util.get_vm_ref(self._session, instance)
          vm_resize_spec = vm_util.get_vm_resize_spec(client_factory,
                                                      instance)
          reconfig_task = self._session._call_method(
              self._session._get_vim(),
              "ReconfigVM_Task", vm_ref,
              spec=vm_resize_spec)
          ...

  From this code, we can see we get vm_ref by vm_util.get_vm_ref.

  In nova/virt/vmwareapi/vm_util.py

  @vm_ref_cache_from_instance
  def get_vm_ref(session, instance):
      """Get reference to the VM through uuid or vm name."""
      uuid = instance['uuid']
      vm_ref = (_get_vm_ref_from_vm_uuid(session, uuid) or
                _get_vm_ref_from_extraconfig(session, uuid) or
                _get_vm_ref_from_uuid(session, uuid) or
                _get_vm_ref_from_name(session, instance['name']))
      if vm_ref is None:
          raise exception.InstanceNotFound(instance_id=uuid)
      return vm_ref

  The "get_vm_ref" method is decorated by "vm_ref_cache_from_instance".
  "vm_ref_cache_from_instance" will firstly check cache variable 
_VM_REFS_CACHE. But _VM_REFS_CACHE contains a outdated vm_ref(The original one) 
keyed by our instance_uuid, because the virtual machine's name is changed.
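
  A sketch of the likely fix direction (the eviction helper named here is
  an assumption, not the actual patch): invalidate the cached entry before
  resolving the reference again, so the rename done during resize cannot
  leave a stale moref behind.

      def get_fresh_vm_ref(session, instance):
          # Drop the cached vm_ref for this instance's uuid, then let the
          # decorated get_vm_ref() repopulate the cache with a fresh lookup.
          _vm_ref_cache_delete(instance['uuid'])  # assumed eviction helper
          return get_vm_ref(session, instance)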

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1138408] Re: delete_tap_interface method is needed

2014-03-12 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova/grizzly
   Importance: Undecided => Low

** Changed in: nova/grizzly
   Status: New => In Progress

** Changed in: nova/grizzly
 Assignee: (unassigned) => Nikola Đipanov (ndipanov)

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1138408

Title:
  delete_tap_interface method is needed

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress

Bug description:
  In nova using libvirt, there is a method to create a tap interface in
  linux_net.py, but there is no method for removing one.
  Usually, nova plugins either use delete_ovs_vif_port, which invokes
  OVS-specific commands and will not work in an OVS-free environment, or the
  second option available under QuantumLinuxBridgeInterfaceDriver::unplug,
  but that one is tied to the linux bridge. So there is no native call for
  removing a tap interface based on the dev name.
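
  A minimal sketch of what such a helper could look like in linux_net.py
  (the tolerated exit codes are an assumption, mirroring how the existing
  helpers shell out to the ip tool):

      from nova import utils

      def delete_tap_dev(dev):
          # Remove a tap device by name, without depending on OVS or on
          # the linux bridge driver.
          utils.execute('ip', 'link', 'delete', dev, run_as_root=True,
                        check_exit_code=[0, 2, 254])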

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1138408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291471] [NEW] can't boot a volume from a volume that has been created from a snapshot

2014-03-12 Thread Yogev Rabl
Public bug reported:

Description of problem:
A volume that has been created from a snapshot of a volume failed to boot an 
instance with the following error: 

2014-03-12 18:03:39.790 9573 ERROR nova.compute.manager 
[req-f67dabd7-f013-483a-a386-d5a511b86be7 1654b1a85ba647df87fc9258962949fb 
87761b8cc7d34be29063ad24073b2172] [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] Instance failed block device setup
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] Traceback (most recent call last):
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1387, in 
_prep_block_device
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] self._await_block_device_map_created) 
+
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 283, in 
attach_block_devices
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] block_device_mapping)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 170, in 
attach
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] connector)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 176, in wrapper
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] res = method(self, ctx, volume_id, 
*args, **kwargs)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 274, in 
initialize_connection
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] connector)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 321, in 
initialize_connection
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] {'connector': 
connector})[1]['connection_info']
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 250, in 
_action
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] return self.api.client.post(url, 
body=body)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] return self._cs_request(url, 'POST', 
**kwargs)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 174, in 
_cs_request
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] **kwargs)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 157, in request
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] raise exceptions.from_response(resp, 
body)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] ClientException: The server has either 
erred or is incapable of performing the requested operation. (HTTP 500) 
(Request-ID: req-e990ac94-97d9-41f3-b1e1-ca63e7d1d2bc)

2014-03-12 18:03:40.289 9573 ERROR nova.openstack.common.rpc.amqp 
[req-f67dabd7-f013-483a-a386-d5a511b86be7 1654b1a85ba647df87fc9258962949fb 
87761b8cc7d34be29063ad24073b2172] Exception during message handling
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp **args)
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172

[Yahoo-eng-team] [Bug 1291465] [NEW] Allow user defined ids.

2014-03-12 Thread Haneef Ali
Public bug reported:

This is a feature request

We should allow user-supplied domain_id/user_id. There are some policy
definitions in policy.v2.cloudadmin.json which rely on the user being in a
particular domain. We really don't want to have a UUID in policy files to
identify the domain_id. One way to achieve this is to bootstrap the entries
via raw SQL. It would be better if we allowed the same to be achieved via
the REST API. So basically the ids are given by the caller; if the caller
doesn't send an id, then generate a UUID.
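
On the create path the proposal boils down to something like this (a
sketch; the function name is illustrative):

    import uuid

    def choose_id(ref):
        # Honor a caller-supplied id; otherwise fall back to a generated
        # UUID, which is today's behavior.
        return ref.get('id') or uuid.uuid4().hex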

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291465

Title:
  Allow user defined ids.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This is a feature request

  We should allow user-supplied domain_id/user_id. There are some policy
  definitions in policy.v2.cloudadmin.json which rely on the user being in a
  particular domain. We really don't want to have a UUID in policy files to
  identify the domain_id. One way to achieve this is to bootstrap the
  entries via raw SQL. It would be better if we allowed the same to be
  achieved via the REST API. So basically the ids are given by the caller;
  if the caller doesn't send an id, then generate a UUID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289935] Re: Revoke API calls non-existent method in revoke map synchronize

2014-03-12 Thread James Page
** Also affects: keystone (Ubuntu Trusty)
   Importance: Critical
 Assignee: Corey Bryant (corey.bryant)
   Status: Confirmed

** Changed in: keystone (Ubuntu Trusty)
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1289935

Title:
  Revoke API calls non-existent method in revoke map synchronize

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in “keystone” package in Ubuntu:
  In Progress
Status in “keystone” source package in Trusty:
  In Progress

Bug description:
  The "revoke_api" calls a non-existent method on the revoke tree object
  during the synchronize method. This results in a non-recoverable error
  in checking validity of a token if there are expired revocation
  events.

  Code block in question:

  
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/contrib/revoke/core.py?id=a240705b07b852616e39a2b93253f0a9a09a3ef9#n79

  with self._store.get_lock(_TREE_KEY):
      for e in self._current_events:
          if e.revoked_at < cutoff:
              self.revoke_map.remove(e)
              self._current_events.remove(e)
          else:
              break

  The code should call self.revoke_map.remove_event(e) not
  self.revoke_map.remove(e).
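
  For reference, the corrected block would read (same code as above with
  only the method name changed, per the description):

      with self._store.get_lock(_TREE_KEY):
          for e in self._current_events:
              if e.revoked_at < cutoff:
                  self.revoke_map.remove_event(e)  # was: revoke_map.remove(e)
                  self._current_events.remove(e)
              else:
                  break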

  Example traceback:

  2014-03-08 20:20:59.338 TRACE keystone.common.wsgi TypeError: object of type 
'NoneType' has no len()
  2014-03-08 20:20:59.338 TRACE keystone.common.wsgi
  2014-03-08 20:20:59.340 INFO eventlet.wsgi.server [-] 172.16.28.1 - - 
[08/Mar/2014 20:20:59] "POST /v2.0/tokens HTTP/1.1" 400 239 0.004711
  2014-03-08 20:20:59.351 DEBUG keystone.middleware.core [-] Auth token not in 
the request header. Will not build auth context. from (pid=14327) 
process_request /opt/stack/keystone/keystone/middleware/core.py:253
  2014-03-08 20:20:59.352 DEBUG keystone.common.wsgi [-] arg_dict: {} from 
(pid=14327) __call__ /opt/stack/keystone/keystone/common/wsgi.py:180
  2014-03-08 20:20:59.353 ERROR keystone.common.wsgi [-] object of type 
'NoneType' has no len()
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi Traceback (most recent 
call last):
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 205, in __call__
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/openstack/common/versionutils.py", line 102, in 
wrapped
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi return func(*args, 
**kwargs)
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/token/controllers.py", line 97, in authenticate
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi context, auth)
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/token/controllers.py", line 255, in 
_authenticate_local
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi if len(username) > 
CONF.max_param_size:
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi TypeError: object of type 
'NoneType' has no len()
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi
  2014-03-08 20:20:59.355 INFO eventlet.wsgi.server [-] 172.16.28.1 - - 
[08/Mar/2014 20:20:59] "POST /v2.0/tokens HTTP/1.1" 400 239 0.004078
  2014-03-08 20:20:59.385 DEBUG keystone.common.wsgi [-] arg_dict: {} from 
(pid=14327) __call__ /opt/stack/keystone/keystone/common/wsgi.py:180
  2014-03-08 20:20:59.386 INFO eventlet.wsgi.server [-] 172.16.28.100 - - 
[08/Mar/2014 20:20:59] "GET / HTTP/1.1" 300 1103 0.001378
  2014-03-08 20:20:59.401 DEBUG keystone.middleware.core [-] Auth token not in 
the request header. Will not build auth context. from (pid=14327) 
process_request /opt/stack/keystone/keystone/middleware/core.py:253
  2014-03-08 20:20:59.403 DEBUG keystone.common.wsgi [-] arg_dict: {} from 
(pid=14327) __call__ /opt/stack/keystone/keystone/common/wsgi.py:180
  2014-03-08 20:20:59.412 DEBUG keystone.notifications [-] CADF Event: 
{'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator': 
{'typeURI': 'service/security/account/user', 'host': {'agent': 
'python-requests/1.2.3 CPython/2.7.5+ Linux/3.11.0-12-generic', 'address': 
'172.16.28.100'}, 'id': 'openstack:b0d57b38-6f65-43aa-b0ef-b807db297e5b', 
'name': u'5b55216e7b1742978dca4ce4f721a6d3'}, 'target': {'typeURI': 
'service/security/account/user', 'id': 
'openstack:006ecd17-f59d-4bc4-9fb5-cde076e7607c'}, 'observer': {'typeURI': 
'service/security', 'id': 'openstack:5b7eecb3-de9b-486c-9683-11d50d965cf8'}, 
'eventType': 'activity', 'eventTime': '2014-03-08T19:20:59.412018+', 
'action': 'authenticate', 'outcome': 'pending', 'id': 
'openstack:41e1caa6-4e8d-47f9-8a87-3e5d23c2e22d'} from (pid=14327) 
_send_audit_notification /opt/stack/keystone/k

[Yahoo-eng-team] [Bug 1171601] Re: dbapi_use_tpool exposes problems in eventlet

2014-03-12 Thread Victor Sergeyev
After the patch https://review.openstack.org/#/c/60031/ (Remove eventlet
tpool from common db.api) was merged, eventlet tpool is no longer used for
database access in the Oslo code.

** No longer affects: oslo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171601

Title:
  dbapi_use_tpool exposes problems in eventlet

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The dbapi_use_tpool option doesn't work completely because of problems
  in eventlet.  Even though this is technically an eventlet issue, it's
  important for Nova so this bug is to track the issue getting fixed in
  eventlet.

  There is a patch in progress here:

  https://bitbucket.org/eventlet/eventlet/pull-request/29/fix-use-of-
  semaphore-with-tpool-issue-137/diff

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291442] [NEW] Top bar padding issue on older versions of IE

2014-03-12 Thread Justin Pomeroy
Public bug reported:

On Internet Explorer versions 8 and 9, the top bar has too much bottom
padding. This is not a problem on IE 10 (or Chrome, Firefox, etc.).
Removing the bottom padding fixes the problem on these browsers, but of
course then causes problems in the other browsers.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "ie9_topbar_padding_bottom.png"
   
https://bugs.launchpad.net/bugs/1291442/+attachment/4020299/+files/ie9_topbar_padding_bottom.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291442

Title:
  Top bar padding issue on older versions of IE

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Internet Explorer versions 8 and 9, the top bar has too much bottom
  padding. This is not a problem on IE 10 (or Chrome, Firefox, etc.).
  Removing the bottom padding fixes the problem on these browsers, but
  of course then causes problems in the other browsers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291431] [NEW] New Neutron unauthorised exception

2014-03-12 Thread Julie Pichon
Public bug reported:

In case of token expiry, the Neutron client tries to refresh the token -
but if the auth_url isn't specified it can fail to do so (auth_url isn't
a required value when authenticating with a token). The Neutron client
was recently updated to properly handle this case and raise an
exception.

https://review.openstack.org/#/c/53461/

This new exception should be added to the dashboard list of unauthorised
exceptions, I think, in order for it to be handled correctly - that is, to
return a forbidden/unauthorised error rather than an internal server error.

https://github.com/openstack/horizon/blob/master/openstack_dashboard/exceptions.py
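
Concretely, the fix would add the new client exception to the unauthorized
tuple in that file; a sketch (the exact neutronclient class name is an
assumption pending the client release):

    from neutronclient.common import exceptions as neutronclient

    UNAUTHORIZED = (
        # ... existing keystoneclient/novaclient entries elided ...
        neutronclient.Unauthorized,  # assumed new exception class
    )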

** Affects: horizon
 Importance: Medium
 Status: New


** Tags: low-hanging-fruit

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291431

Title:
  New Neutron unauthorised exception

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In case of token expiry, the Neutron client tries to refresh the token
  - but if the auth_url isn't specified it can fail to do so (auth_url
  isn't a required value when authenticating with a token). The Neutron
  client was recently updated to properly handle this case and raise an
  exception.

  https://review.openstack.org/#/c/53461/

  This new exception should be added to the dashboard list of
  unauthorised exceptions, I think, in order for it to be handled correctly
  - that is, to return a forbidden/unauthorised error rather than an
  internal server error.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/exceptions.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289935] Re: Revoke API calls non-existent method in revoke map synchronize

2014-03-12 Thread Chris J Arges
This also affects the keystone version in Trusty.

** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: keystone (Ubuntu)
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1289935

Title:
  Revoke API calls non-existent method in revoke map synchronize

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in “keystone” package in Ubuntu:
  New

Bug description:
  The "revoke_api" calls a non-existent method on the revoke tree object
  during the synchronize method. This results in a non-recoverable error
  in checking validity of a token if there are expired revocation
  events.

  Code block in question:

  
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/contrib/revoke/core.py?id=a240705b07b852616e39a2b93253f0a9a09a3ef9#n79

  with self._store.get_lock(_TREE_KEY):
      for e in self._current_events:
          if e.revoked_at < cutoff:
              self.revoke_map.remove(e)
              self._current_events.remove(e)
          else:
              break

  The code should call self.revoke_map.remove_event(e) not
  self.revoke_map.remove(e).

  Example traceback:

  2014-03-08 20:20:59.338 TRACE keystone.common.wsgi TypeError: object of type 
'NoneType' has no len()
  2014-03-08 20:20:59.338 TRACE keystone.common.wsgi
  2014-03-08 20:20:59.340 INFO eventlet.wsgi.server [-] 172.16.28.1 - - 
[08/Mar/2014 20:20:59] "POST /v2.0/tokens HTTP/1.1" 400 239 0.004711
  2014-03-08 20:20:59.351 DEBUG keystone.middleware.core [-] Auth token not in 
the request header. Will not build auth context. from (pid=14327) 
process_request /opt/stack/keystone/keystone/middleware/core.py:253
  2014-03-08 20:20:59.352 DEBUG keystone.common.wsgi [-] arg_dict: {} from 
(pid=14327) __call__ /opt/stack/keystone/keystone/common/wsgi.py:180
  2014-03-08 20:20:59.353 ERROR keystone.common.wsgi [-] object of type 
'NoneType' has no len()
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi Traceback (most recent 
call last):
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 205, in __call__
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/openstack/common/versionutils.py", line 102, in 
wrapped
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi return func(*args, 
**kwargs)
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/token/controllers.py", line 97, in authenticate
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi context, auth)
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/token/controllers.py", line 255, in 
_authenticate_local
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi if len(username) > 
CONF.max_param_size:
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi TypeError: object of type 
'NoneType' has no len()
  2014-03-08 20:20:59.353 TRACE keystone.common.wsgi
  2014-03-08 20:20:59.355 INFO eventlet.wsgi.server [-] 172.16.28.1 - - 
[08/Mar/2014 20:20:59] "POST /v2.0/tokens HTTP/1.1" 400 239 0.004078
  2014-03-08 20:20:59.385 DEBUG keystone.common.wsgi [-] arg_dict: {} from 
(pid=14327) __call__ /opt/stack/keystone/keystone/common/wsgi.py:180
  2014-03-08 20:20:59.386 INFO eventlet.wsgi.server [-] 172.16.28.100 - - 
[08/Mar/2014 20:20:59] "GET / HTTP/1.1" 300 1103 0.001378
  2014-03-08 20:20:59.401 DEBUG keystone.middleware.core [-] Auth token not in 
the request header. Will not build auth context. from (pid=14327) 
process_request /opt/stack/keystone/keystone/middleware/core.py:253
  2014-03-08 20:20:59.403 DEBUG keystone.common.wsgi [-] arg_dict: {} from 
(pid=14327) __call__ /opt/stack/keystone/keystone/common/wsgi.py:180
  2014-03-08 20:20:59.412 DEBUG keystone.notifications [-] CADF Event: 
{'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator': 
{'typeURI': 'service/security/account/user', 'host': {'agent': 
'python-requests/1.2.3 CPython/2.7.5+ Linux/3.11.0-12-generic', 'address': 
'172.16.28.100'}, 'id': 'openstack:b0d57b38-6f65-43aa-b0ef-b807db297e5b', 
'name': u'5b55216e7b1742978dca4ce4f721a6d3'}, 'target': {'typeURI': 
'service/security/account/user', 'id': 
'openstack:006ecd17-f59d-4bc4-9fb5-cde076e7607c'}, 'observer': {'typeURI': 
'service/security', 'id': 'openstack:5b7eecb3-de9b-486c-9683-11d50d965cf8'}, 
'eventType': 'activity', 'eventTime': '2014-03-08T19:20:59.412018+', 
'action': 'authenticate', 'outcome': 'pending', 'id': 
'openstack:41e1caa6-4e8d-47f9-8a87-3e5d23c2e22d'} from (pid=14327) 
_send_audit_notification /opt/stack/keystone/keystone/notifications.py:289
  2014-03-08 20:20:59.447 DEBUG keystone.notifications

[Yahoo-eng-team] [Bug 1275415] Re: Absolute path to blkid.

2014-03-12 Thread Scott Moser
is there a reason that you wouldn't have the right path set up?
My general feeling is that hard coded full paths defeat the purpose of $PATH 
with exactly *no* benefit and added knowledge that is possibly wrong.

Ie, if we use '/sbin/blkid', then if blkid is in $PATH but *not* in
/sbin, then I've just broken something that otherwise would have worked
fine.  It also makes mock tests *more* difficult.
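
For the test-suite case, the usual alternative to hard-coding /sbin/blkid
is to put a stub first on $PATH; a sketch (helper name and stub output are
illustrative):

    import os
    import tempfile

    def prepend_stub_blkid():
        # Create a fake 'blkid' and put its directory first on $PATH so
        # the code under test finds it without an absolute path.
        d = tempfile.mkdtemp()
        stub = os.path.join(d, 'blkid')
        with open(stub, 'w') as f:
            f.write('#!/bin/sh\necho \'/dev/vda1: LABEL="test"\'\n')
        os.chmod(stub, 0o755)
        os.environ['PATH'] = d + os.pathsep + os.environ['PATH']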


** Changed in: cloud-init
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1275415

Title:
  Absolute path to blkid.

Status in Init scripts for use on cloud images:
  Won't Fix

Bug description:
  Dear Scott and everybody,

  in Debian, we needed to provide the absolute path to blkid as follows
  in order to run the regression tests.

  Description: Fix the path to blkid in test suite
  Author: Thomas Goirand 
  Last-Update: 2013-05-28

  --- cloud-init-0.7.2.orig/cloudinit/util.py
  +++ cloud-init-0.7.2/cloudinit/util.py
  @@ -998,7 +998,7 @@ def find_devs_with(criteria=None, oforma
         LABEL=
         UUID=
       """
  -    blk_id_cmd = ['blkid']
  +    blk_id_cmd = ['/sbin/blkid']
       options = []
       if criteria:
           # Search for block devices with tokens named NAME that

  Could you consider applying it ?

  Cheers,

  -- 
  Charles Plessy
  Tsurumi, Kanagawa, Japan
  Uploader of cloud-init in Debian.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1275415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291417] [NEW] Radware lbaas driver needs to handle pool members coming from different subnets

2014-03-12 Thread Evgeny Fedoruk
Public bug reported:

Radware LBaaS driver should be able to handle pool members when each
member is on a different subnet.

This is dependent on change https://review.openstack.org/#/c/69009/

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas radware

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

** Description changed:

  Radware LBaaS driver should be able to handle pool members when each
  member is on different sub-net.
+ 
+ This is dependent on change https://review.openstack.org/#/c/69009/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291417

Title:
  Radware lbaas driver needs to handle pool members coming from different
  subnets

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Radware LBaaS driver should be able to handle pool members when each
  member is on a different subnet.

  This is dependent on change https://review.openstack.org/#/c/69009/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291423] [NEW] revocation events sync slows responses to all authenticated calls

2014-03-12 Thread Adam Young
Public bug reported:

There is a noticeable lag when doing multiple calls to Keystone.  The
server shows in the log:

 KVS lock acquired for: os-revoke-events acquire
/opt/stack/keystone/keystone/common/kvs/core.py:375

Putting the following delay in mitigates it significantly

delta = datetime.timedelta(seconds=1)
if self._last_fetch and self._last_fetch > timeutils.utcnow() + delta:
    return
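
The apparent intent is "skip the backend synchronize if the last fetch
happened within the last second"; as a self-contained sketch (class and
attribute names assumed):

    import datetime

    class EventFetchThrottle(object):
        # Sketch of the proposed mitigation, not actual keystone code.
        interval = datetime.timedelta(seconds=1)

        def __init__(self):
            self._last_fetch = None

        def should_fetch(self, now):
            # Only allow a fetch if none happened within the interval.
            if self._last_fetch and now - self._last_fetch < self.interval:
                return False
            self._last_fetch = now
            return True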

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291423

Title:
  revocation events sync slows responses to all authenticated calls

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There is a noticeable lag when doing multiple calls to Keystone.  The
  server shows in the log:

   KVS lock acquired for: os-revoke-events acquire
  /opt/stack/keystone/keystone/common/kvs/core.py:375

  Putting the following delay in mitigates it significantly

  delta = datetime.timedelta(seconds=1)
  if self._last_fetch and self._last_fetch > timeutils.utcnow() + delta:
      return

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291414] [NEW] image create/edit image wizard name with white spaces only

2014-03-12 Thread Benny Kopilov
Public bug reported:

Description of problem:
When we create/modify an image name, the user interface does not allow
setting an empty string.
When the user sets a single white space, the image name is changed to the
image_id.

When a name contains only white spaces it should be denied as an empty
string.

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
Create an image from the dashboard.
Change the name with the edit option, setting spaces only.
Save; the name is changed to the image_id.


Actual results:
The image name is taken from the image_id in the database.


Expected results:
A name of only white spaces should be rejected by the UI.
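
On the Horizon side this amounts to a one-line form check; a sketch
(form and field names assumed, not the actual Horizon form):

    from django import forms

    class ImageForm(forms.Form):
        name = forms.CharField(max_length=255)

        def clean_name(self):
            # Reject names that are empty once whitespace is stripped.
            name = self.cleaned_data['name']
            if not name.strip():
                raise forms.ValidationError("Image name may not be blank.")
            return name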

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291414

Title:
  image create/edit image wizard name with white spaces only

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  When we create/modify an image name, the user interface does not allow
  setting an empty string.
  When the user sets a single white space, the image name is changed to the
  image_id.

  When a name contains only white spaces it should be denied as an empty
  string.

  Version-Release number of selected component (if applicable):

  
  How reproducible:
  always

  Steps to Reproduce:
  Create an image from the dashboard.
  Change the name with the edit option, setting spaces only.
  Save; the name is changed to the image_id.


  Actual results:
  The image name is taken from the image_id in the database.


  Expected results:
  A name of only white spaces should be rejected by the UI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291161] Re: Need a property in glance metadata to indicate the vm id when creating a vm snapshot

2014-03-12 Thread sahid
This needs a blueprint to be accepted.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291161

Title:
  Need a property in glance metadata to indicate the vm id when creating
  a vm snapshot

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In order to manage VM snapshots in glance conveniently, we need to know
  which images in glance are captured from a VM.
  So we need to add a new property to the glance metadata when creating a VM
  snapshot, for example: server_id = vm uuid. This new property will help to
  filter the images when using glance image-list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291393] [NEW] domain_id in User/Group/Project should be immutable

2014-03-12 Thread Henry Nash
Public bug reported:

Today we allow the domain_id in User, Group and Project entities to be
updated….effectively moving the entity between domains.  With today's
policy capability this represents a potential security hole if you are
trying to enforce strict domain admin type of roles.  We should allow a
cloud provider to disable this current update ability…and make the
domain_id attribute immutable in the same way we do for the id of the
entity.
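
A sketch of the kind of check the proposal implies (not actual keystone
code; the exception usage is an assumption):

    from keystone import exception

    def check_domain_id_unchanged(old_ref, new_ref):
        # Reject updates that would move an entity across domains.
        if ('domain_id' in new_ref
                and new_ref['domain_id'] != old_ref['domain_id']):
            raise exception.ForbiddenAction(
                action='update of immutable attribute domain_id')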

** Affects: keystone
 Importance: High
 Assignee: Henry Nash (henry-nash)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291393

Title:
  domain_id in User/Group/Project should be immutable

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Today we allow the domain_id in User, Group and Project entities to be
  updated….effectively moving the entity between domains.  With today's
  policy capability this represents a potential security hole if you are
  trying to enforce strict domain admin type of roles.  We should allow
  a cloud provider to disable this current update ability…and make the
  domain_id attribute immutable in the same way we do for the id of the
  entity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290895] Re: Difficult to understand message when using incorrect role against object in Neutron

2014-03-12 Thread Eugene Nikanorov
That's intended behavior. A user that doesn't have access to a resource
should not know whether it exists or not.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290895

Title:
  Difficult to understand message when using incorrect role against
  object in Neutron

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When a user runs an action against an object in neutron for which they
  don't have authority (perhaps their role allows reading the object,
  but not updating it), they get the message "The resource could not be found".
  For example: a user doesn't have the privilege to edit a network,
  attempts to do so, and ends up getting the resource-not-found message.

  This is a bad message because the object they just read is now
  reported as not existing. This is not true; the root issue is that they
  do not have authority over it.

   One can argue that for security reasons, we should state that the object
   does not exist. However, it creates an odd scenario where you have
   certain roles that can read an object, but then not write to it.

   I'm proposing that we change the message to "The resource could not be
   found or user's role does not have sufficient privileges to run the
   operation."

  Two identified test cases applicable to this would be the remove/edit
  networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1290895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291366] [NEW] documentation should advise against using pki_setup and ssl_setup

2014-03-12 Thread Adam Young
Public bug reported:

Both of these tools generate self-signed CA certificates.  As such,
they are only appropriate for development deployments, and should be
treated as such.  While sites with mature PKI policies would recognize
this, the majority of people new to OpenStack are not PKI experts, and
are using the provided tools.  The documentation at
http://docs.openstack.org/developer/keystone/configuration.html
#certificates-for-pki should state this clearly.

** Affects: keystone
 Importance: Undecided
 Assignee: Adam Young (ayoung)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Adam Young (ayoung)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291366

Title:
  documentation should advise against using pki_setup and ssl_setup

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Both of these tools generate self-signed CA certificates. As such,
  they are only appropriate for development deployments and should be
  treated as such. While sites with mature PKI policies would recognize
  this, the majority of people new to OpenStack are not PKI experts and
  are using the provided tools.
  http://docs.openstack.org/developer/keystone/configuration.html#certificates-for-pki
  should state this clearly.
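
  As a quick way to see the issue, a sketch (assuming a recent
  'cryptography' package; the certificate path is hypothetical) that
  flags a self-signed certificate such as the ones these tools emit:

    from cryptography import x509

    def is_self_signed(pem_path):
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        # A self-signed certificate is its own issuer; a production
        # deployment should chain to an externally trusted CA instead.
        return cert.issuer == cert.subject

    print(is_self_signed("/etc/keystone/ssl/certs/ca.pem"))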

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291367] [NEW] Leading, trailing spaces are truncated for security group names on dashboard

2014-03-12 Thread Abhishek Kekane
Public bug reported:

When you create a security group with leading and trailing white space
in its name, you can see the spaces using the "nova secgroup-list"
command, whereas in the Horizon dashboard under Security Groups the
same security group name is displayed without the leading and trailing
white space.

If you click on Edit Security Group, however, the security group name
is displayed with its leading and trailing white space.

Steps to reproduce:

1. Create a security group with leading and trailing white space in its
name:
   nova secgroup-create ' Security Group Name ' 'Security Group Description'

2. Run "secgroup-list" from command line you can see Security Group Name with 
leading and trailing spaces.
 nova secgroup-list

++-++
| Id | Name| Description|
++-++
| 2  |  Security Group Name  | Security Group Description |
| 1  | default | default|
++-++

3. Log in to the Horizon dashboard.
4. Click on the "Access & Security" link in the tab.
5. It will show you the list of available security groups.
6. Check for the security group name you have created.
7. It will display the name as "Security Group Name" instead of
" Security Group Name ".
8. Click on the More button and select "Edit Security Group".
9. In the popup box the security group name will be displayed with its
leading and trailing white space.

IMO, we should display security group names on the dashboard as they
are stored in the database, i.e. with leading and trailing white space.

If we instead remove the leading and trailing spaces from names while
creating security groups, we need to write a migration script that also
strips the spaces from existing security group names; a sketch of such
a migration is below.
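
A hypothetical sketch of that migration (table and column names are
assumed for illustration, not taken from the real schema; a real
migration would also need to handle name collisions after stripping):

  import sqlalchemy as sa

  def upgrade(migrate_engine):
      meta = sa.MetaData(bind=migrate_engine)
      secgroups = sa.Table('security_groups', meta, autoload=True)
      for row in migrate_engine.execute(
              sa.select([secgroups.c.id, secgroups.c.name])):
          stripped = row.name.strip()
          if stripped != row.name:
              # Rewrite only the rows whose names actually carry
              # leading or trailing white space.
              migrate_engine.execute(
                  secgroups.update()
                  .where(secgroups.c.id == row.id)
                  .values(name=stripped))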

** Affects: horizon
 Importance: Undecided
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New


** Tags: ntt

** Changed in: horizon
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291367

[Yahoo-eng-team] [Bug 1291364] [NEW] _destroy_evacuated_instances fails randomly with high number of deleted instances

2014-03-12 Thread Luis Fernandez
Public bug reported:

In our production environment (2013.2.1), we're facing a random error
thrown while starting nova-compute in Hyper-V nodes.

The following exception is thrown while calling
'_destroy_evacuated_instances':

2014-03-05 16:30:58.802 7248 ERROR nova.openstack.common.threadgroup [-]
'NoneType' object is not iterable
2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
(...)
2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup   File 
"C:\Python27\lib\site-packages\nova\compute\manager.py", line 532, in 
_get_instances_on_driver
2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
name_map = dict((instance['name'], instance) for instance in instances)
2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup TypeError: 
'NoneType' object is not iterable

Full trace: http://paste.openstack.org/show/73243/

Our first guess is that this problem is related to the number of
deleted instances in our deployment (~3000); they are all fetched in
order to check for evacuated instances (as Hyper-V does not implement
"list_instance_uuids").

In the case of KVM this error does not happen, as KVM uses a smarter
method to get this list, based on the UUIDs of the instances.

Although this is being reported against Hyper-V, the problem could
occur with any driver that does not implement "list_instance_uuids"; a
sketch of a defensive guard is below.
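
A minimal sketch of such a guard (based on the trace above, not the
actual Nova fix): treat a None result from the deleted-instance lookup
like an empty result instead of iterating over it.

  def build_name_map(instances):
      # instances can come back as None when the lookup misbehaves;
      # 'or []' avoids "TypeError: 'NoneType' object is not iterable".
      return dict((instance['name'], instance)
                  for instance in (instances or []))

The longer-term fix suggested above is to implement
"list_instance_uuids" in the Hyper-V driver so the UUID-based path is
used instead.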

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute conductor hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291364

Title:
  _destroy_evacuated_instances fails randomly with high number of
  deleted instances

Status in OpenStack Compute (Nova):
  New

Bug description:
  In our production environment (2013.2.1), we're facing a random error
  thrown while starting nova-compute in Hyper-V nodes.

  The following exception is thrown while calling
  '_destroy_evacuated_instances':

  2014-03-05 16:30:58.802 7248 ERROR nova.openstack.common.threadgroup [-]
  'NoneType' object is not iterable
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  (...)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup   File 
"C:\Python27\lib\site-packages\nova\compute\manager.py", line 532, in 
_get_instances_on_driver
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
name_map = dict((instance['name'], instance) for instance in instances)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
TypeError: 'NoneType' object is not iterable

  Full trace: http://paste.openstack.org/show/73243/

  Our first guess is that this problem is related to the number of
  deleted instances in our deployment (~3000); they are all fetched in
  order to check for evacuated instances (as Hyper-V does not implement
  "list_instance_uuids").

  In the case of KVM this error does not happen, as KVM uses a smarter
  method to get this list, based on the UUIDs of the instances.

  Although this is being reported against Hyper-V, the problem could
  occur with any driver that does not implement "list_instance_uuids".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291336] Re: Unused parameter in attributes.py

2014-03-12 Thread shihanzhang
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291336

Title:
  Unused parameter in attributes.py

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  There are many methods which have an unused parameter, for example:

  def _validate_uuid_or_none(data, valid_values=None):
      if data is not None:
          return _validate_uuid(data)
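
  For what it's worth, the parameter is likely kept for a uniform
  validator signature. An illustrative standalone sketch (not
  Neutron's actual dispatch table) of why every validator accepts
  (data, valid_values) even when it ignores valid_values:

    def _validate_string(data, valid_values=None):
        if not isinstance(data, str):
            return "'%s' is not a string" % (data,)

    def _validate_values(data, valid_values=None):
        if data not in (valid_values or []):
            return "'%s' is not in %s" % (data, valid_values)

    validators = {'type:string': _validate_string,
                  'type:values': _validate_values}

    def validate(rule, data, valid_values=None):
        # Uniform call site: the dispatcher cannot know which
        # validators actually consume valid_values.
        return validators[rule](data, valid_values)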

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

