[Yahoo-eng-team] [Bug 1504892] [NEW] create a version package

2015-10-11 Thread Steve Martinelli
Public bug reported:

The logic for handling the way keystone detects versions (v2 vs v3)
should be isolated to its own top-level Python package. There are entry
points for these files in keystone-paste.ini (pre-Liberty) and setup.cfg
(as of Liberty), so we need to make sure this work is tracked and
documented.

** Affects: keystone
 Importance: Undecided
 Assignee: Steve Martinelli (stevemar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1504892

Title:
  create a version package

Status in Keystone:
  In Progress

Bug description:
  The logic for handling the way keystone detects versions (v2 vs v3)
  should be isolated to its own top-level Python package. There are
  entry points for these files in keystone-paste.ini (pre-Liberty) and
  setup.cfg (as of Liberty), so we need to make sure this work is
  tracked and documented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1504892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504891] [NEW] http docs are out of date

2015-10-11 Thread Steve Martinelli
Public bug reported:

The docs in this section:
http://docs.openstack.org/developer/keystone/http-api.html#i-am-a-deployer

reference an editable section in keystone-paste.ini ([app:service_v3]).

This section is now managed by entry points in setup.cfg.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1504891

Title:
  http docs are out of date

Status in Keystone:
  New

Bug description:
  The docs in this section:
  http://docs.openstack.org/developer/keystone/http-api.html#i-am-a-deployer

  reference an editable section in keystone-paste.ini
  ([app:service_v3]).

  This section is now managed by entry points in setup.cfg.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1504891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456064] Re: VMware instance missing IP address when using config drive

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456064

Title:
  VMware instance missing IP address when using config drive

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  This has the same root cause as the race bug: https://bugs.launchpad.net/nova/+bug/1249065
  http://status.openstack.org/elastic-recheck/index.html#1249065

  When the VMware driver uses a config drive, the IP address may not get
  injected, because the instance nw_info cache is missing.

  Here is the related code in the nova VMware driver and the config
  drive code:

  
  https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L671
  inst_md = instance_metadata.InstanceMetadata(instance,
                                               content=injected_files,
                                               extra_md=extra_md)

  https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L146
  # get network info, and the rendered network template
  if network_info is None:
      network_info = instance.info_cache.network_info

  In vmwareapi/vmops.py, the network_info is not passed to the instance
  metadata API, so the metadata API uses instance.info_cache.network_info
  as the network info. But when instance.info_cache.network_info is
  missing, the network info ends up empty, too.
  This is why VMware instances sometimes do not get an IP address
  injected when using a config drive.
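
  As a minimal, hedged sketch of the kind of fix this implies (not the
  actual nova patch): the driver already holds the network_info it built
  for the instance, and InstanceMetadata takes a network_info keyword
  (as the base.py snippet above suggests), so it can be passed in
  explicitly instead of relying on the possibly missing info cache.

      # Illustrative only; assumes the surrounding nova vmops.py context.
      from nova.api.metadata import base as instance_metadata

      inst_md = instance_metadata.InstanceMetadata(
          instance,
          content=injected_files,
          extra_md=extra_md,
          network_info=network_info)  # avoid falling back to the info cache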

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465443] Re: Hyper-V: Live migration does not copy configdrive to new host

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465443

Title:
  Hyper-V: Live migration does not copy configdrive to new host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  Performing a live migration on Hyper-V does not copy the configdrive
  to the destination. This can cause trouble, since the configdrive is
  essential. For example, performing a second live migration on the same
  instance will result in an exception, since it tries to copy the
  configdrive (a file that does not exist) to another destination.

  This is caused by incorrectly copying the configdrive (wrong
  destination path).

  Log sample, after a LOG.info was introduced, in order to observe the
  error:

  2015-06-15 15:43:31.242 14768 INFO nova.virt.hyperv.pathutils 
[req-a85a92e9-b562-4398-b2ae-8ccbf2d1f525 70a2dc588be9409c9aea370aa119391f 
19c78e5db79444e7ac33c5af18ae29fc - - -] Copy file from 
C:\OpenStack\Instances\instance-5970\configdrive.iso to weighty-secreta
  2015-06-15 15:43:31.273 14768 INFO nova.virt.hyperv.serialconsoleops 
[req-a85a92e9-b562-4398-b2ae-8ccbf2d1f525 70a2dc588be9409c9aea370aa119391f 
19c78e5db79444e7ac33c5af18ae29fc - - -] Stopping instance instance-5970 
serial console handler.
  2015-06-15 15:43:31.289 14768 INFO nova.virt.hyperv.pathutils 
[req-a85a92e9-b562-4398-b2ae-8ccbf2d1f525 70a2dc588be9409c9aea370aa119391f 
19c78e5db79444e7ac33c5af18ae29fc - - -] Copy file from 
C:\OpenStack\Instances\instance-5970\console.log to 
\\weighty-secreta\C$\OpenStack\Instances\instance-5970\console.log

  The log sample shows configdrive.iso being copied from the source
  ``C:\OpenStack\Instances\instance-5970\configdrive.iso`` to the
  destination ``weighty-secreta``, which is incorrect (correct:
  ``\\weighty-secreta\C$\OpenStack\Instances\instance-5970\configdrive.iso``).
  The console.log paths, by contrast, are built correctly and that file
  is copied as expected.

  Even though the configdrive.iso destination is wrong, the copy
  operation completes successfully, which is why no exception is
  raised.
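
  A small, hedged sketch of the path handling the report points at
  (function name and base directories are assumptions, not the actual
  nova code): the destination must be the full UNC path on the target
  host, exactly as the console.log copy in the log already does.

      import ntpath

      def remote_configdrive_path(dest_host, instance_name,
                                  remote_base=r'C$\OpenStack\Instances'):
          # Build \\<host>\C$\OpenStack\Instances\<instance>\configdrive.iso
          # instead of copying to the bare host name.
          return ntpath.join(r'\\' + dest_host, remote_base,
                             instance_name, 'configdrive.iso')

      # remote_configdrive_path('weighty-secreta', 'instance-5970')
      # -> \\weighty-secreta\C$\OpenStack\Instances\instance-5970\configdrive.iso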

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457517] Re: Unable to boot from volume when flavor disk too small

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457517

Title:
  Unable to boot from volume when flavor disk too small

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Vivid:
  Fix Committed

Bug description:
  [Impact]

   * Without the backport, booting from a volume requires the flavor
  disk size to be larger than the volume size, which is wrong. This
  patch skips the flavor disk size check when booting from a volume.

  [Test Case]

   * 1. create a bootable volume
     2. boot from this bootable volume with a flavor that has disk size
        smaller than the volume size
     3. error should be reported complaining disk size too small
     4. apply this patch
     5. boot from that bootable volume with a flavor that has disk size
        smaller than the volume size again
     6. boot should succeed

  [Regression Potential]

   * none

  
  Version: 1:2015.1.0-0ubuntu1~cloud0 on Ubuntu 14.04

  I attempt to boot an instance from a volume:

  nova boot --nic net-id=[NET ID] --flavor v.512mb --block-device
  source=volume,dest=volume,id=[VOLUME
  ID],bus=virtio,device=vda,bootindex=0,shutdown=preserve vm

  This results in nova-api raising a FlavorDiskTooSmall exception in the
  "_check_requested_image" function in compute/api.py. However,
  according to [1], the root disk limit should not apply to volumes.

  [1] http://docs.openstack.org/admin-guide-cloud/content/customize-flavors.html
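
  A hedged sketch of the check being described (illustrative only, not
  the actual _check_requested_image code): the root disk limit should
  simply not be applied when the instance boots from a volume.

      def check_image_fits_flavor(image_size_bytes, flavor_root_gb,
                                  is_volume_backed):
          # Skip the root-disk check for volume-backed instances, since
          # the flavor's root disk is not used in that case.
          if is_volume_backed:
              return
          if flavor_root_gb and image_size_bytes > flavor_root_gb * 1024 ** 3:
              raise Exception("Flavor's disk is too small for requested image.")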

  Log (first line is debug output I added showing that it's looking at
  the image that the volume was created from):

  2015-05-21 10:28:00.586 25835 INFO nova.compute.api 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] image: {'min_disk': 0, 'status': 
'active', 'min_ram': 0, 'properties': {u'container_format': u'bare', 
u'min_ram': u'0', u'disk_format': u'qcow2', u'image_name': u'Ubuntu 14.04 
64-bit', u'image_id': u'cf0dffef-30ef-4032-add0-516e88048d85', 
u'libvirt_cpu_mode': u'host-passthrough', u'checksum': 
u'76a965427d2866f006ddd2aac66ed5b9', u'min_disk': u'0', u'size': u'255524864'}, 
'size': 21474836480}
  2015-05-21 10:28:00.587 25835 INFO nova.api.openstack.wsgi 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] HTTP exception thrown: Flavor's disk is 
too small for requested image.

  Temporary solution: I have a special flavor for volume-backed
  instances, so I just set the root disk on those to 0, but this doesn't
  work if volumes are used with other flavors.
  Reproduce: create a flavor with a 1 GB root disk size, then try to
  boot an instance from a volume created from an image that is larger
  than 1 GB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1203413] Re: VM launch fails with Neutron in "admin" tenant if "admin" and "demo" tenants have secgroups with a same name "web"

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => Fix Committed

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1203413

Title:
  VM launch fails with Neutron in "admin" tenant if "admin" and "demo"
  tenants have secgroups with a same name "web"

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Using Grizzly with Neutron: If there are multiple security groups with
  the same name (in other tenants for example), it is not possible to
  boot an instance with this security group as Horizon will only use the
  name of the security group.

  Example from logs:
  2013-07-21 03:39:12.432 ERROR nova.network.security_group.quantum_driver 
[req-aaca5681-72b8-41dc-a89c-9a5c95c7eff4 33fe423e114c4586a573514b3e98341e 
e91fe07ea4834f8487c5cec7deaa2eac] Quantum Error: Multiple security_group 
matches found for name 'web', use an ID to be more specific.
  2013-07-21 03:39:12.439 ERROR nova.api.openstack 
[req-aaca5681-72b8-41dc-a89c-9a5c95c7eff4 33fe423e114c4586a573514b3e98341e 
e91fe07ea4834f8487c5cec7deaa2eac] Caught error: Multiple security_group matches 
found for name 'web', use an ID to be more specific.
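
  A hedged, generic sketch of the fix direction implied above (the
  helper and its arguments are assumptions, not the actual Horizon
  code): resolve the selected security groups to their IDs within the
  current tenant before passing them on, so a duplicate name in another
  tenant can never match.

      def security_group_ids(tenant_groups, selected_names):
          # tenant_groups: security group objects of the current tenant,
          # each with .id and .name attributes.
          by_name = {group.name: group.id for group in tenant_groups}
          return [by_name[name] for name in selected_names
                  if name in by_name]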

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1203413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482354] Re: Setting "enable_quotas"=False disables Neutron in GUI

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

** Changed in: horizon/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482354

Title:
  Setting "enable_quotas"=False disables Neutron in  GUI

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  Omitting OPENSTACK_NEUTRON_NETWORK["enable_quotas"] or setting it to
  False will result in the Create Network, Create Subnet, and Create
  Router buttons not showing up when logged in as the demo account.
  KeyError exceptions are thrown.

  These three side effects happen because the code in the views uses the
  following construct:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/networks/tables.py#L94

  usages = quotas.tenant_quota_usages(request)
  if usages['networks']['available'] <= 0:

  If enable_quotas is False, then quotas.tenant_quota_usages does not
  add the 'available' key to the usages dict, and therefore a KeyError
  for 'available' is thrown. This ends up aborting the whole is_allowed
  method in horizon.BaseTable and therefore hiding the button.

  quotas.tenant_quota_usages will not add the available key for usage
  items which are disabled.
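
  A hedged sketch of a defensive variant of the construct quoted above
  (illustrative only, not the actual Horizon patch): treat a missing
  'available' key, which is what happens when quotas are disabled, as
  unrestricted instead of letting the KeyError hide the button.

      def network_create_blocked(usages):
          # When quotas are disabled there is no 'available' entry, so do
          # not block the Create Network button in that case.
          try:
              return usages['networks']['available'] <= 0
          except KeyError:
              return False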

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474618] Re: N1KV network and port creates failing from dashboard

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

** Changed in: horizon/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474618

Title:
  N1KV network and port creates failing from dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  Due to the change in name of the "profile" attribute in Neutron
  attribute extensions for networks and ports, network and port
  creations fail from the dashboard since dashboard is still using
  "n1kv:profile_id" rather than "n1kv:profile".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470690] Re: No 'OS-EXT-VIF-NET' extension in v2.1

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470690

Title:
  No 'OS-EXT-VIF-NET' extension in v2.1

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  The v2 API has an extension for virtual interfaces, 'OS-EXT-VIF-NET',
  but it is not present in the v2.1 API.

  Because of this, the v2 and v2.1 responses of the virtual interface
  API differ.

  v2 List virtual interface Response (with all extensions enabled)

  {
      "virtual_interfaces": [
          {
              "id": "%(id)s",
              "mac_address": "%(mac_addr)s",
              "OS-EXT-VIF-NET:net_id": "%(id)s"
          }
      ]
  }

  v2.1 List virtual interface Response

  {
      "virtual_interfaces": [
          {
              "id": "%(id)s",
              "mac_address": "%(mac_addr)s"
          }
      ]
  }

  As v2.1 was released in Kilo, we should backport this fix to the kilo
  branch as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465446] Re: Hyper-V: After live migration succeeded, the instance dirs on the source host are not cleaned up

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465446

Title:
  Hyper-V: After live migration succeeded, the instance dirs on the
  source host are not cleaned up

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  After the instance has successfully live migrated to a new host, the
  instance dirs on the source host should be removed. Not doing so
  leaves useless clutter and wasted disk space on the source node. This
  issue is more notable when hundreds or thousands of instances have
  been deployed to a host.
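
  A minimal, hedged sketch of the cleanup being asked for (paths and the
  point at which it runs are assumptions, not the actual driver code):
  once the migration has completed on the destination, remove the
  instance's directory on the source host.

      import os
      import shutil

      def cleanup_source_instance_dir(instances_path, instance_name):
          # Only called after the live migration has fully succeeded.
          instance_dir = os.path.join(instances_path, instance_name)
          if os.path.isdir(instance_dir):
              shutil.rmtree(instance_dir)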

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467451] Re: Hyper-V: fail to detach virtual hard disks

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467451

Title:
  Hyper-V: fail to detach virtual hard disks

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  The Nova Hyper-V driver fails to detach virtual hard disks when using
  the virtualization v1 WMI namespace.

  The reason is that it cannot find the attached resource, because it
  uses the wrong resource object connection attribute.

  This affects Windows Server 2008 as well as Windows Server 2012 when
  the old namespace is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466305] Re: Booting from volume can no longer be bigger than flavor size

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466305

Title:
  Booting from volume can no longer be bigger than flavor size

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  After upgrading to Juno you can no longer boot from a volume that is
  bigger than the flavour's disk size.

  There should be no need to take the flavour disk size into account
  when booting from a volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459491] Re: Unexpected result when create server booted from volume

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459491

Title:
  Unexpected result when create server booted from volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  Environment:
  flavor: 1 --- 1G disk
  volume: aaa --- 2G, bootable, created from image bbb
  image: bbb --- 13M

  When booting from the volume like this:
  nova boot --flavor 1 --nic net-id=xxx --boot-volume aaa
  it raises an error: FlavorDiskTooSmall

  When booting from the volume like this:
  nova boot --flavor 1 --nic net-id=xxx --block-device id=bbb,source=image,dest=volume,size=2,bootindex=0 test2
  it goes well.

  But the second case is the same as the first one, so either the first
  or the second behaviour is unexpected.

  I think the second one should also raise the 'FlavorDiskTooSmall'
  error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487522] Re: Objects: obj_reset_changes signature doesn't match

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487522

Title:
  Objects: obj_reset_changes signature doesn't match

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  If an object contains a Flavor object within it and obj_reset_changes
  is called with recursive=True, it will fail with the following error.
  This is because Flavor.obj_reset_changes is missing the recursive
  param in its signature. The Instance object is also missing this
  parameter in its method.

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/objects/test_request_spec.py", line 284, in test_save
      req_obj.obj_reset_changes(recursive=True)
    File "nova/objects/base.py", line 224, in obj_reset_changes
      value.obj_reset_changes(recursive=True)
  TypeError: obj_reset_changes() got an unexpected keyword argument 'recursive'
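
  A hedged sketch of the signature issue, using stand-in classes rather
  than the real nova objects: the override has to accept (and forward)
  the same keyword arguments as the base method, otherwise calling it
  with recursive=True fails exactly as in the traceback.

      class BaseObject(object):
          def obj_reset_changes(self, fields=None, recursive=False):
              pass  # base bookkeeping would go here

      class FlavorLike(BaseObject):
          # Accept the same kwargs as the base class and delegate to it,
          # instead of defining obj_reset_changes(self, fields=None) only.
          def obj_reset_changes(self, fields=None, recursive=False):
              super(FlavorLike, self).obj_reset_changes(
                  fields=fields, recursive=recursive)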

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463044] Re: Hyper-V: the driver fails to initialize on Windows Server 2008 R2

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463044

Title:
  Hyper-V: the driver fails to initialize on Windows Server 2008 R2

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  The Hyper-V driver uses the Microsoft\Windows\SMB WMI namespace in
  order to handle SMB shares. The issue is that this namespace is not
  available on Windows versions prior to Windows Server 2012.

  For this reason, the Hyper-V driver fails to initialize on Windows
  Server 2008 R2.

  Trace: http://paste.openstack.org/show/271422/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474228] Re: inline edit failed in user table because description doesn't exist

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => Fix Committed

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474228

Title:
  inline edit failed in user table because description doesn't exist

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  Inline edit fails in the user table because description doesn't exist.

  Environment:
  ubuntu devstack stable/kilo

  horizon commit id: c2b543bb8f3adb465bb7e8b3774b3dd1d5d999f6
  keystone commit id: 8125a8913d233f3da0eaacd09aa8e0b794ea98cb

  $keystone --version
  
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/keystoneclient/shell.py:64:
 DeprecationWarning: The keystone CLI is deprecated in favor of 
python-openstackclient. For a Python library, continue using 
python-keystoneclient.
    'python-keystoneclient.', DeprecationWarning)
  1.6.0

  
  How to reproduce the bug:

  
  1. create a new user. (important)
  2. Try to edit user using inline edit.

  
  Note: 

  If the user has ever been edited through the update user form, the
  exception will not be raised on inline edit, because the update form
  sets description to an empty string.
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/forms.py#L195

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/forms.py#L228

  
  Traceback:
  Internal Server Error: /identity/users/
  Traceback (most recent call last):
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/tables/views.py", line 224, in post
  return self.get(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/tables/views.py", line 160, in get
  handled = self.construct_tables()
    File "/home/user/github/horizon/horizon/tables/views.py", line 145, in 
construct_tables
  preempted = table.maybe_preempt()
    File "/home/user/github/horizon/horizon/tables/base.py", line 1533, in 
maybe_preempt
  new_row)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1585, in 
inline_edit_handle
  error = exceptions.handle(request, ignore=True)
    File "/home/user/github/horizon/horizon/exceptions.py", line 361, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1580, in 
inline_edit_handle
  cell_name)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1606, in 
inline_update_action
  self.request, datum, obj_id, cell_name, new_cell_value)
    File "/home/user/github/horizon/horizon/tables/actions.py", line 952, in 
action
  self.update_cell(request, datum, obj_id, cell_name, new_cell_value)
    File 
"/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py",
 line 210, in update_cell
  horizon_exceptions.handle(request, ignore=True)
    File "/home/user/github/horizon/horizon/exceptions.py", line 361, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File 
"/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py",
 line 200, in update_cell
  description=user_obj.description,
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/keystoneclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: description
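
  A hedged sketch of a defensive fix for the AttributeError above
  (illustrative, not the actual Horizon patch): keystone users created
  without a description carry no such attribute at all, so it has to be
  read with a fallback instead of being accessed directly.

      def user_description(user_obj):
          # Newly created users may have no 'description' attribute; the
          # client raises AttributeError on direct access, as shown above.
          return getattr(user_obj, 'description', None)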

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381413] Re: Switch Region dropdown doesn't work

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => Fix Committed

** Changed in: horizon/kilo
Milestone: None => ongoing

** Changed in: horizon/kilo
Milestone: ongoing => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381413

Title:
  Switch Region dropdown doesn't work

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  When Horizon is set up to work with multiple regions (by editing
  AVAILABLE_REGIONS in settings.py), a region selector drop-down appears
  in the top right corner. But it doesn't work now.

  Suppose I log in to Region1; if I then try to switch to Region2, it
  redirects me to the login view of django-openstack-auth:
  https://github.com/openstack/horizon/blob/2014.2.rc1/horizon/templates/horizon/common/_region_selector.html#L11

  There I am immediately redirected to settings.LOGIN_REDIRECT_URL
  because I am already authenticated against Region1, so I cannot view
  Region2 resources when switching via the top right dropdown.
  Selecting the region at the login page works, though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1381413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436187] Re: 'AttributeError' is raised while unshelving an instance booted from volume

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436187

Title:
  'AttributeError' is raised while unshelving an instance booted from
  volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  An 'AttributeError' exception is raised while unshelving an instance
  which is booted from a volume.

  Steps to reproduce:
  
  1.Create bootable volume
  2.Create instance from bootable volume
  3.Shelve instance
  4.Try to unshelve instance

  Error log on n-cpu service:
  ---

  2015-03-24 23:32:13.646 ERROR nova.compute.manager 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] Instance failed to spawn
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] Traceback (most recent call last):
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4368, in _unshelve_instance
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] block_device_info=block_device_info)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2342, in spawn
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] block_device_info)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/blockinfo.py", line 622, in get_disk_info
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] instance=instance)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/blockinfo.py", line 232, in 
get_disk_bus_for_device_type
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] disk_bus = 
image_meta.get('properties', {}).get(key)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] AttributeError: 'NoneType' object has no 
attribute 'get'
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]
  2015-03-24 23:32:13.649 DEBUG oslo_concurrency.lockutils 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] Lock 
"183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a" released by "do_unshelve_instance" :: 
held 1.182s from (pid=11271) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:456
  2015-03-24 23:32:13.650 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
9c227430eaf34c64b94f36661ef2ec8f from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.650 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
7329362a2cab48968ce31760bcac8628. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.696 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
d2388c787036413a9bcf95f55e38027b from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.696 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
c466ff4a11574ff3a1032e85f3d9bd87. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.746 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
0367b08fd7dd428ab8ef494bb42f1499 from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.746 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
b7395e5e66da4a47ba4132568713d4c4. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.789 DEBUG nova.openstack.common.periodic_task 
[req-db2bb34f-1f3d-4ac0-99d0-f6fe78f8393d None None] Running periodic task 
ComputeManager._poll_volume_usage from (pid=11271) 

[Yahoo-eng-team] [Bug 1459917] Re: Can't boot with bdm when using image in local

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459917

Title:
  Can't boot with bdm when using image in local

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  When booting a VM with a bdm like this:

  nova boot --flavor 1 --nic net-id= --image 
  --block-device source=image,dest=local,id=,size=2,bootindex=0 test

  it raises an error: Mapping image to local is not supported.

  But in nova the code comment says:

    # if this bdm is generated from --image, then
    # source_type = image and destination_type = local is allowed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461081] Re: SMBFS volume attach race condition

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461081

Title:
  SMBFS volume attach race condition

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  When the SMBFS volume backend is used and a volume is detached, the
  corresponding SMB share is detached if it is no longer used.

  This can cause issues if, at the same time, a different volume stored
  on the same share is being attached, as the corresponding disk image
  will not be available.

  This affects the Libvirt driver as well as the Hyper-V one. In the
  case of Hyper-V, the issue can easily be fixed by using the share path
  as a lock when performing attach/detach volume operations.

  Trace: http://paste.openstack.org/show/256096/
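
  A hedged sketch of the locking approach mentioned above (function
  names and placement are assumptions, not the actual driver code):
  serialize attach and detach operations for volumes living on the same
  SMB share by locking on the share path, so an unmount can never race a
  concurrent attach.

      from oslo_concurrency import lockutils

      def attach_volume_on_share(share_path, do_attach):
          # Use the share path as the lock name so that only operations
          # touching the same share serialize against each other.
          @lockutils.synchronized(share_path)
          def _locked_attach():
              return do_attach()
          return _locked_attach()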

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504941] [NEW] RBAC-RFE- neutron net-show command should display all tenants that use the network

2015-10-11 Thread Eran Kuris
Public bug reported:

On RDO Liberty I tested the neutron RBAC feature.
When a network is assigned to more than one tenant we still see only one tenant in neutron net-show:

[root@cougar16 ~(keystone_admin)]# neutron net-show 590ca7b9-1682-4c40-8213-02feaa7a96cc
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
| mtu                       | 0                                    |
| name                      | internal_ipv4_a                      |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 70                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 9a1a387e-88cf-484a-8b12-5a1834be0233 |
| tenant_id                 | fa4add4659704239b771b0bccb8b6829     |
+---------------------------+--------------------------------------+

This network is shared with 2 tenants:
[root@cougar16 ~(keystone_admin)]# neutron rbac-list
+--------------------------------------+--------------------------------------+
| id                                   | object_id                            |
+--------------------------------------+--------------------------------------+
| 4f1a9c9d-e820-46e4-b431-b3142c6bb245 | 818dd42f-f627-45d4-a578-dd475b9e19e4 |
| 8c995ab1-dea6-411b-854c-a405cf5365fa | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
| abb375b9-95d0-4297-80f1-3f22f0f84a9e | b071a769-0d50-4d25-8730-fed3dea13a2f |
| f3122b92-f47a-4a0f-a422-c9f7ed482341 | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
+--------------------------------------+--------------------------------------+


[root@cougar16 ~(keystone_admin)]# rpm -qa |grep neutron 
python-neutronclient-3.1.1-dev1.el7.centos.noarch
python-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch
openstack-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch
openstack-neutron-ml2-7.0.0.0-rc2.dev21.el7.centos.noarch
openstack-neutron-common-7.0.0.0-rc2.dev21.el7.centos.noarch
openstack-neutron-openvswitch-7.0.0.0-rc2.dev21.el7.centos.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504941

Title:
  RBAC-RFE- neutron net-show command should display all tenants that
  use the network

Status in neutron:
  New

Bug description:
  On RDO Liberty I tested the neutron RBAC feature.
  When a network is assigned to more than one tenant we still see only one tenant in neutron net-show:

  [root@cougar16 ~(keystone_admin)]# neutron net-show 590ca7b9-1682-4c40-8213-02feaa7a96cc
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | id                        | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  | mtu                       | 0                                    |
  | name                      | internal_ipv4_a                      |
  | provider:network_type     | vxlan                                |
  | provider:physical_network |                                      |
  | provider:segmentation_id  | 70                                   |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   | 9a1a387e-88cf-484a-8b12-5a1834be0233 |
  | tenant_id                 | fa4add4659704239b771b0bccb8b6829     |
  +---------------------------+--------------------------------------+

  This network is shared with 2 tenants:
  [root@cougar16 ~(keystone_admin)]# neutron rbac-list
  +--------------------------------------+--------------------------------------+
  | id                                   | object_id                            |
  +--------------------------------------+--------------------------------------+
  | 4f1a9c9d-e820-46e4-b431-b3142c6bb245 | 818dd42f-f627-45d4-a578-dd475b9e19e4 |
  | 8c995ab1-dea6-411b-854c-a405cf5365fa | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  | abb375b9-95d0-4297-80f1-3f22f0f84a9e | b071a769-0d50-4d25-8730-fed3dea13a2f |
  | f3122b92-f47a-4a0f-a422-c9f7ed482341 | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  +--------------------------------------+--------------------------------------+

  
  [root@cougar16 ~(keystone_admin)]# rpm -qa |grep neutron 
  python-neutronclient-3.1.1-dev1.el7.centos.noarch
  python-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch
  

[Yahoo-eng-team] [Bug 1475411] Re: During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  The post_live_migration step for Nova libvirt driver is currently
  making a bad assumption about the source and destination connector
  information. The destination connection info may be different from the
  source which ends up causing LUNs to be left dangling on the source as
  the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occuring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/
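
  A hedged sketch of the idea described above (client setup and names
  are assumptions, not the nova patch itself): when tearing down volumes
  on the source host, ask Cinder for connection info for the source
  host's own connector instead of reusing whatever the BDM now holds.

      def source_connection_info(cinder_client, volume_id, src_connector):
          # python-cinderclient's volumes.initialize_connection(volume,
          # connector) returns the connection info Cinder would hand to
          # that particular host; using the source connector avoids
          # disconnecting with the destination's LUN details.
          return cinder_client.volumes.initialize_connection(volume_id,
                                                             src_connector)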

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins 
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge "Port crypto to Python 3"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442787] Re: Mapping openstack_user attribute in k2k assertions with different domains

2015-10-11 Thread Chuck Short
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
   Status: New => Fix Committed

** Changed in: keystone/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1442787

Title:
  Mapping openstack_user attribute in k2k assertions with different
  domains

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Committed

Bug description:
  We can have two users with the same username in different domains. So
  if we have a "User A" in "Domain X" and a "User A" in "Domain Y",
  there is no way to tell which "User A" is being used in a SAML
  assertion generated by this IdP (we have only the openstack_user
  attribute in the SAML assertion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1442787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465185] Re: No reverse match exception while try to edit the QoS spec

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

** Changed in: horizon/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1465185

Title:
  No reverse match exception while try to edit the QoS spec

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  While trying to edit the QoS spec, I am getting a NoReverseMatch
  exception since the URL is wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1465185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492065] Re: Create instance testcase -- "test_launch_form_keystone_exception" broken

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => Fix Committed

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492065

Title:
  Create instance testcase -- "test_launch_form_keystone_exception"
  broken

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  The test_launch_form_keystone_exception test method calls the handle
  method of the LaunchInstance class. Changes made to the handle method
  in [1] introduced a new neutron API call that was not being mocked
  out, causing an unexpected exception in the
  _cleanup_ports_on_failed_vm_launch function of the create_instance
  module while running the test_launch_form_keystone_exception unit
  test.

  [1] https://review.openstack.org/#/c/202347/
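
  A hedged, generic illustration of the kind of fix this calls for (the
  patched target, openstack_dashboard.api.neutron.port_list, is an
  assumed example and not necessarily the call added by [1]): whatever
  new API call the handle method now makes has to be stubbed out in the
  unit test as well.

      import mock

      class LaunchFormTestSketch(object):
          @mock.patch('openstack_dashboard.api.neutron.port_list',
                      return_value=[])
          def test_launch_form_keystone_exception(self, mock_port_list):
              # ...call LaunchInstance.handle() here and assert that the
              # keystone exception is handled as before...
              pass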

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482657] Re: Attribute error on virtual_size

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

** Changed in: horizon/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482657

Title:
  Attribute error on virtual_size

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  Version: stable/kilo
  Run with ./run_test.py --runserver

  Running an old Havana Glance backend will result in an AttributeError,
  since the attribute was introduced with the Icehouse release. See the
  error log at the bottom of this message. A simple check for the
  attribute will solve this issue and restore compatibility.

  Attached is a patch as proposal.

  Regards
  Christoph

  
  Error log:

  Internal Server Error: /project/instances/launch
  Traceback (most recent call last):
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 137, in get_response
  response = response.render()
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/response.py",
 line 103, in render
  self.content = self.rendered_content
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/response.py",
 line 80, in rendered_content
  content = template.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 148, in render
  return self._render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 142, in _render
  return self.nodelist.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 844, in render
  bit = self.render_node(node, context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/debug.py",
 line 80, in render_node
  return node.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/defaulttags.py",
 line 525, in render
  six.iteritems(self.extra_context))
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/defaulttags.py",
 line 524, in 
  values = dict((key, val.resolve(context)) for key, val in
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 596, in resolve
  obj = self.var.resolve(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 734, in resolve
  value = self._resolve_lookup(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 788, in _resolve_lookup
  current = current()
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 717, in 
get_entry_point
  step._verify_contributions(self.context)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 392, in 
_verify_contributions
  field = self.action.fields.get(key, None)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 368, in action
  context)
File 
"/home/coby/ao/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 147, in __init__
  request, context, *args, **kwargs)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 138, in 
__init__
  self._populate_choices(request, context)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 151, in 
_populate_choices
  bound_field.choices = meth(request, context)
File 
"/home/coby/ao/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 428, in populate_image_id_choices
  if image.virtual_size:
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: virtual_size
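
  A hedged sketch of the simple check suggested above (illustrative, not
  the attached patch): read virtual_size defensively so images served by
  a pre-Icehouse Glance do not break the launch workflow.

      def image_virtual_size(image):
          # Havana-era images have no 'virtual_size' attribute at all, so
          # fall back to None instead of accessing it directly.
          return getattr(image, 'virtual_size', None)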

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490403] Re: Gate failing on test_routerrule_detail

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => Incomplete

** Changed in: horizon/kilo
   Status: Incomplete => Fix Committed

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490403

Title:
  Gate failing on test_routerrule_detail

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  The gate/jenkins checks are currently failing with this error:

  ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
  --
  Traceback (most recent call last):
File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, 
in instance_stub_out
  return fn(self, *args, **kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 711, in test_routerrule_detail
  res = self._get_detail(router)
File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, 
in instance_stub_out
  return fn(self, *args, **kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 49, in _get_detail
  args=[router.id]))
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 470, in get
  **extra)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 286, in get
  return self.generic('GET', path, secure=secure, **r)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 358, in generic
  return self.request(**r)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 440, in request
  six.reraise(*exc_info)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 146, in get
  context = self.get_context_data(**kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/views.py", 
line 140, in get_context_data
  context = super(DetailView, self).get_context_data(**kwargs)
File "/home/ubuntu/horizon/horizon/tables/views.py", line 107, in 
get_context_data
  context = super(MultiTableMixin, self).get_context_data(**kwargs)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 56, in 
get_context_data
  exceptions.handle(self.request)
File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 54, in 
get_context_data
  context["tab_group"].load_tab_data()
File "/home/ubuntu/horizon/horizon/tabs/base.py", line 128, in load_tab_data
  exceptions.handle(self.request)
File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/home/ubuntu/horizon/horizon/tabs/base.py", line 125, in load_tab_data
  tab._data = tab.get_context_data(self.request)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 82, in get_context_data
  data["rulesmatrix"] = self.get_routerrulesgrid_data(rules)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 127, in get_routerrulesgrid_data
  source, target, rules))
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 159, in _get_subnet_connectivity
  if (int(dst.network) >= int(rd.broadcast) or
  TypeError: int() argument must 

[Yahoo-eng-team] [Bug 1494653] Re: Remove unnecessary 'context' parameter from quotas reserve method call

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1494653

Title:
  Remove unnecessary 'context' parameter from quotas reserve method call

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  In patch [1] 'context' parameter was removed from quota-related remotable 
method signatures.
  In patch [2] use of 'context' parameter was removed from quota-related 
remotable method calls.

  Still there are some occurrences where the 'context' parameter is passed
  to the quotas.reserve method, which leads to the error "ValueError:
  Circular reference detected".
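
  A toy reproduction of the serialization failure (plain json, no Nova
  code): an RPC message whose arguments contain an object that points back
  at the message cannot be JSON-serialized, which is the cycle that
  dropping the redundant context argument from quotas.reserve(...) avoids.

    import json

    msg = {'method': 'object_action', 'args': {}}
    msg['args']['context'] = msg          # cycle, as with the stale context
    try:
        json.dumps(msg)
    except ValueError as exc:
        print(exc)                        # Circular reference detected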

  For example: while restarting nova-compute, if there is any instance whose
  vm_state is 'DELETED' but which is not marked as deleted in the db, then
  _init_instance raises the error below.

  2015-09-08 00:36:34.133 ERROR nova.compute.manager 
[req-3222b8a4-0542-48cf-a2e1-c92e5fd91e5e None None] [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Failed to complete a deletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Traceback (most recent call last):
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/opt/stack/nova/nova/compute/manager.py", line 952, in _init_instance
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] self._complete_partial_deletion(context, 
instance)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/opt/stack/nova/nova/compute/manager.py", line 879, in _complete_partial_d
  eletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] bdms = 
objects.BlockDeviceMappingList.get_by_instance_uuid(
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", lin
  e 197, in wrapper
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] ctxt, self, fn.__name__, args, kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 246, in object_action
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] objmethod=objmethod, args=args, 
kwargs=kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=self.retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] timeout=timeout, retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 431, in send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 399, in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] msg = rpc_common.serialize_msg(msg)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/common.py", 
line 286, in serialize_msg
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] _MESSAGE_KEY: jsonutils.dumps(raw_msg)}
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] File 
"/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 
185, in dumps
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] return json.dumps(obj, default=default, 
**kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1481812] Re: nova servers pagination does not work with changes-since and deleted marker

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481812

Title:
  nova servers pagination does not work with changes-since and deleted
  marker

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  Instances test1 - test6, where test2 and test5 have been deleted:

  # nova list
  +--------------------------------------+-------+---------+------------+-------------+------------------+
  | ID                                   | Name  | Status  | Task State | Power State | Networks         |
  +--------------------------------------+-------+---------+------------+-------------+------------------+
  | 7e12d6a0-126f-44d0-b566-15cd5e4ab82e | test1 | SHUTOFF | -          | Shutdown    | private=10.0.0.3 |
  | 8b35f7fb-65d0-4fc3-ac22-390743c695db | test3 | ACTIVE  | -          | Running     | private=10.0.0.5 |
  | 2ab70dfe-2983-4886-a930-7deb15279763 | test4 | ACTIVE  | -          | Running     | private=10.0.0.6 |
  | 489e22cf-5e22-43a4-8c46-438f62d66e59 | test6 | ACTIVE  | -          | Running     | private=10.0.0.8 |
  +--------------------------------------+-------+---------+------------+-------------+------------------+

  # Get all instances with changes-since=2015-01-01 :
  # curl -s -H "X-Auth-Token:92ecba357e5b49f88a21cedfa63bf36e" 
'http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01';
  {"servers": [{"id": "489e22cf-5e22-43a4-8c46-438f62d66e59", "links": 
[{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59;,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59;,
 "rel": "bookmark"}], "name": "test6"}, {"id": 
"9bda60eb-6ff7-4b84-b081-0120b62155a3", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3;,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3;,
 "rel": "bookmark"}], "name": "test5"}, {"id": 
"2ab70dfe-2983-4886-a930-7deb15279763", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/2ab70dfe-2983-4886-a930-7deb15279763;,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/2ab70dfe-2983-4886-a
 930-7deb15279763", "rel": "bookmark"}], "name": "test4"}, {"id": 
"8b35f7fb-65d0-4fc3-ac22-390743c695db", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/8b35f7fb-65d0-4fc3-ac22-390743c695db;,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/8b35f7fb-65d0-4fc3-ac22-390743c695db;,
 "rel": "bookmark"}], "name": "test3"}, {"id": 
"18d9ffbb-e1d4-4218-bb66-f792aab4e091", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/18d9ffbb-e1d4-4218-bb66-f792aab4e091;,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/18d9ffbb-e1d4-4218-bb66-f792aab4e091;,
 "rel": "bookmark"}], "name": "test2"}, {"id": 
"7e12d6a0-126f-44d0-b566-15cd5e4ab82e", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/7e12d6a0-126f-44d0-b566-15cd5e4ab82e;,
 "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a
 1577c992c16271a/servers/7e12d6a0-126f-44d0-b566-15cd5e4ab82e", "rel": 
"bookmark"}], "name": "test1"}]}

  # query the instances in chunks of 2 with changes-since and limit=2

  # curl -s -H "X-Auth-Token:92ecba357e5b49f88a21cedfa63bf36e" 
'http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01&limit=2'
  {"servers_links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01=2=9bda60eb-6ff7-4b84-b081-0120b62155a3;,
 "rel": "next"}], "servers": [{"id": "489e22cf-5e22-43a4-8c46-438f62d66e59", 
"links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59;,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59;,
 "rel": "bookmark"}], "name": "test6"}, {"id": 
"9bda60eb-6ff7-4b84-b081-0120b62155a3", "links": [{"href": 
"http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3;,
 "rel": "self"}, {"href": 
"http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3;,
 "rel": 

[Yahoo-eng-team] [Bug 1491511] Re: Behavior change with latest nova paste config

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491511

Title:
  Behavior change with latest nova paste config

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  http://logs.openstack.org/55/219655/1/check/gate-shade-dsvm-
  functional-nova/1154770/console.html#_2015-09-02_12_10_56_113

  This started failing about 12 hours ago. Looking at it with Sean, we
  think it's because it actually never worked, but nova was failing
  silently before. It's now throwing an error, which while more correct
  (you know you didn't delete something) is a behavior change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493694] Re: On compute restart, quotas are not updated when instance vm_state is 'DELETED' but instance is not destroyed in db

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493694

Title:
  On compute restart, quotas are not updated when instance vm_state is
  'DELETED' but instance is not destroyed in db

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  This is a timing issue and can occur if an instance delete call reaches
  the _delete_instance method in the nova/compute/manager.py module and
  nova-compute crashes after setting the instance vm_state to 'DELETED' but
  before destroying the instance in the db.

  Now on restarting the nova-compute service, the _init_instance method
  checks whether the instance vm_state is 'DELETED'; if so, it calls the
  _complete_partial_deletion method, which destroys the instance in the db
  but then raises "ValueError: Circular reference detected", and the quota
  for that instance is not updated, which is not as expected.
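
  A standalone sketch of the expected behaviour (all names assumed, not
  Nova's code): completing a partial deletion should destroy the db row
  and also commit the quota reservation, so usage is released even when
  the service restarts mid-delete.

    class FakeQuotas(object):
        def __init__(self):
            self.committed = False

        def commit(self):
            self.committed = True

    def complete_partial_deletion(instance, quotas, db_destroy):
        db_destroy(instance)       # finally mark the instance deleted in db
        quotas.commit()            # and release the reserved usage

    quotas = FakeQuotas()
    complete_partial_deletion({'uuid': '00c7a9ae'}, quotas, lambda inst: None)
    print(quotas.committed)        # True; otherwise the quota stays consumed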

  Steps to reproduce:
  1) Put a break point in nova/compute/manager.py module in _delete_instance 
method, just after updating instance vm_state to 'DELETED' but before 
destroying instance in db.
  2) Create instance and wait until instance vm_state become 'ACTIVE'.
  $ nova boot --image  --flavor  

  3) Send request to delete instance.
  $ nova delete 

  4) When delete request reaches to break point in nova-compute, make sure 
instance vm_state is marked as 'DELETED' and stop the nova-compute service.
  5) Restart the nova-compute service; in the _init_instance call the error below 
(ValueError: Circular reference detected) will be raised, and the instance will be 
marked as deleted in the db but the quota for that instance will never be updated.

  2015-09-08 00:36:34.133 ERROR nova.compute.manager 
[req-3222b8a4-0542-48cf-a2e1-c92e5fd91e5e None None] [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Failed to complete a deletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] Traceback (most recent call last):
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/opt/stack/nova/nova/compute/manager.py", line 952, in _init_instance
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] 
self._complete_partial_deletion(context, instance)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/opt/stack/nova/nova/compute/manager.py", line 879, in _complete_partial_d
  eletion
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] bdms = 
objects.BlockDeviceMappingList.get_by_instance_uuid(
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", lin
  e 197, in wrapper
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] ctxt, self, fn.__name__, args, kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 246, in object_action
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] objmethod=objmethod, args=args, 
kwargs=kwargs)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=self.retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] timeout=timeout, retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 431, in send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536] retry=retry)
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 
00c7a9ae-bff1-461f-ab95-0e8f15327536]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 399, in _send
  2015-09-08 00:36:34.133 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1504939] [NEW] Instance failed to spawn with qemu

2015-10-11 Thread Mingyu Li
Public bug reported:

I installed Kilo following the instructions on 
http://docs.openstack.org/kilo/install-guide/install/apt/content/
It failed to spawn an instance with qemu.

The nova.conf on the compute node was as follows:

root@cmp1:~# cat /etc/nova/nova.conf 
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
log_dir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata

rpc_backend=rabbit

auth_strategy = keystone

my_ip = 192.168.201.102
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.201.102
novncproxy_base_url = http://ctl:6080/vnc_auto.html

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

compute_driver = libvirt.LibvirtDriver

[glance]
host = ctl

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[keystone_authtoken]
auth_uri = http://ctl:5000
auth_url = http://ctl:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 1


[oslo_messaging_rabbit]
rabbit_host = ctl
rabbit_userid = openstack
rabbit_password = 1

[libvirt]
virt_type = qemu


[neutron]
url = http://ctl:9696
auth_strategy = keystone
admin_auth_url = http://ctl:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = 1


The error logs were as follows:

2015-10-11 20:58:13.239 31702 ERROR nova.virt.libvirt.driver 
[req-5e9ccf17-f58a-4d4b-82ba-e028da6daff2 311571301159432787db5e3ed1078ee8 
8fab0e9be9164f69a2d14ac383175cdc - - -] Error defining a domain with XML: 

  f530cc86-ee2e-4ef4-8655-83f60bfed7fa
  instance-0006
  65536
  1
  
http://openstack.org/xmlns/libvirt/nova/1.0;>
  
  an-instance
  2015-10-11 12:58:12
  
64
0
0
0
1
  
  
administrator
training
  
  

  
  

  OpenStack Foundation
  OpenStack Nova
  2015.1.1
  564db6eb-b1e8-9c2d-85a4-f5fdc0775533
  f530cc86-ee2e-4ef4-8655-83f60bfed7fa

  
  
hvm


  
  


  
  
1024
  
  



  
  

  
  

  
  
  


  
  
  
  


  





  


  

  


2015-10-11 20:58:13.240 31702 ERROR nova.compute.manager 
[req-5e9ccf17-f58a-4d4b-82ba-e028da6daff2 311571301159432787db5e3ed1078ee8 
8fab0e9be9164f69a2d14ac383175cdc - - -] [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa] Instance failed to spawn
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa] Traceback (most recent call last):
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2461, in 
_build_resources
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa] yield resources
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2333, in 
_build_and_run_instance
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa] block_device_info=block_device_info)
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2385, in 
spawn
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa] block_device_info=block_device_info)
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4403, in 
_create_domain_and_network
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa] power_on=power_on)
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4334, in 
_create_domain
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa] LOG.error(err)
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 
f530cc86-ee2e-4ef4-8655-83f60bfed7fa]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-10-11 20:58:13.240 31702 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1495388] Re: The instance hostname didn't match the RFC 952 and 1123's definition

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Incomplete

** Changed in: nova/kilo
   Status: Incomplete => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495388

Title:
  The instance hostname didn't match the RFC 952 and 1123's definition

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  The instance hostname is converted from the instance's name. There is a
  method used to do that:
  https://github.com/openstack/nova/blob/master/nova/utils.py#L774

  But it looks like this method doesn't cover all the cases described in
  the RFCs.

  For example, if the hostname is just one character, like 'A', this
  method returns 'A' as well, and that isn't allowed by the RFC.
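
  An illustrative sanitizer sketch (the fallback value and exact rules are
  assumptions; this is not nova.utils.sanitize_hostname itself) that also
  covers the single-character case:

    import re

    def sanitize_hostname(name, default='host'):
        """Best-effort RFC 952/1123 hostname from an instance display name."""
        name = re.sub(r'[ _]', '-', name)           # spaces/underscores -> '-'
        name = re.sub(r'[^A-Za-z0-9.-]', '', name)  # drop other characters
        name = name.strip('.-')                     # no leading/trailing '-'/'.'
        name = name[:63]                            # label length limit
        return name if len(name) > 1 else default   # RFC 952: no 1-char names

    print(sanitize_hostname('A'))          # 'host' (assumed fallback)
    print(sanitize_hostname('my web 1'))   # 'my-web-1'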

  And the hostname was updated in the wrong place: 
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L641
  It just updates the instance db entry again after the instance entry is 
created. We can populate the hostname before instance creation, and then we 
save one db operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504942] [NEW] no support for rbac in horizon

2015-10-11 Thread Eran Kuris
Public bug reported:

We need to add support for the Neutron RBAC feature in the Horizon GUI:
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html

Bug in version:
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1504942

Title:
  no support for rbac in horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We need to add support for the Neutron RBAC feature in the Horizon GUI:
  http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html

  Bug in version:
  http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1504942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454149] Re: self._event is None that causes "AttributeError: 'NoneType' object has no attribute 'pop'"

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => 2015.1.2

** Changed in: nova/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454149

Title:
  self._event is None that causes "AttributeError: 'NoneType' object has
  no attribute 'pop'"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  Here is the log:
  2015-05-11 17:17:50.655 14671 ERROR nova.compute.manager 
[req-ed95e1f2-11d3-404c-ac78-8c1d5e24bfbf ff514b152688486b9dd9752b3d67fa78 
689d7e1036e64e0fbf7fd8b4f51d2e57 - - -] [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] Setting instance vm_state to ERROR
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] Traceback (most recent call last):
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2784, in 
do_terminate_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] self._delete_instance(context, 
instance, bdms, quotas)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] rv = f(*args, **kwargs)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2753, in 
_delete_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] quotas.rollback()
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] six.reraise(self.type_, self.value, 
self.tb)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2716, in 
_delete_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] events = 
self.instance_events.clear_events_for_instance(instance)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 562, in 
clear_events_for_instance
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] return _clear_events()
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in 
inner
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] return f(*args, **kwargs)
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 561, in 
_clear_events
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] return 
self._events.pop(instance.uuid, {})
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c] AttributeError: 'NoneType' object has no 
attribute 'pop'
  2015-05-11 17:17:50.655 14671 TRACE nova.compute.manager [instance: 
b83786e1-a222-4409-8d46-65c08c70fa5c]

  Is there any way to avoid self._events being None here? This might be
  a bug.
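
  A defensive sketch (class shape assumed, not the actual InstanceEvents
  code): treating a torn-down event store as "no pending events" would
  avoid the crash.

    class InstanceEvents(object):
        def __init__(self):
            self._events = {}

        def cancel_all(self):
            self._events = None         # store dropped, e.g. on shutdown

        def clear_events_for_instance(self, instance_uuid):
            if self._events is None:    # nothing is tracked any more
                return {}
            return self._events.pop(instance_uuid, {})

    ev = InstanceEvents()
    ev.cancel_all()
    print(ev.clear_events_for_instance('b83786e1'))   # {} instead of raising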

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474512] Re: STATIC_URL statically defined for stack graphics

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

** Changed in: horizon/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474512

Title:
  STATIC_URL statically defined for stack graphics

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  The svg and gif images are still using '/static/' as the base url. Since
  WEBROOT and STATIC_URL are both configurable, this needs to be fixed or
  the images won't be found when either has been set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288039] Re: live-migration cinder boot volume target_lun id incorrect

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288039

Title:
  live-migration cinder boot volume target_lun id incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  When nova runs the _post_live_migration cleanup on the source host, the
  block_device_mapping has incorrect data.

  I can reproduce this 100% of the time with a cinder iSCSI backend,
  such as 3PAR.

  This is a Fresh install on 2 new servers with no attached storage from Cinder 
and no VMs.
  I create a cinder volume from an image. 
  I create a VM booted from that Cinder volume.  That vm shows up on host1 with 
a LUN id of 0.
  I live migrate that vm.   The vm moves to host 2 and has a LUN id of 0.   The 
LUN on host1 is now gone.

  I create another cinder volume from image.
  I create another VM booted from the 2nd cinder volume.  The vm shows up on 
host1 with a LUN id of 0.  
  I live migrate that vm.  The VM moves to host 2 and has a LUN id of 1.  
  _post_live_migrate is called on host1 to clean up, and gets failures, because 
it's asking cinder to delete the volume
  on host1 with a target_lun id of 1, which doesn't exist.  It's supposed to be 
asking cinder to detach LUN 0.

  First migrate
  HOST2
  2014-03-04 19:02:07.870 WARNING nova.compute.manager 
[req-24521cb1-8719-4bc5-b488-73a4980d7110 admin admin] pre_live_migrate: 
{'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 
'mount_device': u'vda', 'connection_info': {u'd
  river_volume_type': u'iscsi', 'serial': 
u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260'
  , u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 
'device_type': u'disk', 'delete_on_termination': False}]}
  HOST1
  2014-03-04 19:02:16.775 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi',
   u'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': 
{u'target_discovered': True, u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}



  Second Migration
  This is in _post_live_migration on the host1.  It calls libvirt's driver.py 
post_live_migration with the volume information returned from the new volume on 
host2, hence the target_lun = 1.   It should be calling libvirt's driver.py to 
clean up the original volume on the source host, which has a target_lun = 0.
  2014-03-04 19:24:51.626 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi', u'serial': 
u'f0087595-804d-4bdb-9bad-0da2166313ea', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 1, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423772] Re: During live-migration Nova expects identical IQN from attached volume(s)

2015-10-11 Thread Chuck Short
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423772

Title:
  During live-migration Nova expects identical IQN from attached
  volume(s)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  When attempting to do a live-migration on an instance with one or more
  attached volumes, Nova expects that the IQN will be exactly the same
  when it attaches the volume(s) to the new host. This conflicts with
  Cinder settings such as "hp3par_iscsi_ips", which allow for multiple
  IPs for the purpose of load balancing.

  Example:
  An instance on Host A has a volume attached at 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  An attempt is made to migrate the instance to Host B.
  Cinder sends the request to attach the volume to the new host.
  Cinder gives the new host 
"/dev/disk/by-path/ip-10.10.120.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  Nova looks for the volume on the new host at the old location 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"

  The following error appears in n-cpu in this case:

  2015-02-19 17:09:05.574 ERROR nova.virt.libvirt.driver [-] [instance: 
b6fa616f-4e78-42b1-a747-9d081a4701df] Live Migration failure: Failed to open 
file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5426, in 
_live_migration
  recover_method(context, instance, dest, block_migration)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5393, in 
_live_migration
  CONF.libvirt.live_migration_bandwidth)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, 
in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, 
in proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, 
in execute
  six.reraise(c, e, tb)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, 
in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in 
migrateToURI2
  if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
  libvirtError: Failed to open file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Removing descriptor: 3

  
  When looking at the nova DB, this is the state of block_device_mapping prior 
to the migration attempt:

  mysql> select * from block_device_mapping where 
instance_uuid='b6fa616f-4e78-42b1-a747-9d081a4701df' and deleted=0;
  
+-+-+++-+---+-+--+-+---+---+--+-+-+--+--+-+--++--+
  | created_at  | updated_at  | deleted_at | id | device_name | 
delete_on_termination | snapshot_id | volume_id| 
volume_size | no_device | connection_info   

[Yahoo-eng-team] [Bug 1423453] Re: Delete ports when Launching VM fails when plugin is N1K

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => Fix Committed

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1423453

Title:
  Delete ports when Launching VM fails when plugin is N1K

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  When the plugin is Cisco N1KV, ports get created before launching the VM
  instance. But upon a launch failure, the ports are not cleaned up in the
  except block.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1423453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467935] Re: widget attributes changed

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => Fix Committed

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467935

Title:
  widget attributes changed

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
   In Django 1.8, widget attribute data-date-picker=True will be
  rendered as 'data-date-picker'. To preserve current behavior, use the
  string 'True' instead of the boolean value.
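
  A minimal example (the field and widget are chosen only for
  illustration) of the string form:

    from django import forms

    class DatePickerField(forms.CharField):
        # 'True' as a string keeps an explicit attribute value under Django
        # 1.8; a boolean True would render just the bare 'data-date-picker'.
        widget = forms.TextInput(attrs={'data-date-picker': 'True'})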

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474241] Re: Need a way to disable simple tenant usage

2015-10-11 Thread Chuck Short
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
Milestone: None => 2015.1.2

** Changed in: horizon/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474241

Title:
  Need a way to disable simple tenant usage

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  Frequent calls to Nova's API when displaying the simple tenant usage
  can lead to efficiency problems and even a crash on the Nova side,
  especially when there are a lot of deleted nodes in the database. We
  are working on resolving that, but in the meantime, it would be nice
  to have a way of disabling the simple tenant usage stats on the
  Horizon side as a workaround.

  Horizon enables that option depending on whether it's supported on the
  Nova side. In version 2.0 of the API we can simply disable the support
  for it on the Nova side, but that won't be possible in version 2.1
  anymore, so we need a configuration option on the Horizon side.
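
  A hedged workaround for local_settings.py, assuming the Horizon build in
  use honours the Nova extension blacklist setting (name to be verified
  against your release):

    OPENSTACK_NOVA_EXTENSIONS_BLACKLIST = ['SimpleTenantUsage']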

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274034] Re: Neutron firewall anti-spoofing does not prevent ARP poisoning

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274034

Title:
  Neutron firewall anti-spoofing does not prevent ARP poisoning

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed
Status in OpenStack Security Advisory:
  Invalid
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  The neutron firewall driver 'iptables_firewall' does not prevent ARP cache 
poisoning.
  When anti-spoofing rules were handled by Nova, a list of rules was added 
through the libvirt network filter feature:
  - no-mac-spoofing
  - no-ip-spoofing
  - no-arp-spoofing
  - nova-no-nd-reflection
  - allow-dhcp-server

  Actually, the neutron firewall driver 'iptables_firewall' handles only
  MAC and IP anti-spoofing rules.

  This is a security vulnerability, especially on shared networks.
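
  For illustration only (this is not the merged Neutron fix), the kind of
  per-port rule that pins ARP replies to the port's fixed IP, with a
  hypothetical tap device name and address:

    ebtables -A FORWARD -p ARP -i tapXXXXXXXX-XX --arp-ip-src ! 10.0.0.3 -j DROP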

  Reproduce an ARP cache poisoning and man in the middle:
  - Create a private network/subnet 10.0.0.0/24
  - Start 2 VM attached to that private network (VM1: IP 10.0.0.3, VM2: 
10.0.0.4)
  - Log on VM1 and install ettercap [1]
  - Launch command: 'ettercap -T -w dump -M ARP /10.0.0.4/ // output:'
  - Also log on to VM2 (with a VNC/spice console) and ping google.fr => ping is ok
  - Go back to VM1, and see VM2's ping to google.fr going through VM1 instead of 
being sent directly to the network gateway, and being forwarded by VM1 to the 
gw. The ICMP capture looks something like this [2]
  - Go back to VM2 and check the ARP table => the MAC address associated to the 
GW is the MAC address of VM1

  [1] http://ettercap.github.io/ettercap/
  [2] http://paste.openstack.org/show/62112/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481443] Re: Add configurability for HA networks in L3 HA

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481443

Title:
  Add configurability for HA networks in L3 HA

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The L3 HA mechanism creates a project network for HA (VRRP) traffic
  among routers. The HA project network uses the first (default)
  network type in 'tenant_network_types' and the next available segmentation
  ID. Depending on the environment, this combination may not provide a
  desirable path for HA traffic. For example, some operators may prefer
  to use a specific network for HA traffic, such that the HA networks
  will use tunneling while tenant networks use VLANs or vice versa.
  Alternatively, the physical_network tag of the HA networks may need to
  be selected so that HA networks will use a separate or different NIC.
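
  A hedged neutron.conf example of the added knobs (option names as
  introduced for this bug; verify them against your release):

    [DEFAULT]
    l3_ha = True
    # force a tunnel type for the HA network even if tenants use VLANs ...
    l3_ha_network_type = vxlan
    # ... or tag it onto a dedicated physical network instead
    # l3_ha_network_physical_name = physnet-ha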

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455675] Re: IptablesManager._find_last_entry taking up majority of time to plug ports

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455675

Title:
  IptablesManager._find_last_entry taking up majority of time to plug
  ports

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  During profiling of the OVS agent, I found that
  IptablesManager._find_last_entry is taking up roughly 40% of the time
  when plugging a large number of ports.
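
  A profiling-motivated sketch (data shapes assumed, not the merged
  patch): index the current iptables-save output once per sync so each
  duplicate check is a set lookup instead of a rescan of every line.

    def build_index(existing_lines):
        return set(line.strip() for line in existing_lines)

    def find_last_entry(index, new_rule):
        rule = new_rule.strip()
        return rule if rule in index else None

    index = build_index(['-A INPUT -j ACCEPT', '-A FORWARD -j DROP'])
    print(find_last_entry(index, '-A FORWARD -j DROP'))   # matched rule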

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477253] Re: ovs arp_responder unsuccessfully inserts IPv6 address into arp table

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477253

Title:
  ovs arp_responder unsuccessfully inserts IPv6 address into arp table

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The ml2 openvswitch arp_responder agent attempts to install IPv6
  addresses into the OVS arp response tables. The action obviously
  fails, reporting:

  ovs-ofctl: -:4: 2001:db8::x:x:x:x invalid IP address

  The end result is that the OVS br-tun arp tables are incomplete.

  The submitted patch verifies that the address is IPv4 before
  attempting to add the address to the table.
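
  A minimal version check in the same spirit as the patch (helper names
  are assumed; netaddr is the library Neutron already uses):

    import netaddr

    def install_arp_responder_entry(ip, add_flow):
        if netaddr.IPAddress(ip).version != 4:
            return                       # ARP only exists for IPv4
        add_flow(ip)

    def fake_add_flow(ip):
        print('installing ARP responder for %s' % ip)

    install_arp_responder_entry('2001:db8::1', fake_add_flow)  # skipped
    install_arp_responder_entry('10.0.0.5', fake_add_flow)     # installs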

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477860] Re: TestAsyncProcess.test_async_process_respawns fails with TimeoutException

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477860

Title:
  TestAsyncProcess.test_async_process_respawns fails with
  TimeoutException

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hc3luY19wcm9jZXNzX3Jlc3Bhd25zXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mzc3MjMxNTU2ODB9

  fails for both feature/qos and master:

  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.369 | Captured traceback:
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.370 | ~~~
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.371 | Traceback (most 
recent call last):
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.372 |   File 
"neutron/tests/functional/agent/linux/test_async_process.py", line 70, in 
test_async_process_respawns
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.373 | 
proc._kill_process(proc.pid)
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.375 |   File 
"neutron/agent/linux/async_process.py", line 177, in _kill_process
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.376 | 
self._process.wait()
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.377 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/green/subprocess.py",
 line 75, in wait
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.378 | 
eventlet.sleep(check_interval)
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.379 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 34, in sleep
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.380 | hub.switch()
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.381 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.382 | return 
self.greenlet.switch()
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.383 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.384 | 
self.wait(sleep_time)
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.385 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.387 | presult = 
self.do_poll(seconds)
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.388 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
 line 62, in do_poll
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.389 | return 
self.poll.poll(seconds)
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.390 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.391 | raise 
TimeoutException()
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.392 | 
fixtures._fixtures.timeout.TimeoutException

  Example: http://logs.openstack.org/64/199164/2/check/gate-neutron-
  dsvm-functional/9b43ead/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466642] Re: Intermittent failure in AgentManagementTestJSON.test_list_agent

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466642

Title:
  Intermittent failure in AgentManagementTestJSON.test_list_agent

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  This failure is fairly rare (6 occurrences in 48 hours):
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy5hcGkuYWRtaW4udGVzdF9hZ2VudF9tYW5hZ2VtZW50LkFnZW50TWFuYWdlbWVudFRlc3RKU09OLnRlc3RfbGlzdF9hZ2VudFwiIEFORCBtZXNzYWdlOlwiRkFJTEVEXCIgbWVzc2FnZTpcIm5ldXRyb24udGVzdHMuYXBpLmFkbWluLnRlc3RfYWdlbnRfbWFuYWdlbWVudC5BZ2VudE1hbmFnZW1lbnRUZXN0SlNPTi50ZXN0X2xpc3RfYWdlbnRcIiBBTkQgbWVzc2FnZTpcIkZBSUxFRFwiIEFORCB0YWdzOmNvbnNvbGUuaHRtbCIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNDY1ODM3MzIxMn0=

  Query:
  
message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
  AND message:"FAILED"
  
message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
  AND message:"FAILED" AND tags:console.html

  the failure itself is rather silly. The test expects description to be
  None, whereas it is an empty string ->
  http://logs.openstack.org/08/188608/6/check/check-neutron-dsvm-
  api/fea6d1d/console.html#_2015-06-18_14_32_40_302

  Note: it looks similar to 1442494 but the failure mode is quite
  different.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411163] Re: No fdb entries added when failover dhcp and l3 agent together

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411163

Title:
  No fdb entries added when failover dhcp and l3 agent together

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  [Env]

  OpenStack: icehouse
  OS: ubuntu
  enable l2 population
  enable gre tunnel

  [Description]
  If the DHCP and L3 agents are on the same host and that host goes down, both
  may be rescheduled to the same new host; when that happens, the OVS tunnel
  sometimes cannot be created on the newly scheduled host.

  [Root cause]
  After debugging, we found the log below:
  2015-01-14 13:44:18.284 9815 INFO neutron.plugins.ml2.drivers.l2pop.db 
[req-e36fe1fe-a08c-43c9-9d9c-75fe714d6f91 None] query:[, ]

  This shows that two ACTIVE ports can show up in the database at the same
  time, but in the l2pop mech_driver:

      if agent_active_ports == 1 or (
              self.get_agent_uptime(agent) < cfg.CONF.l2pop.agent_boot_time):

  the fdb entry is only added and notified to the agent under that condition,
  so failures pop up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460741] Re: security groups iptables can block legitimate traffic as INVALID

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460741

Title:
  security groups iptables can block legitimate traffic as INVALID

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The iptables implementation of security groups includes a default rule
  to drop any INVALID packets (according to the Linux connection state
  tracking system.)  It looks like this:

  -A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

  This is placed near the top of the rule stack, before any security
  group rules added by the user.  See:

  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

  However, there are some cases where you would not want traffic marked
  as INVALID to be dropped here.  Specifically, our use case:

  We have a load balancing scheme where requests from the LB are
  tunneled as IP-in-IP encapsulation between the LB and the VM.
  Response traffic is configured for DSR, so the responses go directly
  out the default gateway of the VM.

  The results of this are iptables on the hypervisor does not see the
  initial SYN from the LB to VM (because it is encapsulated in IP-in-
  IP), and thus it does not make it into the connection table.  The
  response that comes out of the VM (not encapsulated) hits iptables on
  the hypervisor and is dropped as invalid.

  I'd like to see a Neutron option to enable/disable the population of
  this INVALID state rule, so that operators (such as us) can disable it
  if desired.  Obviously it's better in general to keep it in there to
  drop invalid packets, but there are cases where you would like to not
  do this.
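
  For illustration, a minimal sketch of what such a knob could look like on
  the agent side. The option name, group and helper below are assumptions
  for discussion, not existing Neutron configuration:

      from oslo_config import cfg

      OPTS = [
          cfg.BoolOpt('drop_invalid_state_packets', default=True,
                      help='If False, do not install the iptables rule that '
                           'drops packets whose conntrack state is INVALID.'),
      ]
      cfg.CONF.register_opts(OPTS, 'SECURITYGROUP')

      def invalid_state_rules():
          # Rules returned here would be installed where the hard-coded DROP
          # currently sits, before the per-port security group rules.
          if cfg.CONF.SECURITYGROUP.drop_invalid_state_packets:
              return ['-m state --state INVALID -j DROP']
          return []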

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499054] Re: devstack VMs are not booting

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499054

Title:
  devstack VMs are not booting

Status in Ironic:
  Invalid
Status in Ironic Inspector:
  Invalid
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  In devstack, VMs are failing to boot the deploy ramdisk consistently.
  It appears ipxe is failing to configure the NIC, which is usually
  caused by a DHCP timeout, but can also be caused by a bug in the PXE
  ROM that chainloads to ipxe. See also http://ipxe.org/err/040ee1

  Console output:

   SeaBIOS (version 1.7.4-20140219_122710-roseapple)
   Machine UUID 37679b90-9a59-4a85-8665-df8267e09a3b

  iPXE (http://ipxe.org) 00:04.0 CA00 PCI2.10 PnP PMM+3FFC2360+3FF22360 CA00

 

  
  Booting from ROM...
  iPXE (PCI 00:04.0) starting execution...ok
  iPXE initialising devices...ok


  iPXE 1.0.0+git-2013.c3d1e78-2ubuntu1.1 -- Open Source Network Boot 
Firmware 
  -- http://ipxe.org
  Features: HTTP HTTPS iSCSI DNS TFTP AoE bzImage ELF MBOOT PXE PXEXT Menu

  net0: 52:54:00:7c:af:9e using 82540em on PCI00:04.0 (open)
[Link:up, TX:0 TXE:0 RX:0 RXE:0]
  Configuring (net0 52:54:00:7c:af:9e).. Error 0x040ee119 
(http://
  ipxe.org/040ee119)
  No more network devices

  No bootable device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1499054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501090] Re: OVSDB wait_for_change waits for a change that has already happened

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501090

Title:
  OVSDB wait_for_change waits for a change that has already happened

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The idlutils wait_for_change() function calls idl.run(), but doesn't
  check to see if it caused a change before calling poller.block.
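
  A minimal sketch of the intended pattern, using the python ovs bindings;
  the helper signature here is an assumption, not the exact idlutils code:

      from ovs import poller as ovs_poller

      def wait_for_change(idl, timeout_sec):
          """Block until the IDL contents change, honouring pending updates."""
          seqno = idl.change_seqno
          while idl.change_seqno == seqno:
              idl.run()  # may already apply a queued update
              if idl.change_seqno != seqno:
                  break  # don't block on a change that already happened
              poller = ovs_poller.Poller()
              idl.wait(poller)
              poller.timer_wait(timeout_sec * 1000)
              poller.block()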

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497074] Re: Ignore the ERROR when delete a ipset member or destroy ipset sets

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497074

Title:
  Ignore the ERROR when delete a ipset member or destroy ipset sets

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  When the OVS agent or Linux bridge agent executes an ipset command, it can
  crash in some cases. Actions such as deleting an ipset member or destroying
  an ipset set should not crash the L2 agent; if they fail, we just need to
  log the error.
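
  A minimal sketch of the tolerant behaviour; the helper and its use of
  neutron.agent.linux.utils are illustrative assumptions, not the merged
  patch:

      import logging

      from neutron.agent.linux import utils as linux_utils

      LOG = logging.getLogger(__name__)

      def run_ipset_ignoring_failure(cmd_args):
          """Run an ipset command; log a warning instead of raising."""
          try:
              linux_utils.execute(['ipset'] + cmd_args, run_as_root=True)
          except RuntimeError:
              # The member or set may already be gone; don't crash the agent.
              LOG.warning('ipset %s failed, ignoring', cmd_args)

      # run_ipset_ignoring_failure(['del', 'NIPv4some-sg-id', '10.0.0.5'])
      # run_ipset_ignoring_failure(['destroy', 'NIPv4some-sg-id'])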

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496974] Re: Improve performance of _get_dvr_sync_data

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496974

Title:
  Improve performance of _get_dvr_sync_data

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Today, when scheduling a router to a host, _get_dvr_sync_data makes a
  call to get all ports on that host.   This causes the time to schedule
  a new router to increase as the number of routers on the host
  increases.

  What can we do to improve performance by limiting the number of ports
  that we need to return to the agent?

  Marked high and kilo-backport-potential because the source problem is
  in an existing operator cloud running stable/kilo

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489671] Re: Neutron L3 sync_routers logic process all router ports from database when even sync for a specific router

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489671

Title:
  Neutron L3 sync_routers logic process all router ports from database
  when even sync for a specific router

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Recreate Steps:
  1) Create multiple routers and allocate each router interface for neutron 
route ports from different network.
  for example, below, there are 4 routers with each have 4,2,1,2 ports.  
(So totally 9 router ports in database)
  [root@controller ~]# neutron router-list
  
+--+---+---+-+---+
  | id   | name  | 
external_gateway_info | distributed | ha|
  
+--+---+---+-+---+
  | b2b466d2-1b1a-488d-af92-9d83d1c0f2c0 | routername1   | null 
 | False   | False |
  | 919f4312-41d1-47a8-b2b5-dc7f14d3f331 | routername2   | null 
 | False   | False |
  | 2854df21-7fe8-4968-a372-3c4a5c3d4ecf | routername3   | null 
 | False   | False |
  | daf51173-0084-4881-9ba3-0a9ac80d7d7b | routername4   | null 
 | False   | False |
  
+--+---+---+-+---+

  [root@controller ~]# neutron router-port-list routername1
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 6194f014-e7c1-4d0b-835f-3cbf94839b9b |  | fa:16:3e:a9:43:7a | 
{"subnet_id": "84b1e75e-9ce3-4a85-a9c6-32133fca081d", "ip_address": "77.0.0.1"} 
|
  | bcac4f23-b74d-4cb3-8bbe-f1d59dff724f |  | fa:16:3e:72:59:a1 | 
{"subnet_id": "80dc7dfe-d353-4c51-8882-934da8bbbe8b", "ip_address": "77.1.0.1"} 
|
  | 39bb4b6c-e439-43a3-85f2-cade8bce8d3c |  | fa:16:3e:9a:65:e6 | 
{"subnet_id": "b54cb217-98b8-41e1-8b6f-fb69d84fcb56", "ip_address": "80.0.0.1"} 
|
  | 3349d441-4679-4176-9f6f-497d39b37c74 |  | fa:16:3e:eb:43:b5 | 
{"subnet_id": "8fad7ca7-ae0d-4764-92d9-a5e23e806eba", "ip_address": "81.0.0.1"} 
|
  
+--+--+---+-+
  [root@controller ~]# neutron router-port-list routername2
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 77ac0964-57bf-4ed2-8822-332779e427f2 |  | fa:16:3e:ea:83:f8 | 
{"subnet_id": "2f07dbf4-9c5c-477c-b992-1d3dd284b987", "ip_address": "95.0.0.1"} 
|
  | aeeb920e-5c73-45ba-8fe9-f6dafabdab68 |  | fa:16:3e:ee:43:a8 | 
{"subnet_id": "15c55c9f-2051-4b4d-9628-552b86543e4e", "ip_address": "97.0.0.1"} 
|
  
+--+--+---+-+
  [root@controller ~]# neutron router-port-list routername3
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | f792ac7d-0bdd-4dbe-bafb-7822ce388c71 |  | fa:16:3e:fe:b7:f7 | 
{"subnet_id": "b62990de-0468-4efd-adaf-d421351c6a8b", "ip_address": "66.0.0.1"} 
|
  
+--+--+---+-+
  [root@controller ~]# neutron router-port-list routername4
  

[Yahoo-eng-team] [Bug 1501779] Re: Failing to delete an linux bridge causes log littering

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501779

Title:
  Failing to delete an linux bridge causes log littering

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  I saw this in some ansible jobs in the gate:

  2015-09-30 22:37:21.805 26634 ERROR
  neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
  [req-23466df3-f59e-4897-9a22-1abb7c99dfd9
  9a365636c1b44c41a9770a26ead28701 cbddab88045d45eeb3d2027a3e265b78 - -
  -] Cannot delete bridge brq33213e3f-2b, does not exist

  http://logs.openstack.org/57/227957/3/gate/gate-openstack-ansible-
  dsvm-commit/de3daa3/logs/aio1-neutron/neutron-linuxbridge-agent.log

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L533

  That should not be an ERROR message; it could be INFO at best.  If
  you're racing with RPC and a thing is already gone, which you were
  going to delete anyway, it's not an error.
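
  A minimal sketch of the suggested severity change; bridge_exists and
  remove_bridge stand in for the agent's real helpers:

      import logging

      LOG = logging.getLogger(__name__)

      def delete_bridge(bridge_name, bridge_exists, remove_bridge):
          if not bridge_exists(bridge_name):
              # Racing with RPC: the bridge is already gone, which is the
              # end state we wanted anyway, so this is informational.
              LOG.info("Bridge %s does not exist; nothing to delete",
                       bridge_name)
              return
          remove_bridge(bridge_name)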

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475938] Re: create_security_group code may get into endless loop

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475938

Title:
  create_security_group code may get into endless loop

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  That damn piece of code again.

  In some cases, when a network is created for a tenant and the default
  security group is created in the process, there may be concurrent network
  or security group creation happening.
  That leads to a situation where the code fetches the default security
  group and finds it missing, tries to add it and finds it already there,
  then tries to fetch it again; because of the REPEATABLE READ isolation
  level the query returns an empty result, just as on the first attempt.
  As a result, such logic hangs in the loop forever.

  Reproducible with rally create_and_delete_ports test.
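
  An illustrative sketch of the failure mode (not Neutron code): because the
  re-read happens inside the same database transaction, it keeps seeing the
  snapshot taken before the concurrent insert, so one way out is to retry
  the lookup in a fresh transaction:

      class DuplicateEntry(Exception):
          """Stands in for the database duplicate-key error."""

      def ensure_default_sg(get_default_sg, create_default_sg):
          while True:
              sg = get_default_sg()          # snapshot says: not there
              if sg is not None:
                  return sg
              try:
                  return create_default_sg()
              except DuplicateEntry:
                  # Another request created it first.  Under REPEATABLE READ
                  # the next get_default_sg() in the same transaction still
                  # returns the old, empty snapshot, so this loops forever.
                  continue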

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461647] Re: HA - creating an ha router fails with internal sever error

2015-10-11 Thread Chuck Short
*** This bug is a duplicate of bug 1461519 ***
https://bugs.launchpad.net/bugs/1461519

** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** No longer affects: neutron/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461647

Title:
  HA - creating an ha router fails with internal sever error

Status in neutron:
  Fix Committed

Bug description:
  When a user attempts to create an HA router, the creation fails with an
  internal server error message.
  After the first attempt, the command "neutron net-list" starts to fail with 
internal error as well. 

  The cause seems to be the creation of the ha_network. 
  After the first attempt to create the ha router, an ha_network gets created, 
then any access to that network (by net-list or attaching the router to it) 
causes an error which looks like the trace below. 

  The problem is that the HA network does not have a PortSecurityBinding
  associated with it because it is not created with the appropriate
  port_security_enabled value.

  
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 461, in create
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 376, in create_router
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 
self.delete_router(context, router_dict['id'])
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 372, in create_router
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 
self._create_ha_interfaces(context, router_db, ha_network)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 328, in 
_create_ha_interfaces
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 
l3_port_check=False)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 322, in 
_create_ha_interfaces
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 
router.tenant_id)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 303, in add_ha_port
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 'name': 
constants.HA_PORT_NAME % tenant_id}})
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1002, in create_port
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource result, 
mech_context = self._create_port_db(context, port)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 984, in _create_port_db
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource network = 
self.get_network(context, result['network_id'])
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 669, in get_network
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource result = 
super(Ml2Plugin, self).get_network(context, id, None)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1024, in get_network
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource return 
self._make_network_dict(network, fields)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 874, in 
_make_network_dict
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource 
attributes.NETWORKS, res, network)
  2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1357068] Re: Arping doesn't work with IPv6

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357068

Title:
  Arping doesn't work with IPv6

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Neutron tries to check whether a host exists by using arping, but IPv6 has
  no ARP, only NDP.
  Some other tool should be used, for example ndisc6
  (http://www.remlab.net/ndisc6/) or ndp
  (http://www.freebsd.org/cgi/man.cgi?query=ndp=8).
(http://www.freebsd.org/cgi/man.cgi?query=ndp=8). 
  RFC Neighbor Discovery for IP version 6: http://tools.ietf.org/html/rfc4861

  Neutron log:
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-dfe30f07-f4cd-47db-ac31-347b87435c83', 'arping', '-A', '-I', 
'qr-bdaba7ef-6c', '-c', '3', 'fd02::1']
  Exit code: 2
  Stdout: ''
  Stderr: 'arping: unknown host fd02::1
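
  A minimal sketch of guarding the call by address family; the helper name
  is an assumption, and the arping flags are the ones shown above:

      import netaddr

      def send_gratuitous_arp(ns_execute, device, address):
          if netaddr.IPAddress(address).version != 4:
              # IPv6 has no ARP (only NDP), so arping fails as shown above;
              # an NDP-capable tool such as those in ndisc6 would be needed.
              return
          ns_execute(['arping', '-A', '-I', device, '-c', '3', address])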

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461519] Re: Enabling ml2 port security extension driver causes net-list to fail on existing deployment

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461519

Title:
  Enabling ml2 port security extension driver causes net-list to fail on
  existing deployment

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  I had a kilo setup where there were a few existing networks.  Then I
  enabled the port security extension driver in ml2_conf.ini.

  After this, net-list fails because the extension driver tries to access
  the port-security-related fields, which were never set for the old
  networks.

  This also happens when port-security is enabled and when creating an
  HA router.

  ocloud@ubuntu:~/devstack$ neutron net-list
  Request Failed: internal server error while processing your request.

  2015-06-03 17:14:44.059 ERROR neutron.api.v2.resource 
[req-d831393d-e02a-4405-8f3a-dd13291f86b1 admin admin] index failed
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 319, in index
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource return 
self._items(request, True, parent_id)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 249, in _items
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource obj_list = 
obj_getter(request.context, **kwargs)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 669, in get_networks
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource limit, 
marker, page_reverse)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1020, in get_networks
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource 
page_reverse=page_reverse)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/common_db_mixin.py", line 184, in _get_collection
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource items = 
[dict_func(c, fields) for c in query]
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 858, in 
_make_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource 
attributes.NETWORKS, res, network)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/common_db_mixin.py", line 162, in 
_apply_dict_extend_functions
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource func(*args)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 477, in 
_ml2_md_extend_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource 
self.extension_manager.extend_network_dict(session, netdb, result)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 782, in 
extend_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource 
driver.obj.extend_network_dict(session, base_model, result)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/extensions/port_security.py", line 60, 
in extend_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource 
self._extend_port_security_dict(result, db_data)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/extensions/port_security.py", line 68, 
in _extend_port_security_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource 
db_data['port_security'][psec.PORTSECURITY])
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource TypeError: 
'NoneType' object has no attribute '__getitem__'
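
  A minimal defensive sketch for the failing access shown in the traceback
  above; the fallback-to-enabled default is an assumption about the desired
  behaviour, not the merged fix:

      def _extend_port_security_dict(response_data, db_data):
          psec_binding = db_data['port_security']
          if psec_binding is None:
              # Resources created before the extension driver was enabled
              # have no binding row; assume port security is enabled.
              response_data['port_security_enabled'] = True
          else:
              response_data['port_security_enabled'] = (
                  psec_binding['port_security_enabled'])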

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498665] Re: no dnsmasq name resolution for IPv6 addresses

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498665

Title:
  no dnsmasq name resolution for IPv6 addresses

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The logic to prevent IPv6 entries from being entered as hosts into the
  lease DB[1] is preventing the hosts from getting name resolution from
  dnsmasq.

  1.
  
https://github.com/openstack/neutron/blob/7707cfd86f47dfc66411e274f343fe8484f9e250/neutron/agent/linux/dhcp.py#L534-L535

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489184] Re: Port is unbound from a compute node, the DVR scheduler needs to check whether the router can be deleted on the L3-agent

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489184

Title:
  Port is unbound from a compute node, the DVR scheduler needs to check
  whether the router can be deleted on the L3-agent

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  In my environment there is a compute node and a controller node.
  On the compute node the L3-agent mode is 'dvr'; on the controller node
  the L3-agent mode is 'dvr-snat'. Nova-compute is only running on the
  compute node.

  Start: the compute node has no VMs running, there are no namespaces on
  the compute node.

  1. Created a network and a router
 neutron net-create demo-net
 neutron subnet-create sb-demo-net demo-net 10.1.2.0/24
 neutron router-create demo-router
 neutron router-interface-add demo-router sb-demo-net
 neutron router-gateway-set demo-router public

  my-net's UUID is 0d3f0103-43e9-45a2-8ca2-b29700039297
  my-router's UUID is 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b

  2. Created a port: 
  stack@Dvr-Ctrl2:~/DEVSTACK/demo$ neutron port-create demo-net
  The port's UUID is 278743d7-b057-4797-8b2b-faaf5fe13a4a

  Note: the port is not associated with a floating IP.

  3. Boot up a VM using the port:
  nova boot --flavor 1 --image  --nic 
port-id=278743d7-b057-4797-8b2b-faaf5fe13a4a  demo-p11vm01

  Wait for the VM to come up on the compute node.

  4. Deleted the VM.

  5. The port still exists and is now unbound from the compute node (device 
owner and binding:host_id are now None):
  stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron port-show 
278743d7-b057-4797-8b2b-faaf5fe13a4a
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | binding:host_id   | 
|
  | binding:profile   | {}  
|
  | binding:vif_details   | {}  
|
  | binding:vif_type  | unbound 
|
  | binding:vnic_type | normal  
|
  | device_id | 
|
  | device_owner  | 
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {"subnet_id": 
"b45d41ca-134f-4274-bb05-50fab100315e", "ip_address": "10.1.2.4"} |
  | id| 278743d7-b057-4797-8b2b-faaf5fe13a4a
|
  | mac_address   | fa:16:3e:a6:f7:d1   
|
  | name  | 
|
  | network_id| 0d3f0103-43e9-45a2-8ca2-b29700039297
|
  | port_security_enabled | True
|
  | security_groups   | 8b68d1c9-cae7-4f0b-8fb5-6adb5a515246
|
  | status| DOWN
|
  | tenant_id | a7950bd5a61548ee8b03145cacf90a53
|
  
+---+-+

  The Router is still scheduled on the compute node.

  stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron 
l3-agent-list-hosting-router 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b
  
+--+-++---+--+
  | id   | host| admin_state_up | alive 
| ha_state |
  
+--+-++---+--+
  | 

[Yahoo-eng-team] [Bug 1493809] Re: loadbalancer V2 ports are not serviced by DVR

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493809

Title:
  loadbalancer V2 ports are not serviced by DVR

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  
  ## common/constants.py
  DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
  DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2"

  
  ## common/utils.py
  def is_dvr_serviced(device_owner):
      """Check if the port need to be serviced by DVR

      Helper function to check the device owners of the
      ports in the compute and service node to make sure
      if they are required for DVR or any service directly or
      indirectly associated with DVR.
      """
      dvr_serviced_device_owners = (q_const.DEVICE_OWNER_LOADBALANCER,
                                    q_const.DEVICE_OWNER_DHCP)
      return (device_owner.startswith('compute:') or
              device_owner in dvr_serviced_device_owners)
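
  Presumably DEVICE_OWNER_LOADBALANCERV2 needs to be part of that tuple as
  well; a minimal sketch of the implied change (constants inlined here for
  readability, not the merged patch):

      DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
      DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2"
      DEVICE_OWNER_DHCP = "network:dhcp"

      def is_dvr_serviced(device_owner):
          dvr_serviced_device_owners = (DEVICE_OWNER_LOADBALANCER,
                                        DEVICE_OWNER_LOADBALANCERV2,
                                        DEVICE_OWNER_DHCP)
          return (device_owner.startswith('compute:') or
                  device_owner in dvr_serviced_device_owners)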

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460562] Re: ipset can't be destroyed when last sg rule is deleted

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460562

Title:
  ipset can't be destroyed when last sg rule is deleted

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Steps to reproduce:
  1. VM A is in the default security group.
  2. The default security group has two rules: (1) allow all traffic out;
     (2) allow the group itself as remote_group for ingress.
  3. First delete rule 1, then delete rule 2.

  I found that the iptables rules on the compute node where VM A resides
  were not reloaded, and the relevant ipset was not destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453264] Re: iptables_manager can run very slowly when a large number of security group rules are present

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453264

Title:
  iptables_manager can run very slowly when a large number of security
  group rules are present

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  We have customers that typically add a few hundred security group
  rules or more.  We also typically run 30+ VMs per compute node.  When
  about 10+ VMs with a large SG set all get scheduled to the same node,
  the L2 agent (OVS) can spend many minutes in the
  iptables_manager.apply() code, so much so that by the time all the
  rules are updated, the VM has already tried DHCP and failed, leaving
  it in an unusable state.

  While there have been some patches that tried to address this in Juno
  and Kilo, they've either not helped as much as necessary, or broken
  SGs completely due to re-ordering of the iptables rules.

  I've been able to show some pretty bad scaling with just a handful of
  VMs running in devstack based on today's code (May 8th, 2015) from
  upstream Openstack.

  Here's what I tested:

  1. I created a security group with 1000 TCP port rules (you could
  alternately have a smaller number of rules and more VMs, but it's
  quicker this way)

  2. I booted VMs, specifying both the default and "large" SGs, and timed
  from the moment Neutron "learned" about the port until it completed its
  work

  3. I got a :( pretty quickly

  And here's some data:

  1-3 VM - didn't time, less than 20 seconds
  4th VM - 0:36
  5th VM - 0:53
  6th VM - 1:11
  7th VM - 1:25
  8th VM - 1:48
  9th VM - 2:14

  While it's busy adding the rules, the OVS agent is consuming pretty
  close to 100% of a CPU for most of this time (from top):

PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND   
  
  25767 stack 20   0  157936  76572   4416 R  89.2  0.5  50:14.28 python

  And this is with only ~10K rules at this point!  When we start
  crossing the 20K point VM boot failures start to happen.

  I'm filing this bug since we need to take a closer look at this in
  Liberty and fix it; it's been this way since Havana and needs some
  TLC.

  I've attached a simple script I've used to recreate this, and will
  start taking a look at options here.
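
  For reference, a hypothetical recreation sketch (this is not the attached
  script); credentials and endpoint are placeholders:

      from neutronclient.v2_0 import client as neutron_client

      neutron = neutron_client.Client(username='admin', password='secret',
                                      tenant_name='demo',
                                      auth_url='http://127.0.0.1:5000/v2.0')

      sg = neutron.create_security_group(
          {'security_group': {'name': 'large-sg'}})['security_group']

      for port in range(1000, 2000):   # ~1000 ingress TCP rules
          neutron.create_security_group_rule(
              {'security_group_rule': {'security_group_id': sg['id'],
                                       'direction': 'ingress',
                                       'protocol': 'tcp',
                                       'port_range_min': port,
                                       'port_range_max': port}})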

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481613] Re: [DVR] DVR router do not support to update service port's arp entry after created.

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481613

Title:
  [DVR] DVR router do not support to update service port's arp entry
  after created.

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  When a VM is created, the DVR router broadcasts the VM's ARP details to
  all the L3 agents hosting it, which lets DVR forward traffic at the link
  layer. But when a port is attached to a service such as LBaaS, its ARP
  details are not broadcast, so DVR does not know its MAC address; as a
  result, VMs in other subnets cannot reach the service port through the
  DVR router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490581] Re: the items will never be deleted from metering_info

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490581

Title:
  the items will never be deleted from metering_info

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The _purge_metering_info function of the MeteringAgent class has a bug: the
  items of the metering_info dictionary will never be deleted:

      if info['last_update'] > ts + report_interval:
          del self.metering_info[label_id]

  In this situation last_update will always be less than the current
  timestamp, so the condition is never true.
  This function is also not covered by the unit tests.
  In addition, _purge_metering_info uses the metering_info dict but it should
  use the metering_infos dict.
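
  A minimal sketch of how the purge condition presumably should read
  (illustrative, using a plain dict rather than the agent's attributes):

      import time

      def purge_stale_labels(metering_infos, report_interval):
          ts = time.time()
          for label_id in list(metering_infos):
              # Drop labels NOT updated within the last report_interval.
              if metering_infos[label_id]['last_update'] + report_interval < ts:
                  del metering_infos[label_id]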

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494336] Re: Neutron traceback when an external network without IPv6 subnet is attached to an HA Router

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494336

Title:
  Neutron traceback when an external network without IPv6 subnet is
  attached to an HA Router

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  For an HA router which does not have any subnets in the external network,
  Neutron sets the IPv6 proc entry[1] on the gateway interface so that it can
  receive Router Advertisements from the external IPv6 router and configure a
  default route pointing to the LLA of the external IPv6 router.

  Normally for an HA Router in the backup state, Neutron removes Link Local 
Address (LLA)
  from the gateway interface. 

  In kernel version 3.10, when the last IPv6 address is removed from the
  interface, IPv6 is shut down on the interface and the proc entries
  corresponding to it are deleted (i.e., /proc/sys/net/ipv6/conf/).
  This issue is resolved in later kernels [2], but it still exists on
  platforms with kernel version 3.10.
  When the IPv6 proc entries are missing and Neutron tries to configure the
  proc entry, we see the following traceback [3] in Neutron.

  [1] /proc/sys/net/ipv6/conf/qg-1fc4061d-3c/accept_ra
  [2] 
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=876fd05ddbae03166e7037fca957b55bb3be6594
  [3] Trace:
  Command: ['ip', 'netns', 'exec', 
'qrouter-e66b99aa-e840-4a13-9311-6242710a5452', 'sysctl', '-w', 
'net.ipv6.conf.qg-1fc4061d-3c.accept_ra=2']
  Exit code: 255
  Stdin:
  Stdout:
  Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/qg-1fc4061d-3c/accept_ra: No such file or directory
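
  A minimal sketch of tolerating the missing proc entry; ns_execute is an
  assumed helper that runs a command inside the router namespace and raises
  RuntimeError on failure:

      def set_accept_ra(ns_execute, device_name, value):
          try:
              ns_execute(['sysctl', '-w',
                          'net.ipv6.conf.%s.accept_ra=%s'
                          % (device_name, value)])
          except RuntimeError:
              # On 3.10 kernels the per-interface IPv6 proc entries disappear
              # once the last IPv6 address is removed, so the sysctl fails;
              # treat that as nothing to configure on this interface.
              pass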

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430394] Re: neutron port-delete operation throws HTTP 500, if port is lb-vip

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430394

Title:
  neutron port-delete operation throws HTTP 500, if  port is lb-vip

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  1. create a VIP for existed load-balancer

  # neutron lb-vip-create --name vip --protocol-port 80 --protocol HTTP
  --subnet-id  LB

  2. obtain the id of this new VIP by  neutron port-list

  # neutron port-list
  
+--+--+---+-+
  | id   | name 
| mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 6bbfbc5b-93d2-4791-bb8a-ef292f04aed1 | 
vip-0093a88f-3c4c-4e84-a9d4-14e9264faa5a | fa:16:3e:7d:b9:b0 | {"subnet_id": 
"b22172b7-05ee-42b8-b3b9-48a312fdfc97", "ip_address": "192.168.10.5"} |

  3.  # neutron port-delete 6bbfbc5b-93d2-4791-bb8a-ef292f04aed1
  Request Failed: internal server error while processing your request.

  # neutron --verbose port-delete 6bbfbc5b-93d2-4791-bb8a-ef292f04aed1
  DEBUG: neutronclient.neutron.v2_0.port.DeletePort 
run(Namespace(id=u'6bbfbc5b-93d2-4791-bb8a-ef292f04aed1', 
request_format='json'))
  DEBUG: neutronclient.client
  ...
  DEBUG: neutronclient.client
  REQ: curl -i 
http://10.162.80.155:9696/v2.0/ports/6bbfbc5b-93d2-4791-bb8a-ef292f04aed1.json 
-X DELETE -H "X-Auth-Token: MIINoQYJKoZIhvcNAQcCoII..
  .Yr80gJf7djQE1JI+PA-Q==" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "User-Agent: python-neutronclient"

  DEBUG: neutronclient.client RESP:{'date': 'Tue, 10 Mar 2015 15:09:30
  GMT', 'status': '500', 'content-length': '88', 'content-type':
  'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-
  75f8c9ca-e273-4e3f-bc4d-9db7d7828794'} {"NeutronError": "Request
  Failed: internal server error while processing your request."}

  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": "Request 
Failed: internal server error while processing your request."}
  ERROR: neutronclient.shell Request Failed: internal server error while 
processing your request.
  Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 526, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
    File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 79, in 
run_command
  return cmd.run(known_args)
    File 
"/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/__init__.py", line 
509, in run
  obj_deleter(_id)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
111, in with_params
  ret = self.function(instance, *args, **kwargs)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
326, in delete_port
  return self.delete(self.port_path % (port))
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1232, in delete
  headers=headers, params=params)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1221, in retry_request
  headers=headers, params=params)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1164, in do_request
  self._handle_fault_response(status_code, replybody)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1134, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
84, in exception_handler_v20
  message=error_dict)
  NeutronClientException: Request Failed: internal server error while 
processing your request.
  DEBUG: neutronclient.shell clean_up DeletePort
  DEBUG: neutronclient.shell Got an error: Request Failed: internal server 
error while processing your request.
  [root@kvalenti-controller ~(keystone_admin)]# neutron port-delete 
6bbfbc5b-93d2-4791-bb8a-ef292f04aed1
  Request Failed: internal server error while processing your request.
  #

  It would be better to return "Unable to delete" or some other meaningful
  error code; returning a generic "500" is not helpful to clients.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430394/+subscriptions

-- 

[Yahoo-eng-team] [Bug 1365476] Re: HA routers interact badly with l2pop

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365476

Title:
  HA routers interact badly with l2pop

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Since internal HA router interfaces are created on more than a single
  agent, this interacts badly with l2pop, which assumes that a Neutron
  port is located in a single, known place in the network. We'll need to
  report to l2pop when an HA router transitions to the active state, so
  that the port location is updated.

  Patch is here:
  https://review.openstack.org/#/c/141114/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466750] Re: router-interface-add with no address causes internal error

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466750

Title:
  router-interface-add with no address causes internal error

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  for example:
  neutron net-create hoge
  neutron port-create --name hoge-port hoge
  neutron router-create hoge-router
  neutron router-interface-add hoge-router port=hoge-port

  this is a regression in commit
  I7d4e8194815e626f1cfa267f77a3f2475fdfa3d1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463665] Re: Missing requirement for PLUMgrid Neutron Plugin

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463665

Title:
  Missing requirement for PLUMgrid Neutron Plugin

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Missing networking-plumgrid in requirement for PLUMgrid Neutron Plugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455042] Re: Stale metadata processes are not cleaned up on l3 agent sync

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455042

Title:
  Stale metadata processes are not cleaned up on l3 agent sync

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The L3 agent cleans up stale namespaces of deleted routers during sync,
  but the corresponding metadata processes keep running (forever), which
  wastes resources.

  This can easily be reproduced by deleting a router while the L3 agent is
  stopped. After the agent is started again, it deletes the namespace but
  not the metadata process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483266] Re: q-svc fails to start in kilo due to "ImportError: No module named neutron_vpnaas.services.vpn.service_drivers.ipsec"

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483266

Title:
  q-svc fails to start in kilo due to "ImportError: No module named
  neutron_vpnaas.services.vpn.service_drivers.ipsec"

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  http://logs.openstack.org/70/210870/1/check/gate-grenade-dsvm-
  neutron/20f794e/logs/new/screen-q-svc.txt.gz?level=TRACE

  Looks like this is blocking kilo jobs that use neutron:

  2015-08-10 00:37:30.529 8402 ERROR neutron.common.config [-] Unable to load neutron from configuration file /etc/neutron/api-paste.ini.
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config Traceback (most recent call last):
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/opt/stack/new/neutron/neutron/common/config.py", line 227, in load_paste_app
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     app = deploy.loadapp("config:%s" % config_path, name=app_name)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     return loadobj(APP, uri, name=name, **kw)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     return context.create()
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     return self.object_type.invoke(self)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     **context.local_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     val = callable(*args, **kw)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 28, in urlmap_factory
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     app = loader.get_app(app_name, global_conf=global_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     name=name, global_conf=global_conf).create()
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     return self.object_type.invoke(self)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     **context.local_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     val = callable(*args, **kw)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/opt/stack/new/neutron/neutron/auth.py", line 71, in pipeline_factory
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     app = loader.get_app(pipeline[-1])
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     name=name, global_conf=global_conf).create()
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     return self.object_type.invoke(self)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 146, in invoke
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     return fix_call(context.object, context.global_conf, **context.local_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config     val = 
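
  The import that ultimately fails is the one named in the bug title. A
  minimal standalone check (illustrative only; the module path comes from
  the title, everything else here is assumed) would be:

    import importlib

    DRIVER = "neutron_vpnaas.services.vpn.service_drivers.ipsec"

    try:
        importlib.import_module(DRIVER)
        print("driver module importable")
    except ImportError as exc:
        # Matches the failure above: neutron-vpnaas is missing (or not on
        # sys.path) on the node running q-svc.
        print("cannot import %s: %s" % (DRIVER, exc))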

[Yahoo-eng-team] [Bug 1466663] Re: radvd exits -1 intermittently in test_ha_router_process_ipv6_subnets_to_existing_port functional test

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466663

Title:
  radvd exits -1 intermittently in
  test_ha_router_process_ipv6_subnets_to_existing_port functional test

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  An example of the failure: http://logs.openstack.org/91/189391/6/check/check-neutron-dsvm-functional/0ba6e51/console.html

  A logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJDb21tYW5kIEFORCByYWR2ZC5jb25mIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM0NjYzNTQ3ODU5fQ==

   ERROR neutron.agent.l3.router_info Command: ['ip', 'netns', 'exec',
  'qrouter-c37cf4a8-bf31-42a1-abb8-579c583e7ea9', 'radvd', '-C',
  '/tmp/tmpidCgIT/tmplIquzu/ra/c37cf4a8-bf31-42a1-abb8-579c583e7ea9.radvd.conf',
  '-p',
  
'/tmp/tmpidCgIT/tmplIquzu/external/pids/c37cf4a8-bf31-42a1-abb8-579c583e7ea9.pid.radvd',
  '-m', 'syslog']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443607] Re: Linux Bridge: can't change the VM's bridge and tap interface MTU at Compute node.

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443607

Title:
  Linux Bridge: can't change the VM's bridge and tap interface MTU at
  Compute node.

Status in networking-cisco:
  New
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  I use DevStack to deploy OpenStack with Linux Bridge instead of OVS in
  a multi-node setup.

  I'm testing jumbo frames and want to set the MTU to 9000.

  At the Network node, the bridges and tap interfaces are created with
  MTU = 9000:

  localadmin@qa4:~/devstack$ brctl show
  bridge name      bridge id           STP enabled   interfaces
  brq09047ecb-1c   8000.7c69f62c4f2f   no            eth1
                                                     tapedbcd5b1-a6
  brq319688ab-93   8000.3234d6ee3a18   no            bond0.300
                                                     tap4e230a86-cb
                                                     tapfddaf12e-85
  virbr0           8000.               yes

  localadmin@qa4:~/devstack$ ifconfig brq09047ecb-1c
  brq09047ecb-1c Link encap:Ethernet  HWaddr 7c:69:f6:2c:4f:2f
inet6 addr: fe80::3c79:c8ff:fe23:2fe7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:696 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:30617 (30.6 KB)  TX bytes:648 (648.0 B)

  localadmin@qa4:~/devstack$ ifconfig brq319688ab-93
  brq319688ab-93 Link encap:Ethernet  HWaddr 32:34:d6:ee:3a:18
inet6 addr: fe80::e0ec:3bff:fe09:4318/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:30 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4236 (4.2 KB)  TX bytes:648 (648.0 B)

  localadmin@qa4:~/devstack$ ifconfig tapedbcd5b1-a6
  tapedbcd5b1-a6 Link encap:Ethernet  HWaddr ae:fb:64:53:f7:2d
inet6 addr: fe80::acfb:64ff:fe53:f72d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:65 errors:0 dropped:0 overruns:0 frame:0
TX packets:947 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10223 (10.2 KB)  TX bytes:80510 (80.5 KB)

  localadmin@qa4:~/devstack$ ifconfig tap4e230a86-cb
  tap4e230a86-cb Link encap:Ethernet  HWaddr 32:34:d6:ee:3a:18
inet6 addr: fe80::3034:d6ff:feee:3a18/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:2073 errors:0 dropped:0 overruns:0 frame:0
TX packets:2229 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8878139 (8.8 MB)  TX bytes:8914532 (8.9 MB)

  localadmin@qa4:~/devstack$ ifconfig tapfddaf12e-85
  tapfddaf12e-85 Link encap:Ethernet  HWaddr d2:33:29:9b:2c:e8
inet6 addr: fe80::d033:29ff:fe9b:2ce8/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:152 errors:0 dropped:0 overruns:0 frame:0
TX packets:295 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15237 (15.2 KB)  TX bytes:51849 (51.8 KB)


  The instance launched at the Compute node has interface eth0 MTU =
  9000:

  ubuntu@qa5-vm2:~$ ifconfig
  eth0  Link encap:Ethernet  HWaddr fa:16:3e:05:36:58
inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe05:3658/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:1169 errors:0 dropped:0 overruns:0 frame:0
TX packets:384 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1408206 (1.4 MB)  TX bytes:1336535 (1.3 MB)

  
  However, the associated bridge and tap interface MTU is set to the default of 1500:

  localadmin@qa5:~/devstack$ brctl show
  bridge name      bridge id           STP enabled   interfaces
  brq319688ab-93   8000.6805ca302558   no            bond0.300
                                                     tapa7acee8a-54
  virbr0           8000.               yes

  localadmin@qa5:~/devstack$ ifconfig brq319688ab-93
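
  Purely as an illustration (a manual workaround sketch, not the fix that
  landed in neutron): on a node where the bridge and tap devices came up at
  1500, the MTU can be raised by hand with iproute2. The device names below
  are taken from the compute-node output above; adjust for your environment.

    import subprocess

    # Bridge and tap devices from the compute node output above.
    DEVICES = ["brq319688ab-93", "tapa7acee8a-54"]

    for dev in DEVICES:
        # Equivalent to: ip link set dev <dev> mtu 9000
        subprocess.check_call(["ip", "link", "set", "dev", dev, "mtu", "9000"])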
  

[Yahoo-eng-team] [Bug 1479558] Re: _ensure_default_security_group calls create_security_group within a db transaction

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

** Changed in: neutron/kilo
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479558

Title:
  _ensure_default_security_group calls create_security_group within a
  db transaction

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  _ensure_default_security_group calls create_security_group within a db
  transaction [1]. A Neutron plugin may choose to override
  create_security_group so that it can invoke backend operations;
  handling those backend calls under an open transaction might lead to a
  db lock timeout.

  [1]:
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/securitygroups_db.py#n666
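
  The risky shape, sketched with plain SQLAlchemy (assumed names and toy
  schema, not neutron code): any backend I/O performed by an overridden
  create_security_group runs while the caller's transaction is still open,
  so whatever that transaction has locked stays locked for the whole call.

    import time

    import sqlalchemy as sa
    from sqlalchemy.orm import Session

    engine = sa.create_engine("sqlite://")


    def create_security_group(session):
        # A plugin override might call an external backend here; the sleep
        # stands in for that network round trip.
        time.sleep(2)
        session.execute(sa.text("INSERT INTO secgroups (name) VALUES ('default')"))


    with Session(engine) as session:
        session.execute(sa.text("CREATE TABLE secgroups (name TEXT)"))
        session.commit()

        # Problematic: the slow call happens inside the open transaction, so
        # locks taken by the first INSERT are held for the entire sleep.
        with session.begin():
            session.execute(sa.text("INSERT INTO secgroups (name) VALUES ('seed')"))
            create_security_group(session)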

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454408] Re: ObjectDeletedError while deleting network

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454408

Title:
  ObjectDeletedError while deleting network

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  The following trace could be observed when running rally tests in a
  multi-server environment:

  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in resource
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 476, in delete
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     obj_deleter(request.context, id, **kwargs)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 671, in delete_network
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     self._delete_ports(context, ports)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 587, in _delete_ports
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     port.id)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 239, in __get__
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     return self.impl.get(instance_state(instance), dict_)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 589, in get
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     value = callable_(state, passive)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 424, in __call__
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     self.manager.deferred_scalar_loader(self, toload)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 614, in load_scalar_attributes
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource     raise orm_exc.ObjectDeletedError(state)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource ObjectDeletedError: Instance '' has been deleted, or its row is otherwise not present.
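
  The error is SQLAlchemy's way of reporting that a mapped instance's row
  vanished underneath it. A small self-contained reproduction (assumed toy
  schema, not neutron code): one session loads an object, a second session
  deletes the row, and the first session then touches an expired attribute,
  which forces a reload of the now-missing row.

    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.orm import exc as orm_exc

    Base = orm.declarative_base()


    class Port(Base):
        __tablename__ = "ports"
        id = sa.Column(sa.String, primary_key=True)


    engine = sa.create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with orm.Session(engine) as setup:
        setup.add(Port(id="p1"))
        setup.commit()

    a = orm.Session(engine)
    port = a.get(Port, "p1")
    a.commit()                      # expires the loaded attributes

    with orm.Session(engine) as b:  # a concurrent worker deletes the row
        b.delete(b.get(Port, "p1"))
        b.commit()

    try:
        print(port.id)              # refresh of the expired attribute fails
    except orm_exc.ObjectDeletedError as exc:
        print("ObjectDeletedError:", exc)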

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490380] Re: netaddr 0.7.16 causes gate havoc

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Released

** Changed in: neutron/kilo
   Status: Fix Released => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490380

Title:
  netaddr 0.7.16 causes gate havoc

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Netaddr 0.7.16 was just released, and it causes mayhem in the gate.

  https://pypi.python.org/pypi/netaddr

  An example:

  http://logs.openstack.org/03/216603/4/check/gate-neutron-python27/21af647/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488320] Re: neutron-vpnaas uses bad file permissions on PSK file

2015-10-11 Thread Chuck Short
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => Fix Committed

** Changed in: neutron/kilo
Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488320

Title:
  neutron-vpnaas uses bad file permissions on PSK file

Status in neutron:
  In Progress
Status in neutron kilo series:
  Fix Committed
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Summary:

  OpenStack VPNaaS uses IPSec pre-shared keys (PSK) to secure VPN
  tunnels.  Those keys are specified by the user via the API when
  creating the VPN connection, and they are stored in the neutron
  database, then copied to the filesystem on the network node.  The PSK
  file created by the VPNaaS OpenSwan driver has perms of 644, and the
  directories in its path allow access by anyone.

  This means that if an intruder were to compromise the network node, the
  pre-shared VPN keys for all tenants would be vulnerable to
  unauthorized disclosure.

  VPNaaS uses the neutron utility function replace_file() to create the
  PSK file, and replace_file sets the mode of all files it creates to
  0o644.

  This vulnerability exists in the OpenSwan ipsec driver; I have not yet
  investigated whether it exists in any of the other implementation
  drivers.

  I have developed patches to neutron and neutron_vpnaas to add an
  optional file_perm argument (with default 0o644) to replace_file(),
  and to specify mode 0o400 when neutron-vpnaas creates the PSK file.
  This allows all other existing calls to replace_file() to maintain
  their existing behavior.
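
  A sketch of the shape such a helper could take (illustrative only, with
  assumed names; the real neutron utility and the actual patch may differ):
  write to a temporary file in the target directory, set the requested
  mode, then atomically rename over the destination.

    import os
    import tempfile


    def replace_file(file_name, data, file_perm=0o644):
        """Atomically replace file_name with data, applying mode file_perm."""
        base_dir = os.path.dirname(os.path.abspath(file_name))
        with tempfile.NamedTemporaryFile("w", dir=base_dir, delete=False) as tmp:
            tmp.write(data)
        os.chmod(tmp.name, file_perm)
        os.rename(tmp.name, file_name)


    # A VPN driver writing pre-shared keys would then call, e.g.:
    # replace_file(psk_path, psk_contents, 0o400)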

  The Gory Details:

  Here is the "ps -ef" output for the ipsec pluto process for the VPN
  endpoint on the network node:

  root 19701 1  0 01:15 ?00:00:00 /usr/lib/ipsec/pluto
  --ctlbase /var/run/neutron/ipsec/ad83280f-6993-478b-976e-
  608550093ed8/var/run/pluto --ipsecdir
  /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc --use-
  netkey --uniqueids --nat_traversal --secretsfile
  /var/run/neutron/ipsec/ad83280f-6993-478b-976e-
  608550093ed8/etc/ipsec.secrets --virtual_private
  %v4:10.1.0.0/24,%v4:10.2.0.0/24

  The PSK is stored in /var/run/neutron/ipsec/ad83280f-6993-478b-976e-
  608550093ed8/etc/ipsec.secrets:

  /home/stack# less 
/var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets
  # Configuration for myvpnrA
  172.16.0.2 172.16.0.3 : PSK "secret"

  Here we see the file perms:

  /home/stack# ls -l 
/var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets
  -rw-r--r-- 1 neutron neutron 65 Aug 16 01:15 
/var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets

  OpenSwan delivers a default secrets file
  /var/lib/openswan/ipsec.secrets.inc, and we see it has a mode that we
  would expect:

  /home/stack# ls -l /var/lib/openswan/ipsec.secrets.inc
  -rw------- 1 root root 0 Aug 15 23:51 /var/lib/openswan/ipsec.secrets.inc

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505043] [NEW] Spelling error of a word

2015-10-11 Thread JuPing
Public bug reported:

There is an incorrect spelling in the file below:
  neutron/neutron/pecan_wsgi/hooks/translation.py
#Line38:  _("An unexpected internal error occured."))

I think the word "occured" should be spelled as "occurred".

** Affects: neutron
 Importance: Undecided
 Assignee: JuPing (jup-fnst)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => JuPing (jup-fnst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505043

Title:
  Spelling error of a word

Status in neutron:
  In Progress

Bug description:
  There is an incorrect spelling in the file below:
neutron/neutron/pecan_wsgi/hooks/translation.py
  #Line38:  _("An unexpected internal error occured."))

  I think the word "occured" should be spelled as "occurred".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505046] [NEW] Spelling error of a word

2015-10-11 Thread JuPing
Public bug reported:

There is an incorrect spelling in the file below:
  glance/glance/artifacts/domain/__init__.py
#Line68: # XXX FIXME remove after using authentification

I think the word "authentification" should be spelled as
"authentication".

** Affects: glance
 Importance: Undecided
 Assignee: JuPing (jup-fnst)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => JuPing (jup-fnst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505046

Title:
  Spelling error of a word

Status in Glance:
  In Progress

Bug description:
  There is an incorrect spelling in the file below:
glance/glance/artifacts/domain/__init__.py
  #Line68: # XXX FIXME remove after using authentification

  I think the word "authentification" should be spelled as
  "authentication".

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp