[Yahoo-eng-team] [Bug 1401721] Re: Update role using LDAP backend with same name fails

2015-01-29 Thread Chuck Short
** Also affects: keystone/juno
   Importance: Undecided
   Status: New

** Changed in: keystone/juno
   Status: New => Fix Committed

** Changed in: keystone/juno
Milestone: None => 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1401721

Title:
  Update role using LDAP backend with same name fails

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone juno series:
  Fix Committed

Bug description:
  
  When the Keystone server is configured to use the LDAP backend for 
assignments and a role is updated to have the same name, the operation fails, 
saying that you can't create a role because another role with the same name 
already exists.

  The Keystone server should just accept the request and ignore the
  change rather than failing.

  To recreate:

  0. Start with a devstack install using LDAP for assignment backend

  1. Get a token

  $ curl -i \
    -H "Content-Type: application/json" \
    -d '
  { "auth": {
      "identity": {
        "methods": ["password"],
        "password": {
          "user": {
            "name": "admin",
            "domain": { "id": "default" },
            "password": "adminpwd"
          }
        }
      },
      "scope": {
        "project": {
          "name": "demo",
          "domain": { "id": "default" }
        }
      }
    }
  }' \
    http://localhost:35357/v3/auth/tokens ; echo

  $ TOKEN=...

  2. List roles

  $ curl \
  -H "X-Auth-Token: $TOKEN" \
  http://localhost:35357/v3/roles | python -m json.tool

  $ ROLE_ID=36a9eede308d41e8a92effce2e46cc4a

  3. Update a role with the same name.

  $ curl -X PATCH \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"role": {"name": "anotherrole"}}' \
  http://localhost:35357/v3/roles/$ROLE_ID

  {"error": {"message": "Cannot duplicate name {'id':
  u'36a9eede308d41e8a92effce2e46cc4a', 'name': u'anotherrole'}", "code":
  409, "title": "Conflict"}}

  The operation should have worked.
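
  A toy, in-memory sketch of the expected behaviour (illustrative class and
  helper names, not Keystone's actual LDAP driver API): only enforce the
  duplicate-name check when the name actually changes.

    class Conflict(Exception):
        pass

    class RoleStore(object):
        """Stand-in for the LDAP role backend (illustration only)."""
        def __init__(self):
            self.roles = {}  # role_id -> {'id': ..., 'name': ...}

        def update_role(self, role_id, role):
            current = self.roles[role_id]
            new_name = role.get('name')
            if new_name is not None and new_name != current['name']:
                # Reject a duplicate name only when the name really changes.
                if any(r['name'] == new_name for r in self.roles.values()):
                    raise Conflict("Cannot duplicate name %s" % new_name)
            current.update(role)
            return current

    store = RoleStore()
    store.roles['36a9'] = {'id': '36a9', 'name': 'anotherrole'}
    # Re-sending the unchanged name should be a no-op, not a 409:
    print(store.update_role('36a9', {'name': 'anotherrole'}))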

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1401721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390640] Re: /auth/domains incorrectly includes domains with only user inherited roles

2015-01-29 Thread Chuck Short
** Also affects: keystone/juno
   Importance: Undecided
   Status: New

** Changed in: keystone/juno
Milestone: None => 2014.2.2

** Changed in: keystone/juno
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1390640

Title:
  /auth/domains incorrectly includes domains with only user inherited
  roles

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone juno series:
  Fix Committed

Bug description:
  The /auth/domains API call is meant to return list of domains for
  which the user could ask for a domain-scoped token - i.e. any domain
  on which they have a role. However, the code manager/driver method it
  calls (list_domain_for_user) does not differentiate between inherited
  and non-inherited user roles - and hence might include domains for
  which the user has no effective role (a domain inherited role ONLY
  applies to the projects within that domain, not to the domain itself).
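
  A toy sketch of the intended filtering (illustrative data model, not
  Keystone's actual driver interface): only non-inherited domain assignments
  should make a domain eligible for a domain-scoped token.

    assignments = [
        {'user_id': 'u1', 'domain_id': 'd1', 'role_id': 'r1', 'inherited': False},
        {'user_id': 'u1', 'domain_id': 'd2', 'role_id': 'r1', 'inherited': True},
    ]

    def list_domains_for_user(user_id):
        # Inherited domain roles only apply to the projects within the domain,
        # so they must not make the domain itself token-scopable.
        return sorted({a['domain_id'] for a in assignments
                       if a['user_id'] == user_id and not a['inherited']})

    print(list_domains_for_user('u1'))  # ['d1'] -- d2 is excluded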

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1390640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241134] Re: Using LDAP with enabled ignored, no error when attempt to change

2015-01-29 Thread Chuck Short
** Also affects: keystone/juno
   Importance: Undecided
   Status: New

** Changed in: keystone/juno
Milestone: None => 2014.2.2

** Changed in: keystone/juno
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1241134

Title:
  Using LDAP with enabled ignored, no error when attempt to change

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone juno series:
  Fix Committed

Bug description:
  
  When the Keystone server is configured to use LDAP as the identity backend 
and 'enabled' is in user_attribute_ignore, and the user is then disabled (for 
example with keystone user-update --enabled false), the server returns 
success and the command doesn't report an error even though the user remains 
enabled.

  The server should report an error like 403 Forbidden or 501 Not
  Implemented if the user tries to change the enabled attribute and it's
  ignored.

  This would improve security since the way it is now Keystone gives the
  impression that the user has been disabled even when they have not
  been.
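
  A minimal sketch of the suggested behaviour (illustrative names, not
  Keystone's actual LDAP identity driver code): reject an update that tries to
  change an attribute listed in user_attribute_ignore instead of silently
  dropping it.

    class ForbiddenAction(Exception):
        pass

    user_attribute_ignore = ['enabled']

    def update_user(user_id, update):
        ignored = [attr for attr in update if attr in user_attribute_ignore]
        if ignored:
            # Fail loudly rather than pretending the change was applied.
            raise ForbiddenAction("Cannot change ignored attribute(s): %s"
                                  % ', '.join(ignored))
        # ... apply the LDAP modify here ...
        return update

    try:
        update_user('u1', {'enabled': False})
    except ForbiddenAction as exc:
        print(exc)  # Cannot change ignored attribute(s): enabled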

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1241134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416000] Re: VMware: write error lost while transferring volume

2015-01-29 Thread Radoslav Gerganov
The same problem exists in Nova as we use the same approach for image
transfer:

https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/images.py#L181

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416000

Title:
  VMware: write error lost while transferring volume

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  I'm running the following command:

  cinder create --image-id a24f216f-9746-418e-97f9-aebd7fa0e25f 1

  The write side of the data transfer (a VMwareHTTPWriteFile object)
  returns an error in write() which I haven't debugged, yet. However,
  this error is never reported to the user, although it does show up in
  the logs. The effect is that the transfer sits in the 'downloading'
  state until the 7200 second timeout, when it reports the timeout.

  The reason is that the code which waits on transfer completion (in
  start_transfer) does:

  try:
  # Wait on the read and write events to signal their end
  read_event.wait()
  write_event.wait()
  except (timeout.Timeout, Exception) as exc:
  ...

  That is, it waits for the read thread to signal completion via
  read_event before checking write_event. However, because write_thread
  has died, read_thread is blocking and will never signal completion.
  You can demonstrate this by swapping the order. If you wait for write_event
  first it will fail immediately, which is what you want. However, that's
  not right either because now you're missing read errors.

  Ideally this code needs to be able to notice an error at either end
  and stop immediately.
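
  One generic way to express that, sketched with plain threads rather than the
  eventlet code actually used by the transfer (names and structure are
  illustrative): let each end report its outcome on a shared queue, so the
  waiter wakes up as soon as either side fails.

    import queue
    import threading

    def transfer(read_func, write_func):
        """Wait on both ends, but return as soon as either one raises."""
        results = queue.Queue()

        def run_end(name, func):
            try:
                func()
                results.put((name, None))
            except Exception as exc:
                results.put((name, exc))

        for name, func in (('read', read_func), ('write', write_func)):
            threading.Thread(target=run_end, args=(name, func), daemon=True).start()

        for _ in range(2):
            name, error = results.get()   # whichever end finishes (or dies) first
            if error is not None:
                return '%s side failed: %s' % (name, error)
        return 'transfer complete'

    def hung_read():
        threading.Event().wait()          # simulates a reader blocked forever

    def failing_write():
        raise IOError('simulated write error')

    print(transfer(hung_read, failing_write))  # reports the write failure immediately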

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1416000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416015] [NEW] Add 'user_id' to REST os-simple-tenant-usage output

2015-01-29 Thread Steve Meier
Public bug reported:

Hi,

Request to add 'user_id' to os-simple-tenant-usage REST output. Purpose
is to give tenants a bit more auditing capability as to which user
created and terminated instances. If there is not a better way to
accomplish this, I believe the patch below will do the trick.

Thanks,
-Steve

--- nova/api/openstack/compute/contrib/simple_tenant_usage.py   2015-01-29 
02:05:53.322814055 +
+++ nova/api/openstack/compute/contrib/simple_tenant_usage.py.patch 
2015-01-29 02:02:04.136577506 +
@@ -164,6 +164,7 @@ class SimpleTenantUsageController(object
 info['vcpus'] = instance.vcpus
 
 info['tenant_id'] = instance.project_id
+info['user_id'] = instance.user_id
 
 # NOTE(mriedem): We need to normalize the start/end times back
 # to timezone-naive so the response doesn't change after the

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416015

Title:
  Add 'user_id' to REST os-simple-tenant-usage output

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi,

  Request to add 'user_id' to os-simple-tenant-usage REST output.
  Purpose is to give tenants a bit more auditing capability as to which
  user created and terminated instances. If there is not a better way to
  accomplish this, I believe the patch below will do the trick.

  Thanks,
  -Steve

  --- nova/api/openstack/compute/contrib/simple_tenant_usage.py 2015-01-29 
02:05:53.322814055 +
  +++ nova/api/openstack/compute/contrib/simple_tenant_usage.py.patch   
2015-01-29 02:02:04.136577506 +
  @@ -164,6 +164,7 @@ class SimpleTenantUsageController(object
   info['vcpus'] = instance.vcpus
   
   info['tenant_id'] = instance.project_id
  +info['user_id'] = instance.user_id
   
   # NOTE(mriedem): We need to normalize the start/end times back
   # to timezone-naive so the response doesn't change after the
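
  With such a change, each server_usages entry would carry the owning user; an
  abbreviated, illustrative excerpt of the response (field values invented,
  other fields omitted):

    {"tenant_usages": [{"tenant_id": "...", "server_usages": [
        {"name": "vm1", "tenant_id": "...", "user_id": "8d2b...", "vcpus": 1, "hours": 2.5}
    ]}]}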

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415768] [NEW] the pci device assigned to instance is inconsistent with DB record when restarting nova-compute

2015-01-29 Thread Rui Chen
Public bug reported:

After restarting the nova-compute process, I found that the PCI device
assigned to the instance in libvirt.xml was different from the record in
the 'pci_devices' DB table.

Every time nova-compute was restarted, pci_tracker.allocations was reset
to an empty dict; it didn't contain the PCI devices that had already been
allocated to instances, so some PCI devices would be reallocated to the
instances and recorded into the DB, possibly inconsistent with
libvirt.xml.

In other words, nova-compute would reallocate the PCI devices for any
instance with a PCI request when restarting.

See details:
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/resource_tracker.py#n347

This is a probabilistic problem and cannot always be reproduced. If the
instance has a lot of PCI devices, it happens more often.

Faced this bug on kilo master.
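
A toy sketch of the idea that the tracker should re-seed its allocation map
from devices already bound to instances, instead of starting from an empty
dict on every restart (illustrative structures, not the actual PciDevTracker
code):

    def rebuild_allocations(pci_devices):
        """Re-seed allocations from devices already bound to instances."""
        allocations = {}
        for dev in pci_devices:  # e.g. rows from the pci_devices table
            if dev.get('instance_uuid'):
                allocations.setdefault(dev['instance_uuid'], []).append(dev['address'])
        return allocations

    devices = [
        {'address': '0000:06:00.1', 'instance_uuid': 'uuid-1'},
        {'address': '0000:06:00.2', 'instance_uuid': None},
    ]
    print(rebuild_allocations(devices))  # {'uuid-1': ['0000:06:00.1']}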

** Affects: nova
 Importance: Undecided
 Assignee: Rui Chen (kiwik-chenrui)
 Status: New


** Tags: compute

** Changed in: nova
 Assignee: (unassigned) => Rui Chen (kiwik-chenrui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415768

Title:
  the pci device assigned to instance is inconsistent with DB record
  when restarting nova-compute

Status in OpenStack Compute (Nova):
  New

Bug description:
  After restarting the nova-compute process, I found that the PCI device
  assigned to the instance in libvirt.xml was different from the record in
  the 'pci_devices' DB table.

  Every time nova-compute was restarted, pci_tracker.allocations was
  reset to an empty dict; it didn't contain the PCI devices that had
  already been allocated to instances, so some PCI devices would be
  reallocated to the instances and recorded into the DB, possibly
  inconsistent with libvirt.xml.

  In other words, nova-compute would reallocate the PCI devices for any
  instance with a PCI request when restarting.

  See details:
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/resource_tracker.py#n347

  This is a probabilistic problem and cannot always be reproduced. If the
  instance has a lot of PCI devices, it happens more often.

  Faced this bug on kilo master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415612] Re: table checkboxes are not vertically aligned properly

2015-01-29 Thread Rob Cresswell
This is a duplicate of 1415613 - not sure how that happened.

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
 Assignee: Wu Wenxiang (wu-wenxiang) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415612

Title:
  table checkboxes are not vertically aligned properly

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The checkboxes in the in table rows are not vertically aligned.

  Tested on Ubuntu 14.04, Firefox 35.0.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415612/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415775] [NEW] there is no ram hours in admin overview page

2015-01-29 Thread tinytmy
Public bug reported:

In the admin overview page, there are CPU-hours and disk-hours,
but no RAM-hours. I think RAM-hours are also needed.

** Affects: horizon
 Importance: Undecided
 Assignee: tinytmy (tangmeiyan77)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => tinytmy (tangmeiyan77)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415775

Title:
  there is no ram hours in admin overview page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the admin overview page, there are CPU-hours and disk-hours,
  but no RAM-hours. I think RAM-hours are also needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415778] [NEW] _local_delete results in inconsistent volume state in DB

2015-01-29 Thread Bin Zhou @ZTE
Public bug reported:

When the nova-compute service is down, deleting an instance will call
_local_delete in the nova-api service, which deletes the instance from the DB,
terminates the connection, detaches the volume and destroys the bdm.
However, we set connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}
when calling terminate connection, which results in an exception, leaving the
volume status still 'in-use' and attached to the instance, while the instance
and bdm are deleted in the Nova DB. All of this leaves the DB in an
inconsistent state: the bdm is deleted in Nova, but the volume is still in use
from Cinder's point of view.
Because the nova-compute service is down, we can't get the correct connector
of the host. If we record the connector in the bdm while attaching the volume,
the connector can be retrieved from the bdm during local_delete, which allows
terminate connection, detach volume and so on to succeed.
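
A toy sketch of the proposal (plain dicts standing in for the real BDM
object): save the connector used at attach time and prefer it over the fake
127.0.0.1 connector during a local delete.

    FAKE_CONNECTOR = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}

    def attach_volume(bdm, connector):
        bdm['connector'] = connector  # remember what the host reported
        # ... call Cinder initialize_connection(volume, connector) here ...

    def local_delete_connector(bdm):
        # Compute host is down, so fall back to what was recorded at attach time.
        return bdm.get('connector') or FAKE_CONNECTOR

    bdm = {}
    attach_volume(bdm, {'ip': '10.0.0.5', 'initiator': 'iqn.2015-01.org.example:host1'})
    print(local_delete_connector(bdm))  # the recorded connector, not the fake one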

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415778

Title:
  _local_delete results in inconsistent volume state in DB

Status in OpenStack Compute (Nova):
  New

Bug description:
  When the nova-compute service is down, deleting an instance will call
_local_delete in the nova-api service, which deletes the instance from the
DB, terminates the connection, detaches the volume and destroys the bdm.
  However, we set connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}
when calling terminate connection, which results in an exception, leaving the
volume status still 'in-use' and attached to the instance, while the instance
and bdm are deleted in the Nova DB. All of this leaves the DB in an
inconsistent state: the bdm is deleted in Nova, but the volume is still in
use from Cinder's point of view.
  Because the nova-compute service is down, we can't get the correct
connector of the host. If we record the connector in the bdm while attaching
the volume, the connector can be retrieved from the bdm during local_delete,
which allows terminate connection, detach volume and so on to succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415825] [NEW] no way to know the flavor id for non admin users

2015-01-29 Thread Masco Kaliyamoorthy
Public bug reported:

Non-admin users are not able to get the flavor ID in Horizon.

In the instance table filter, we can filter based on flavor ID, but there is
no way for non-admin users to find out the flavor ID.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415825

Title:
  no way to know the flavor id for non admin users

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Non-admin users are not able to get the flavor ID in Horizon.

  In the instance table filter, we can filter based on flavor ID, but there
  is no way for non-admin users to find out the flavor ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415089] Re: Integration tests - implement object store page

2015-01-29 Thread Martin Pavlásek
This should be a blueprint, not a bug. Sorry for making a mess...

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415089

Title:
  Integration tests - implement object store page

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Horizon contains a page that allows the user to manage object containers
(the Swift service), and there is no integration test for it so far.
  To begin with, I'd like to implement just the create and remove actions for a
new container. Once that is done, the scope of the tests can easily be extended.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415807] [NEW] Instance Count does not update Project Limits when launching new VM

2015-01-29 Thread Miroslav Suchý
Public bug reported:

When you are launching a new VM and you change the Flavour, the Project
Limits (the box on the right side) update on the fly. But if you update
Instance Count (which should affect Project Limits) then Project Limits
are not updated at all.

Version:
OpenStack Icehouse, RDO on RHEL7

Steps to Reproduce:
1. Open Dashboard -> Images -> click the Launch button on some image
2. Select flavour x1.large
3. Notice that the green bars in Project Limits grow, e.g. vcpus grow by 8.
4. Increase instance count to 4

Actual result:
  green bars in Project Limits do not change

Expected result:
   green bars in Project Limits should grow, e.g. vcpus should grow from 8 to 
32

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415807

Title:
  Instance Count does not update Project Limits when launching new
  VM

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When you are launching a new VM and you change the Flavour, the Project
  Limits (the box on the right side) update on the fly. But if you update
  Instance Count (which should affect Project Limits) then Project
  Limits are not updated at all.

  Version:
  OpenStack Icehouse, RDO on RHEL7

  Steps to Reproduce:
  1. Open Dashboard -> Images -> click the Launch button on some image
  2. Select flavour x1.large
  3. Notice that the green bars in Project Limits grow, e.g. vcpus grow by 8.
  4. Increase instance count to 4

  Actual result:
green bars in Project Limits do not change

  Expected result:
 green bars in Project Limits should grow, e.g. vcpus should grow from 8 
to 32

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405146] Re: cannot create instance if security groups are disabled

2015-01-29 Thread Numan Siddique
** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: Numan Siddique (numansiddique) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405146

Title:
  cannot create instance if security groups are disabled

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  2014.2.1 deployed by packstack on CentOS 7.

  I completely disabled security groups in both neutron (ml2 plugin) and
  nova:

  * /etc/neutron/plugin.ini
  enable_security_group = False

  * /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
  firewall_driver=neutron.agent.firewall.NoopFirewallDriver

  * /etc/nova/nova.conf
  security_group_api=neutron
  firewall_driver=nova.virt.firewall.NoopFirewallDriver

  [root@juno1 ~(keystone_admin)]# nova boot --flavor m1.small --image
  fedora-21 --nic net-id=5d37cd0b-7ad4-439e-a0f9-a4a430ff696b fedora-
  test

  From the nova-compute log instance creation fails with:

  2014-12-23 14:21:26.747 13009 ERROR nova.compute.manager [-] [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] Instance failed to spawn
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] Traceback (most recent call last):
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2243, in 
_build_resources
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] yield resources
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2113, in 
_build_and_run_ins
  tance
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] block_device_info=block_device_info)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2615, in 
spawn
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] admin_pass=admin_password)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 3096, in 
_create_image
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] instance, network_info, admin_pass, 
files, suffix)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2893, in 
_inject_data
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] network_info, 
libvirt_virt_type=CONF.libvirt.virt_type)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/netutils.py, line 87, in 
get_injected_network_t
  emplate
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] if not (network_info and template):
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/network/model.py, line 463, in __len__
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] return self._sync_wrapper(fn, *args, 
**kwargs)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/network/model.py, line 450, in 
_sync_wrapper
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] self.wait()
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/network/model.py, line 482, in wait
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] self[:] = self._gt.wait()
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 173, in wait
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] return self._exit_event.wait()
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 

[Yahoo-eng-team] [Bug 1415835] [NEW] VM boot is broken with providing port-id from Neutron

2015-01-29 Thread Valeriy Ponomaryov
Public bug reported:

Commit https://review.openstack.org/#/c/124059/ has introduced a bug
where Nova cannot boot a VM.

Steps to reproduce:

1) Create a port in Neutron
2) Boot a VM without a security group, but with the port:

nova --debug boot tt --image=25a15f92-6bbe-43d6-8da5-b015966a4bd1
--flavor=100 --nic port-id=01e02c22-6ea3-4fe6-8cfe-407a06b634a0

...

REQ: curl -i
'http://172.18.198.52:8774/v2/35b86f321c03497fbfa1c0fdf98a3426/servers'
-X POST -H "Accept: application/json" -H "Content-Type:
application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Project-
Id: demo" -H "X-Auth-Token:
{SHA1}696ac31a35c12934a64485459b0a95a48a9ab4dd" -d '{"server": {"name":
"tt", "imageRef": "25a15f92-6bbe-43d6-8da5-b015966a4bd1", "flavorRef":
"100", "max_count": 1, "min_count": 1, "networks": [{"port":
"01e02c22-6ea3-4fe6-8cfe-407a06b634a0"}]}}'

...

Trace as a result:

2015-01-29 12:14:03.338 ERROR nova.compute.manager [-] Instance failed network 
setup after 1 attempt(s)
2015-01-29 12:14:03.338 TRACE nova.compute.manager Traceback (most recent call 
last):
2015-01-29 12:14:03.338 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/compute/manager.py, line 1677, in _allocate_network_async
2015-01-29 12:14:03.338 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
2015-01-29 12:14:03.338 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 457, in 
allocate_for_instance
2015-01-29 12:14:03.338 TRACE nova.compute.manager raise 
exception.SecurityGroupNotAllowedTogetherWithPort()
2015-01-29 12:14:03.338 TRACE nova.compute.manager 
SecurityGroupNotAllowedTogetherWithPort: It's not allowed to specify security 
groups if port_id is provided on instance boot. Neutron should be used to 
configure security groups on port.
2015-01-29 12:14:03.338 TRACE nova.compute.manager
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py, line 
115, in wait
listener.cb(fileno)
  File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 
214, in main
result = function(*args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 1677, in 
_allocate_network_async
dhcp_options=dhcp_options)
  File /opt/stack/nova/nova/network/neutronv2/api.py, line 457, in 
allocate_for_instance
raise exception.SecurityGroupNotAllowedTogetherWithPort()
SecurityGroupNotAllowedTogetherWithPort: It's not allowed to specify security 
groups if port_id is provided on instance boot. Neutron should be used to 
configure security groups on port.
Removing descriptor: 19
2015-01-29 12:14:03.529 DEB

2015-01-29 12:14:03.710 INFO nova.virt.libvirt.driver [-] [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] Using config drive
2015-01-29 12:14:03.763 ERROR nova.compute.manager [-] [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] Instance failed to spawn
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] Traceback (most recent call last):
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800]   File 
/opt/stack/nova/nova/compute/manager.py, line 2303, in _build_resources
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] yield resources
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800]   File 
/opt/stack/nova/nova/compute/manager.py, line 2173, in _build_and_run_instance
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] flavor=flavor)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2309, in spawn
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] admin_pass=admin_password)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2783, in _create_image
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] content=files, extra_md=extra_md, 
network_info=network_info)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800]   File 
/opt/stack/nova/nova/api/metadata/base.py, line 159, in __init__
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] 
ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800]   File 
/opt/stack/nova/nova/api/ec2/ec2utils.py, line 152, in 
get_ip_info_for_instance_from_nw_info
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 
c4892579-e32b-44ca-b8c7-72f3e04c6800] fixed_ips = nw_info.fixed_ips()
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1415864] [NEW] heatclient traces in tests

2015-01-29 Thread Matthias Runge
Public bug reported:

...DEBUG:heatclient.common.http:curl
 -i -X GET -H 'X-Auth-Token: {SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 
'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-heatclient' 
http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
..DEBUG:heatclient.common.http:curl -i -X GET -H 'X-Auth-Token: 
{SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 'Content-Type: 
application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-heatclient' 
http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
.DEBUG:heatclient.common.http:curl -i -X GET -H 'X-Auth-Token: 
{SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 'Content-Type: 
application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-heatclient' 
http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21


In github checkout from 2015-01-29

This must have been introduced recently.
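
If the noise is heatclient's DEBUG logging leaking into the test run, one
common way to quiet it (a generic sketch, not necessarily how Horizon's test
settings handle it) is to raise the logger level for that namespace:

    import logging

    # e.g. in the test settings or a test setUp(): keep heatclient's curl
    # debug output out of the captured test logs.
    logging.getLogger('heatclient').setLevel(logging.WARNING)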

** Affects: horizon
 Importance: High
 Status: New

** Changed in: horizon
Milestone: None => kilo-2

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415864

Title:
  heatclient traces in tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
...DEBUG:heatclient.common.http:curl
 -i -X GET -H 'X-Auth-Token: {SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 
'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-heatclient' 
http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
  ..DEBUG:heatclient.common.http:curl -i -X GET -H 'X-Auth-Token: 
{SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 'Content-Type: 
application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-heatclient' 
http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
  .DEBUG:heatclient.common.http:curl -i -X GET -H 'X-Auth-Token: 
{SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 'Content-Type: 
application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-heatclient' 
http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
  


  In github checkout from 2015-01-29

  This must have been introduced recently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346494] Re: l3 agent gw port missing vlan tag for vlan provider network

2015-01-29 Thread James Polley
You're right, I misread the history.

I've been fighting other bugs, but I'm now up to confirming that
external_network_bridge= does work for the scenario rob had in mind.
I'll leave this invalid for now and re-open if anything comes up.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346494

Title:
  l3 agent gw port missing vlan tag for vlan provider network

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi, I have a provider network with my floating NAT range on it and a vlan 
segmentation id:
  neutron net-show ext-net
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| f8ea424f-fcbe-4d57-9f17-5c576bf56e60 |
  | name  | ext-net  |
  | provider:network_type | vlan |
  | provider:physical_network | datacentre   |
  | provider:segmentation_id  | 25   |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 391829e1-afc5-4280-9cd9-75f554315e82 |
  | tenant_id | e23f57e1d6c54398a68354adf522a36d |
  +---+--+

  My ovs agent config:

  cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 
  [DATABASE]
  sql_connection = mysql://.@localhost/ovs_neutron?charset=utf8

  reconnect_interval = 2

  [OVS]
  bridge_mappings = datacentre:br-ex
  network_vlan_ranges = datacentre

  tenant_network_type = gre
  tunnel_id_ranges = 1:1000
  enable_tunneling = True
  integration_bridge = br-int
  tunnel_bridge = br-tun
  local_ip = 10.10.16.151

  
  [AGENT]
  polling_interval = 2

  [SECURITYGROUP]
  firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  root@ci-overcloud-controller0-ydt5on7wojsb:~# 

  But, the thing is, the port created in ovs is missing the tag:
  Bridge br-ex
  Port qg-d8c27507-14
  Interface qg-d8c27507-14
  type: internal

  And we (As expected) are seeing tagged frames in tcpdump:
  19:37:16.107288 20:fd:f1:b6:f5:16 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 68: vlan 25, p 0, ethertype ARP, Request who-has 138.35.77.67 
tell 138.35.77.1, length 50

  rather than untagged frames for the vlan 25.

  Running ovs-vsctl set port qg-d8c27507-14 tag=25 makes things work,
  but the agent should do this, no?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399768] Re: migration for endpoint_filter fails due to foreign key constraint

2015-01-29 Thread Dave Chen
Hit the same issue with the following error message; will look into it
and check whether this is caused by the old .pyc files as ayong mentioned.


/usr/local/bin/keystone-manage db_sync --extension endpoint_filter

31365 TRACE keystone Traceback (most recent call last):
31365 TRACE keystone   File /usr/local/bin/keystone-manage, line 6, in 
module
31365 TRACE keystone exec(compile(open(__file__).read(), __file__, 'exec'))
31365 TRACE keystone   File /opt/stack/keystone/bin/keystone-manage, line 44, 
in module
31365 TRACE keystone cli.main(argv=sys.argv, config_files=config_files)
31365 TRACE keystone   File /opt/stack/keystone/keystone/cli.py, line 311, in 
main
31365 TRACE keystone CONF.command.cmd_class.main()
31365 TRACE keystone   File /opt/stack/keystone/keystone/cli.py, line 74, in 
main
31365 TRACE keystone migration_helpers.sync_database_to_version(extension, 
version)
31365 TRACE keystone   File 
/opt/stack/keystone/keystone/common/sql/migration_helpers.py, line 211, in 
sync_database_to_version
31365 TRACE keystone _sync_extension_repo(extension, version)
31365 TRACE keystone   File 
/opt/stack/keystone/keystone/common/sql/migration_helpers.py, line 199, in 
_sync_extension_repo
31365 TRACE keystone init_version=init_version)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/migration.py, line 
79, in db_sync
31365 TRACE keystone return versioning_api.upgrade(engine, repository, 
version)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py, line 186, 
in upgrade
31365 TRACE keystone return _migrate(url, repository, version, 
upgrade=True, err=err, **opts)
31365 TRACE keystone   File string, line 2, in _migrate
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py, 
line 160, in with_engine
31365 TRACE keystone return f(*a, **kw)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py, line 366, 
in _migrate
31365 TRACE keystone schema.runchange(ver, change, changeset.step)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/schema.py, line 93, 
in runchange
31365 TRACE keystone change.run(self.engine, step)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/script/py.py, line 
148, in run
31365 TRACE keystone script_func(engine)
31365 TRACE keystone   File 
/opt/stack/keystone/keystone/contrib/endpoint_filter/migrate_repo/versions/002_add_endpoint_groups.py,
 line 41, in upgrade
31365 TRACE keystone project_endpoint_group_table.create(migrate_engine, 
checkfirst=True)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py, line 707, in 
create
31365 TRACE keystone checkfirst=checkfirst)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1616, 
in _run_visitor
31365 TRACE keystone conn._run_visitor(visitorcallable, element, **kwargs)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1245, 
in _run_visitor
31365 TRACE keystone **kwargs).traverse_single(element)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py, line 120, 
in traverse_single
31365 TRACE keystone return meth(obj, **kw)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py, line 732, in 
visit_table
31365 TRACE keystone self.connection.execute(CreateTable(table))
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 729, 
in execute
31365 TRACE keystone return meth(self, multiparams, params)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py, line 69, in 
_execute_on_connection
31365 TRACE keystone return connection._execute_ddl(self, multiparams, 
params)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 783, 
in _execute_ddl
31365 TRACE keystone compiled
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 958, 
in _execute_context
31365 TRACE keystone context)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1155, 
in _handle_dbapi_exception
31365 TRACE keystone util.raise_from_cause(newraise, exc_info)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 199, 
in raise_from_cause
31365 TRACE keystone reraise(type(exception), exception, tb=exc_tb)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 951, 
in _execute_context
31365 TRACE keystone context)
31365 TRACE keystone   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
436, in 

[Yahoo-eng-team] [Bug 1415959] [NEW] Role cache details are actually using the assignment values

2015-01-29 Thread Henry Nash
Public bug reported:

When we split the role manager into a separate backend inside the
assignment component, we also gave the role manager its own cache config
values.  However, the actual code still uses the assignment cache
values.
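
A minimal sketch of the intent with oslo.config (group and option names are
illustrative): the role code should consult its own [role] caching options
rather than the [assignment] ones.

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.BoolOpt('caching', default=True),
                        cfg.IntOpt('cache_time', default=600)], group='role')
    CONF.register_opts([cfg.BoolOpt('caching', default=True),
                        cfg.IntOpt('cache_time', default=60)], group='assignment')

    def role_cache_time():
        # The bug: the role code was still reading CONF.assignment.cache_time.
        return CONF.role.cache_time

    print(role_cache_time())  # 600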

** Affects: keystone
 Importance: High
 Assignee: Henry Nash (henry-nash)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1415959

Title:
  Role cache details are actually using the assignment values

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When we split the role manager into a separate backend inside the
  assignment component, we also gave the role manager its own cache
  config values.  However, the actual code still uses the assignment
  cache values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1415959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415891] [NEW] neutron-vpnaas test cases are failing

2015-01-29 Thread Numan Siddique
Public bug reported:

neutron-vpnaas unit test cases are failing because of this commit.
https://github.com/openstack/neutron/commit/47ddd2cc03528d9bd66a18d8fcc74ae26aa83497

The test cases need to be updated accordingly.

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415891

Title:
  neutron-vpnaas  test cases are failing

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  neutron-vpnaas unit test cases are failing because of this commit.
  
https://github.com/openstack/neutron/commit/47ddd2cc03528d9bd66a18d8fcc74ae26aa83497

  The test cases need to be updated accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1415891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404801] Re: Unshelve instance not working if instance is booted from volume

2015-01-29 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
 Assignee: (unassigned) => Pranali Deore (pranali-deore)

** Changed in: nova/juno
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404801

Title:
  Unshelve instance not working if instance is booted from volume

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  In Progress

Bug description:
  If the instance is booted from volume, then shelving the instance sets the
  status to SHELVED_OFFLOADED and the instance files are deleted
  properly from the base path. When you then unshelve the instance, it
  fails on the conductor with the error "Unshelve attempted but the image_id
  is not provided", and the instance goes into error state.

  Steps to reproduce:
  ---

  1. Log in to Horizon, create a new volume.
  2. Create an Instance using newly created volume.
  3. Verify instance is in active state.
  $ source devstack/openrc demo demo
  $ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -  | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  4. Shelve the instance
  $ nova shelve instance-uuid

  5. Verify the status is SHELVED_OFFLOADED.
  $ nova list
  
+--+--+---++-+--+
  | ID   | Name | Status| Task 
State | Power State | Networks |
  
+--+--+---++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | SHELVED_OFFLOADED | - 
 | Shutdown| private=10.0.0.3 |
  
+--+--+---++-+--+

  6. Unshelve the instance.
  $ nova unshelve instance-uuid

  Following stack-trace logged in nova-conductor

  2014-12-19 02:55:59.634 ERROR nova.conductor.manager 
[req-a071fbc9-1c23-4e7a-8adf-7b3d0951aadf demo demo] [instance: 
dae3a13b-6aa8-4794-93cd-5ab7bf90f604] Unshelve attempted but the image_id is 
not provided
  2014-12-19 02:55:59.647 ERROR oslo.messaging.rpc.dispatcher 
[req-a071fbc9-1c23-4e7a-8adf-7b3d0951aadf demo demo] Exception during message 
handling: Error during unshelve instance dae3a13b-6aa8-4794-93cd-5ab7bf90f604: 
Unshelve attempted but the image_id is not provided
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
137, in _dispatch_and_reply
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
180, in _dispatch
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
126, in _do_dispatch
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/conductor/manager.py, line 727, in unshelve_instance
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher 
instance_id=instance.uuid, reason=reason)
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher 
UnshelveException: Error during unshelve instance 
dae3a13b-6aa8-4794-93cd-5ab7bf90f604: Unshelve attempted but the image_id is 
not provided
  2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher

  7. Instance goes into error state.
  $ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ERROR  | unshelving | 
Shutdown| private=10.0.0.3 

[Yahoo-eng-team] [Bug 1416004] [NEW] horizon missing static dir for angular cookies

2015-01-29 Thread Eric Peterson
Public bug reported:

I have found issues where the angular cookies js file is not being
found, unless I include this in my settings file:

import xstatic.pkg.angular_cookies
...

STATICFILES_DIRS.append(('horizon/lib/angular',
xstatic.main.XStatic(xstatic.pkg.angular_cookies).base_dir))
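
For completeness, a sketch of that workaround with the import it relies on
(assuming a Django settings module where STATICFILES_DIRS is already defined,
as above; xstatic module names as used in the snippet):

    import xstatic.main
    import xstatic.pkg.angular_cookies

    STATICFILES_DIRS.append(
        ('horizon/lib/angular',
         xstatic.main.XStatic(xstatic.pkg.angular_cookies).base_dir))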

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  I have found issues where the angular cookies js file is not being
  found, unless I include this in the static settings file:
  
  import xstatic.pkg.angular_cookies
  ...
  
  STATICFILES_DIRS.append(('horizon/lib/angular',
-  
xstatic.main.XStatic(xstatic.pkg.angular_cookies).base_dir))
+ xstatic.main.XStatic(xstatic.pkg.angular_cookies).base_dir))

** Description changed:

  I have found issues where the angular cookies js file is not being
- found, unless I include this in the static settings file:
+ found, unless I include this in my settings file:
  
  import xstatic.pkg.angular_cookies
  ...
  
  STATICFILES_DIRS.append(('horizon/lib/angular',
  xstatic.main.XStatic(xstatic.pkg.angular_cookies).base_dir))

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416004

Title:
  horizon missing static dir for angular cookies

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have found issues where the angular cookies js file is not being
  found, unless I include this in my settings file:

  import xstatic.pkg.angular_cookies
  ...

  STATICFILES_DIRS.append(('horizon/lib/angular',
  xstatic.main.XStatic(xstatic.pkg.angular_cookies).base_dir))

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362075] Re: Live migration fails on Hyper-V when boot from volume is used

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None => 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362075

Title:
  Live migration fails on Hyper-V when boot from volume is used

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Live migration fails on Hyper-V when boot from volume is used with
  CoW, as the target host tries to cache the root disk image in
  pre_live_migration, but in this case the image_ref is empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408024] Re: Wrong processing BadRequest while adding security group rule

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None => 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408024

Title:
  Wrong processing BadRequest while adding security group rule

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  There are a couple of ways to obtain a BadRequest from Neutron in the
  neutron driver of security groups. As an example, try to add the security
  group rule below:

  nova --debug secgroup-add-rule default icmp -1 255 0.0.0.0/0

  The attempt fails and Neutron raises BadRequest. But the neutron driver
  doesn't handle the exception with code 400 and re-raises it as
  NeutronClientException [1]. So this exception is only handled in [2],
  where the code of this exception isn't processed correctly (because
  Neutron and Nova have different names for the attribute carrying the
  exception code), so Nova throws an internal server error instead of BadRequest:

  ERROR (ClientException): The server has either erred or is incapable
  of performing the requested operation. (HTTP 500) (Request-ID: req-
  4775128d-5ef0-4863-b96f-56515c967fb4)

  [1] - 
https://github.com/openstack/nova/blob/master/nova/network/security_group/neutron_driver.py#L217
  [2] - 
https://github.com/openstack/nova/blob/master/nova/api/openstack/__init__.py#L97
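
  A generic sketch of the missing translation (the stand-in exception class and
  attribute names are illustrative; mismatched attribute names between the
  Neutron client and Nova are exactly the bug): map the client-side 400 to a
  400 for the API user instead of a 500.

    import webob.exc

    class NeutronClientException(Exception):
        """Stand-in for neutronclient's exception carrying status_code."""
        def __init__(self, message, status_code):
            super(NeutronClientException, self).__init__(message)
            self.status_code = status_code

    def translate(exc):
        if getattr(exc, 'status_code', None) == 400:
            return webob.exc.HTTPBadRequest(explanation=str(exc))
        return webob.exc.HTTPInternalServerError()

    print(translate(NeutronClientException('Invalid ICMP type', 400)).status_int)  # 400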

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380965] Re: Floating IPs don't have instance ids in Juno

2015-01-29 Thread Chuck Short
** Changed in: nova
Milestone: None => 2014.2.2

** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380965

Title:
  Floating IPs don't have instance ids in Juno

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  In Icehouse, when I associate a floating IP with an instance, the Nova
  API for listing floating IPs (/os-floating-ips) gives you the instance
  ID of the associated instance:

{floating_ips: [{instance_id: 82c2aff3-511b-
  4e9e-8353-79da86281dfd, ip: 10.1.151.1, fixed_ip: 10.10.0.4,
  id: 8113e71b-7194-447a-ad37-98182f7be80a, pool: ext_net}]}

  
  With latest rc for Juno, the instance_id always seem to be null:

{floating_ips: [{instance_id: null, ip: 10.96.201.0,
  fixed_ip: 10.10.0.8, id: 00ffd9a0-5afe-4221-8913-7e275da7f82a,
  pool: ext_net}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407024] Re: pep8 H302 failing on stable/juno with latest hacking

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None => 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407024

Title:
  pep8 H302 failing on stable/juno with latest hacking

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  mriedem@ubuntu:~/git/nova$ tox -r -e pep8
  pep8 recreate: /home/mriedem/git/nova/.tox/pep8
  pep8 installdeps: -r/home/mriedem/git/nova/requirements.txt, 
-r/home/mriedem/git/nova/test-requirements.txt
  pep8 develop-inst: /home/mriedem/git/nova
  pep8 runtests: PYTHONHASHSEED='0'
  pep8 runtests: commands[0] | flake8
  ./nova/tests/compute/test_resources.py:30:1: H302  import only modules.'from 
nova.tests.fake_instance import fake_instance_obj' does not import a module
  ./nova/tests/compute/test_rpcapi.py:31:1: H302  import only modules.'from 
nova.tests.fake_instance import fake_instance_obj' does not import a module
  ERROR: InvocationError: '/home/mriedem/git/nova/.tox/pep8/bin/flake8'
  
___
 summary 
___
  ERROR:   pep8: commands failed

  
  I'm not sure what changed; I'm assuming a new version of hacking was released
that hits problems in stable/juno.
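
  For reference, H302 is about importing modules rather than names; a sketch
  of the kind of change that clears it (not the exact patch that merged):

      # H302 violation: imports a function, not a module.
      # from nova.tests.fake_instance import fake_instance_obj

      # H302-clean: import the module and reference the function through it.
      from nova import context as nova_context
      from nova.tests import fake_instance

      ctxt = nova_context.get_admin_context()
      instance = fake_instance.fake_instance_obj(ctxt)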

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389102] Re: Instance error message truncation error in non-English locale

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389102

Title:
  Instance error message truncation error in non-English locale

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  1. Change OpenStack server to Russian locale, LANG=ru_RU.utf8

  2. Set firefox client browser locale to russian(ru)

  3. Trigger an operational failure that has a message that tries to get
  written to a Nova instance fault

  
  Stacktrace

  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 302, in 
decorated_function
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher pass
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, line 82, 
in __exit__
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 288, in 
decorated_function
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 330, in 
decorated_function
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/compute/utils.py, line 94, in 
add_instance_fault_from_exc
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
fault_obj.create()
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/objects/base.py, line 204, in wrapper
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher return 
fn(self, ctxt, *args, **kwargs)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/objects/instance_fault.py, line 75, in 
create
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
db_fault = db.instance_fault_create(context, values)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/db/api.py, line 1816, in 
instance_fault_create
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher return 
IMPL.instance_fault_create(context, values)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py, line 5423, in 
instance_fault_create
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
fault_ref.save()
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/models.py, line 62, in 
save
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
super(NovaBase, self).save(session=session)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/models.py, line 48, in 
save
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
session.flush()
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py, line 1818, in 
flush
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
self._flush(objects)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py, line 1936, in 
_flush
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
transaction.rollback(_capture_exception=True)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/util/langhelpers.py, line 58, 
in __exit__
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
compat.reraise(exc_type, exc_value, exc_tb)
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py, line 1900, in 
_flush
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher 
flush_context.execute()
  2014-10-30 05:55:34.933 18371 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/orm/unitofwork.py, line 372, in 
execute
  2014-10-30 05:55:34.933 18371 

[Yahoo-eng-team] [Bug 1407736] Re: python unit test jobs failing due to subunit log being too big

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407736

Title:
  python unit test jobs failing due to subunit log being too big

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New
Status in Database schema migration for SQLAlchemy:
  Fix Committed

Bug description:
  http://logs.openstack.org/60/144760/1/check/gate-nova-
  python26/6eb86b3/console.html#_2015-01-05_10_20_01_178

  2015-01-05 10:20:01.178 | + [[ 72860 -gt 5 ]]
  2015-01-05 10:20:01.178 | + echo
  2015-01-05 10:20:01.178 | 
  2015-01-05 10:20:01.178 | + echo 'sub_unit.log was > 50 MB of uncompressed data!!!'
  2015-01-05 10:20:01.178 | sub_unit.log was > 50 MB of uncompressed data!!!
  2015-01-05 10:20:01.179 | + echo 'Something is causing tests for this project 
to log significant amounts'
  2015-01-05 10:20:01.179 | Something is causing tests for this project to log 
significant amounts
  2015-01-05 10:20:01.179 | + echo 'of data. This may be writers to python 
logging, stdout, or stderr.'
  2015-01-05 10:20:01.179 | of data. This may be writers to python logging, 
stdout, or stderr.
  2015-01-05 10:20:01.179 | + echo 'Failing this test as a result'
  2015-01-05 10:20:01.179 | Failing this test as a result
  2015-01-05 10:20:01.179 | + echo

  Looks like the subunit log is around 73 MB; this could be due to the
  new pip, because I'm seeing a ton of these:

  DeprecationWarning: `require` parameter is deprecated. Use
  EntryPoint._load instead.

  The latest pip was released on 1/3/15:

  https://pypi.python.org/pypi/pip/6.0.6

  That's also when those warnings showed up:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGVwcmVjYXRpb25XYXJuaW5nOiBgcmVxdWlyZWAgcGFyYW1ldGVyIGlzIGRlcHJlY2F0ZWQuIFVzZSBFbnRyeVBvaW50Ll9sb2FkIGluc3RlYWQuXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgYW5kIHByb2plY3Q6XCJvcGVuc3RhY2svbm92YVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIwNDc2OTk3NTI3fQ==
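
  If the warnings are indeed the culprit, one way to keep them out of the
  captured test output is an explicit warnings filter; this is only a sketch of
  the idea, not necessarily the fix that landed:

      import warnings

      # Keep the pkg_resources deprecation chatter out of the subunit
      # stream instead of emitting it once per test.
      warnings.filterwarnings(
          'ignore',
          message=r'`require` parameter is deprecated.*',
          category=DeprecationWarning)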

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407685] Re: New eventlet library breaks nova-manage

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407685

Title:
  New eventlet library breaks nova-manage

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  This only affects stable/juno and stable/icehouse, which still use the
  deprecated eventlet.util module:

  ~# nova-manage service list
  2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] Could not load 
'file': cannot import name util
  2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] cannot import 
name util
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension Traceback (most 
recent call last):
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py,
 line 162, in _load_plugins
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension 
verify_requirements,
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py,
 line 178, in _load_one_plugin
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension plugin = 
ep.load(require=verify_requirements)
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py,
 line 2306, in load
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension return 
self._load()
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py,
 line 2309, in _load
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension module = 
__import__(self.module_name, fromlist=['__name__'], level=0)
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/image/download/file.py,
 line 23, in module
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension import 
nova.virt.libvirt.utils as lv_utils
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py,
 line 15, in module
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from 
nova.virt.libvirt import driver
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py,
 line 59, in module
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from eventlet 
import util as eventlet_util
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension ImportError: cannot 
import name util
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension
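
  The failing import itself is the problem; a sketch of how the unpatched
  socket module can be obtained without eventlet.util (an assumption about the
  shape of the fix, not the exact patch):

      from eventlet import patcher

      # eventlet.util was removed, but patcher.original() still returns the
      # non-monkey-patched stdlib module, which is what the libvirt driver
      # needs for its native event pipe.
      native_socket = patcher.original('socket')
      native_threading = patcher.original('threading')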

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400269] Re: unable to destroy bare-metal instance when flavor is deleted

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400269

Title:
  unable to destroy bare-metal instance when flavor is deleted

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  We get an error if the flavor is deleted before the instance is deleted.
  This is caused by the ironic driver in nova.
  We have the code below in the _cleanup_deploy function in nova/virt/ironic/driver.py:
      if flavor is None:
          # TODO(mrda): It would be better to use instance.get_flavor() here
          # but right now that doesn't include extra_specs which are required
          flavor = objects.Flavor.get_by_id(context,
                                            instance['instance_type_id'])
  So if the flavor is deleted before we destroy the bare-metal node, we get an
  unhandled FlavorNotFound exception.
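
  A minimal sketch of defensive handling in _cleanup_deploy (an illustration
  only; the actual fix may look different):

      from nova import exception
      from nova import objects

      def _lookup_flavor(context, instance, flavor=None):
          if flavor is not None:
              return flavor
          try:
              return objects.Flavor.get_by_id(context,
                                              instance['instance_type_id'])
          except exception.FlavorNotFound:
              # The flavor was deleted after the instance was built; carry
              # on with the teardown instead of failing the destroy.
              return None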

  
  At the same time, I found that the flavor is used to clean the deploy
  ramdisk/kernel in driver_info, which was planned for removal in Kilo; are we
  ready for that?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388386] Re: libvirt: boot instance with utf-8 name results in UnicodeDecodeError

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388386

Title:
  libvirt: boot instance with utf-8 name results in UnicodeDecodeError

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  With the libvirt driver and Juno 2014.2 code, try to boot a server via
  Horizon with name ABC一丁七ÇàâアイウДфэبتثअइउ€¥噂ソ十豹竹敷 results in:

  http://paste.openstack.org/show/128060/

  This is new in Juno but was a latent issue since Icehouse, the Juno
  change was:

  
https://github.com/openstack/nova/commit/60c90f73261efb8c73ecc02152307c81265cab13

  The err variable is an i18n Message object, and when we try to put
  domain.XMLDesc(0) into the unicode _LE message string it blows up in
  oslo.i18n because the encodings don't match.

  The fix is to wrap domain.XMLDesc(0) in
  oslo.utils.encodeutils.safe_decode.
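
  A sketch of that wrapping, assuming the oslo.utils import path of the time:

      from oslo.utils import encodeutils

      def safe_domain_xml(domain):
          # domain.XMLDesc(0) can contain non-ASCII bytes (e.g. the utf-8
          # instance name); decode it safely before interpolating it into
          # the translated _LE() error message.
          return encodeutils.safe_decode(domain.XMLDesc(0))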

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380965] Re: Floating IPs don't have instance ids in Juno

2015-01-29 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Fix Released = Fix Committed

** Changed in: nova/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380965

Title:
  Floating IPs don't have instance ids in Juno

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Committed

Bug description:
  In Icehouse, when I associate a floating IP with an instance, the Nova
  API for listing floating IPs (/os-floating-ips) gives you the instance
  ID of the associated instance:

    {"floating_ips": [{"instance_id": "82c2aff3-511b-4e9e-8353-79da86281dfd",
  "ip": "10.1.151.1", "fixed_ip": "10.10.0.4",
  "id": "8113e71b-7194-447a-ad37-98182f7be80a", "pool": "ext_net"}]}

  
  With the latest rc for Juno, the instance_id always seems to be null:

    {"floating_ips": [{"instance_id": null, "ip": "10.96.201.0",
  "fixed_ip": "10.10.0.8", "id": "00ffd9a0-5afe-4221-8913-7e275da7f82a",
  "pool": "ext_net"}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294939] Re: Add a fixed IP to an instance failed

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294939

Title:
  Add a fixed IP to an instance failed

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  +--+---+-+
  | ID   | Label | CIDR|
  +--+---+-+
  | be95de64-a2aa-42de-a522-37802cdbe133 | vmnet | 10.0.0.0/24 |
  | 0fd904f5-1870-4066-8213-94038b49be2e | abc   | 10.1.0.0/24 |
  | 7cd88ead-fd42-4441-9182-72b3164c108d | abd   | 10.2.0.0/24 |
  +--+---+-+

  nova  add-fixed-ip test15 0fd904f5-1870-4066-8213-94038b49be2e

  failed with following logs

  
  2014-03-19 03:29:30.546 7822 ERROR nova.openstack.common.rpc.amqp 
[req-fd087223-3646-4fed-b0f6-5a5cf50828eb d6779a827003465db2d3c52fe135d926 
45210fba73d24dd681dc5c292c6b1e7f] Exception during message handling
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp **args)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/network/manager.py, line 772, in 
add_fixed_ip_to_instance
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
self._allocate_fixed_ips(context, instance_id, host, [network])
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/network/manager.py, line 214, in 
_allocate_fixed_ips
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
vpn=vpn, address=address)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/network/manager.py, line 881, in 
allocate_fixed_ip
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
self.quotas.rollback(context, reservations)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/network/manager.py, line 859, in 
allocate_fixed_ip
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
'virtual_interface_id': vif['id']}
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp TypeError: 
'NoneType' object is unsubscriptable
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361180] Re: nova service disable/enable returns 500 on cell environment

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

** Changed in: nova/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361180

Title:
  nova service disable/enable returns 500 on cell environment

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  nova service disable/enable returns 500 in a cells environment. The actual
  enable/disable appears to be processed correctly.

  It also throws the following error in the nova-api service:
  ValueError: invalid literal for int() with base 10: 'region!child@5'
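
  The service id in a cells deployment is a cell-qualified string, so a plain
  int() conversion fails; a generic sketch of splitting it first (illustration
  only, the helper name is made up):

      def split_cell_service_id(service_id):
          # 'region!child@5' -> ('region!child', 5); plain ids pass through.
          cell_path, sep, local_id = str(service_id).rpartition('@')
          if sep:
              return cell_path, int(local_id)
          return None, int(service_id)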

  How to reproduce:

  $ nova --os-username admin service-list

  Output:
  
++--+-+--+-+---++-+
  | Id | Binary   | Host| Zone | Status 
 | State | Updated_at | Disabled Reason |
  
++--+-+--+-+---++-+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:17:36.00 | -   |
  | region!child@3 | nova-cells   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:17:29.00 | -   |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:17:30.00 | -   |
  | region!child@5 | nova-compute | region!child@ubuntu | nova | 
enabled | up| 2014-08-18T06:17:31.00 | -   |
  | region@1   | nova-cells   | region@ubuntu   | internal | 
enabled | up| 2014-08-18T06:17:29.00 | -   |
  | region@2   | nova-cert| region@ubuntu   | internal | 
enabled | down  | 2014-08-18T06:08:28.00 | -   |
  | region@3   | nova-consoleauth | region@ubuntu   | internal | 
enabled | up| 2014-08-18T06:17:37.00 | -   |
  
++--+-+--+-+---++-+

  $ nova --os-username admin service-disable 'region!child@ubuntu' nova-
  compute

  The above command gives the following error:
  ERROR (ClientException): Unknown Error (HTTP 500)

  $ nova --os-username admin service-list

  Output:
  
++--+-+--+--+---++-+
  | Id | Binary   | Host| Zone | Status 
  | State | Updated_at | Disabled Reason |
  
++--+-+--+--+---++-+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | 
enabled  | up| 2014-08-18T06:19:06.00 | -   |
  | region!child@3 | nova-cells   | region!child@ubuntu | internal | 
enabled  | up| 2014-08-18T06:19:09.00 | -   |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | 
enabled  | up| 2014-08-18T06:19:10.00 | -   |
  | region!child@5 | nova-compute | region!child@ubuntu | nova | 
disabled | up| 2014-08-18T06:19:11.00 | -   |
  | region@1   | nova-cells   | region@ubuntu   | internal | 
enabled  | up| 2014-08-18T06:19:09.00 | -   |
  | region@2   | nova-cert| region@ubuntu   | internal | 
enabled  | down  | 2014-08-18T06:08:28.00 | -   |
  | region@3   | nova-consoleauth | region@ubuntu   | internal | 
enabled  | up| 2014-08-18T06:19:07.00 | -   |
  
++--+-+--+--+---++-+

  $ nova --os-username admin service-enable 'region!child@ubuntu' nova-compute
  The above command gives following error:
  ERROR (ClientException): Unknown Error (HTTP 500)

  $ nova --os-username admin service-list
  
++--+-+--+-+---++-+
  | Id | Binary   | Host| Zone | Status 
 | State | Updated_at | Disabled Reason |
  
++--+-+--+-+---++-+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:52:37.00 | -   |
  | region!child@3 | 

[Yahoo-eng-team] [Bug 1383617] Re: SAWarning contradiction IN-predicate on instances.uuid

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

** Changed in: nova/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383617

Title:
  SAWarning contradiction IN-predicate on instances.uuid

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  /usr/lib64/python2.7/site-
  packages/sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-
  predicate on instances.uuid was invoked with an empty sequence. This
  results in a contradiction, which nonetheless can be expensive to
  evaluate.  Consider alternative strategies for improved performance.

  The above warning is reported in the n-cond (or n-cpu) log when using
  SQLAlchemy 0.9.8.

  The system ends up doing a query in vain.

  The warning is generated by this part of the code:
  https://github.com/openstack/nova/blob/9fd059b938a2acca8bf5d58989c78d834fbb0ad8/nova/compute/manager.py#L696
  driver_uuids can be an empty list, in which case the SQL query is not
  necessary.
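
  A minimal sketch of the guard that avoids the pointless query (illustrative;
  the landed change may be shaped differently):

      from nova import objects

      def instances_for_driver_uuids(context, driver_uuids):
          if not driver_uuids:
              # Nothing reported by the driver: an IN () query would only
              # trigger the SAWarning and return an empty result anyway.
              return []
          return objects.InstanceList.get_by_filters(
              context, {'uuid': driver_uuids})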

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1383617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383305] Re: VMware: booting compute node with no hosts in cluster causes an exception

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383305

Title:
  VMware: booting compute node with no hosts in cluster causes an
  exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  2014-10-20 06:23:38.107 CRITICAL nova [-] AttributeError: 'Text'
  object has no attribute 'ManagedObjectReference'

  2014-10-20 06:23:38.107 TRACE nova Traceback (most recent call last):
  2014-10-20 06:23:38.107 TRACE nova   File /usr/local/bin/nova-compute, line 
10, in module
  2014-10-20 06:23:38.107 TRACE nova sys.exit(main())
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/cmd/compute.py, line 72, in main
  2014-10-20 06:23:38.107 TRACE nova db_allowed=CONF.conductor.use_local)
  2014-10-20 06:23:38.107 TRACE nova   File /opt/stack/nova/nova/service.py, 
line 275, in create
  2014-10-20 06:23:38.107 TRACE nova db_allowed=db_allowed)
  2014-10-20 06:23:38.107 TRACE nova   File /opt/stack/nova/nova/service.py, 
line 148, in __init__
  2014-10-20 06:23:38.107 TRACE nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/compute/manager.py, line 633, in __init__
  2014-10-20 06:23:38.107 TRACE nova self.driver = 
driver.load_compute_driver(self.virtapi, compute_driver)
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/virt/driver.py, line 1385, in load_compute_driver
  2014-10-20 06:23:38.107 TRACE nova virtapi)
  2014-10-20 06:23:38.107 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/oslo/utils/importutils.py, line 50, in 
import_object_ns
  2014-10-20 06:23:38.107 TRACE nova return 
import_class(import_value)(*args, **kwargs)
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 186, in __init__
  2014-10-20 06:23:38.107 TRACE nova self._update_resources()
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 381, in _update_resources
  2014-10-20 06:23:38.107 TRACE nova 
self.dict_mors.get(node)['cluster_mor'])
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/virt/vmwareapi/host.py, line 50, in __init__
  2014-10-20 06:23:38.107 TRACE nova self.update_status()
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/virt/vmwareapi/host.py, line 63, in update_status
  2014-10-20 06:23:38.107 TRACE nova self._cluster)
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/virt/vmwareapi/host.py, line 34, in 
_get_ds_capacity_and_freespace
  2014-10-20 06:23:38.107 TRACE nova ds = ds_util.get_datastore(session, 
cluster)
  2014-10-20 06:23:38.107 TRACE nova   File 
/opt/stack/nova/nova/virt/vmwareapi/ds_util.py, line 254, in get_datastore
  2014-10-20 06:23:38.107 TRACE nova data_store_mors = 
datastore_ret.ManagedObjectReference
  2014-10-20 06:23:38.107 TRACE nova AttributeError: 'Text' object has no 
attribute 'ManagedObjectReference'
  2014-10-20 06:23:38.107 TRACE nova 
  n-cpu failed to start

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1383305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415935] [NEW] encode_headers which has NoneType value would raise exception

2015-01-29 Thread hougangliu
Public bug reported:

refer to https://review.openstack.org/#/c/129289/

If you're using Nova but don't have Nova configured with
auth_strategy='keystone' (see nova.image.glance._create_glance_client for
details), then when you resize a VM (or perform another operation that needs to
call glance), it may call glanceclient and the code may go through logic like:
/usr/lib/python2.7/site-packages/glanceclient/v1/client.py(36)__init__()
- self.http_client = http.HTTPClient(endpoint, *args, **kwargs)
 /usr/lib/python2.7/site-packages/glanceclient/common/http.py(57)__init__()

and in 
/usr/lib/python2.7/site-packages/glanceclient/common/http.py(57)__init__()
 self.identity_headers = kwargs.get('identity_headers')  
 self.auth_token = kwargs.get('token')
and  the self.identity_headers may be like:
{'X-Service-Catalog': '[]', 'X-Auth-Token': None, 'X-Roles': u'admin', 
'X-Tenant-Id': None, 'X-User-Id': None, 'X-Identity-Status': 'Confirmed'}

and for https://review.openstack.org/#/c/136326/,
for the code:
if self.identity_headers:
for k, v in six.iteritems(self.identity_headers):
headers.setdefault(k, v)

headers would be like: {'X-Service-Catalog': '[]', 'X-Auth-Token': None,
'X-Roles': u'admin', 'X-Tenant-Id': None, 'X-User-Id': None, 'X
-Identity-Status': 'Confirmed', }

so headers = self.encode_headers(headers) in
/usr/lib/python2.7/site-packages/glanceclient/common/http.py(1957)__request()
would raise a TypeError (NoneType can't be encoded), and thus the resize (or
other operation that needs to call glance) would fail.
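
A small sketch of stripping the None-valued identity headers before they reach
encode_headers() (an assumption about one possible fix, not the actual
glanceclient patch):

    import six

    def merge_identity_headers(headers, identity_headers):
        for k, v in six.iteritems(identity_headers or {}):
            if v is not None:   # None values make encode_headers() blow up
                headers.setdefault(k, v)
        return headers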

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1415935

Title:
  encode_headers which has NoneType value would raise exception

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  refer to https://review.openstack.org/#/c/129289/

  If you're using Nova but don't have Nova configured with
  auth_strategy='keystone' (see nova.image.glance._create_glance_client for
  details), then when you resize a VM (or perform another operation that needs
  to call glance), it may call glanceclient and the code may go through logic
  like:
  /usr/lib/python2.7/site-packages/glanceclient/v1/client.py(36)__init__()
  - self.http_client = http.HTTPClient(endpoint, *args, **kwargs)
   /usr/lib/python2.7/site-packages/glanceclient/common/http.py(57)__init__()

  and in 
/usr/lib/python2.7/site-packages/glanceclient/common/http.py(57)__init__()
   self.identity_headers = kwargs.get('identity_headers')  
   self.auth_token = kwargs.get('token')
  and  the self.identity_headers may be like:
  {'X-Service-Catalog': '[]', 'X-Auth-Token': None, 'X-Roles': u'admin', 
'X-Tenant-Id': None, 'X-User-Id': None, 'X-Identity-Status': 'Confirmed'}

  and for https://review.openstack.org/#/c/136326/,
  for the code:
  if self.identity_headers:
  for k, v in six.iteritems(self.identity_headers):
  headers.setdefault(k, v)

  headers would be like: {'X-Service-Catalog': '[]', 'X-Auth-Token':
  None, 'X-Roles': u'admin', 'X-Tenant-Id': None, 'X-User-Id': None, 'X
  -Identity-Status': 'Confirmed', }

  so headers = self.encode_headers(headers) in
  /usr/lib/python2.7/site-packages/glanceclient/common/http.py(1957)__request()
  would raise a TypeError (NoneType can't be encoded), and thus the resize (or
  other operation that needs to call glance) would fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1415935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331885] Re: Glance service doesn't support ipv6

2015-01-29 Thread Chuck Short
** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Changed in: glance/juno
   Status: New = Fix Committed

** Changed in: glance/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1331885

Title:
  Glance service doesn't support ipv6

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance juno series:
  Fix Committed
Status in glance package in Ubuntu:
  Fix Released

Bug description:
  OS: ubuntu
  Version: icehouse

  Glance can't listen on an IPv6 address if it is set up on a separate node
  from the nova controller, and reports the error below:
   log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1951
  2014-06-18 16:53:55.567 21489 CRITICAL glance [-] gaierror: (-5, 'No address 
associated with hostname')
  2014-06-18 16:53:55.567 21489 TRACE glance Traceback (most recent call last):
  2014-06-18 16:53:55.567 21489 TRACE glance   File /usr/bin/glance-api, line 
10, in module
  2014-06-18 16:53:55.567 21489 TRACE glance sys.exit(main())
  2014-06-18 16:53:55.567 21489 TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/cmd/api.py, line 63, in main
  2014-06-18 16:53:55.567 21489 TRACE glance 
server.start(config.load_paste_app('glance-api'), default_port=9292)
  2014-06-18 16:53:55.567 21489 TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 233, in start
  2014-06-18 16:53:55.567 21489 TRACE glance self.sock = 
get_socket(default_port)
  2014-06-18 16:53:55.567 21489 TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 123, in 
get_socket
  2014-06-18 16:53:55.567 21489 TRACE glance socket.SOCK_STREAM)
  2014-06-18 16:53:55.567 21489 TRACE glance   File 
/usr/lib/python2.7/dist-packages/eventlet/support/greendns.py, line 174, in 
getaddrinfo
  2014-06-18 16:53:55.567 21489 TRACE glance rrset = resolve(host)
  2014-06-18 16:53:55.567 21489 TRACE glance   File 
/usr/lib/python2.7/dist-packages/eventlet/support/greendns.py, line 133, in 
resolve
  2014-06-18 16:53:55.567 21489 TRACE glance raise socket.gaierror(error)
  2014-06-18 16:53:55.567 21489 TRACE glance gaierror: (-5, 'No address 
associated with hostname')
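
  For reference, a sketch of address-family-agnostic socket creation (an
  illustration of the idea only; the actual glance fix may differ):

      import socket

      def get_listen_socket(bind_host, bind_port):
          # getaddrinfo with AF_UNSPEC resolves both IPv4 and IPv6 bind
          # addresses and tells us which family to open the socket with.
          info = socket.getaddrinfo(bind_host, bind_port,
                                    socket.AF_UNSPEC, socket.SOCK_STREAM)[0]
          family, socktype, _proto, _canonname, sockaddr = info
          sock = socket.socket(family, socktype)
          sock.bind(sockaddr)
          sock.listen(128)
          return sock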

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1331885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383973] Re: image data cannot be removed when deleting a saving status image

2015-01-29 Thread Chuck Short
*** This bug is a duplicate of bug 1398830 ***
https://bugs.launchpad.net/bugs/1398830

** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Changed in: glance/juno
   Status: New = Fix Committed

** Changed in: glance/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1383973

Title:
  image data cannot be removed when deleting a saving status image

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance juno series:
  Fix Committed

Bug description:
  The image data in /var/lib/glance/images/ is not removed when I
  delete an image whose status is saving.

  1. Create an image
   glance image-create --name image-v1 --disk-format raw --container-format 
bare --file xx.image --is-public true

  2. list the created image, the status is saving
  [root@node2 ~]# glance image-list
  
+--+--+-+--+--++
  | ID   | Name | Disk Format | Container 
Format | Size | Status |
  
+--+--+-+--+--++
  | 00ec3d8d-41a5-4f7c-9448-694099a39bcf | image-v1 | raw | bare
 | 18   | saving |
  
+--+--+-+--+--++

  3. delete the created image
  glance image-delete image-v1

  4. the image has been deleted but the image data still exists
  [root@node2 ~]# glance image-list
  ++--+-+--+--++
  | ID | Name | Disk Format | Container Format | Size | Status |
  ++--+-+--+--++
  ++--+-+--+--++

  [root@node2 ~]# ls /var/lib/glance/images
  00ec3d8d-41a5-4f7c-9448-694099a39bcf

  This problem exists in both v1 and v2 API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1383973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405044] Re: [GPFS] nova volume-attach a gpfs volume with an error log in nova-compute

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405044

Title:
  [GPFS] nova volume-attach a gpfs volume with an error log in nova-
  compute

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  When I attached a GPFS volume to an instance, the volume was successfully
  attached, but there were some error logs in the nova-compute log file, as
  shown below:

  2014-12-22 21:52:10.863 13396 ERROR nova.openstack.common.threadgroup [-] 
Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 
/gpfs/volume-98520c4e-935d-43d8-9c8d-00fcb54bb335
  Exit code: 1
  Stdout: u''
  Stderr: u'BLKGETSIZE64: Inappropriate ioctl for device\n'
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py, line 
125, in wait
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py, line 
47, in wait
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line 173, in wait
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/event.py, line 121, in wait
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 293, in switch
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line 212, in main
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/service.py, line 490, 
in run_service
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/service.py, line 181, in start
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1159, in 
pre_start_hook
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 6037, in 
update_available_resource
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
nodenames = set(self.driver.get_available_nodes())
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/driver.py, line 1237, in 
get_available_nodes
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
stats = self.get_host_stats(refresh=refresh)
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 5794, in 
get_host_stats
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self.host_state.get_host_stats(refresh=refresh)
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 473, in 
host_state
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self._host_state = HostState(self)
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 6360, in 
__init__
  2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self.update_status()
 

[Yahoo-eng-team] [Bug 1381414] Re: Unit test failure AssertionError: Expected to be called once. Called 2 times. in test_get_port_vnic_info_3

2015-01-29 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381414

Title:
  Unit test failure AssertionError: Expected to be called once. Called
  2 times. in test_get_port_vnic_info_3

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  This looks to be due to tests test_get_port_vnic_info_2 and 3 sharing
  some code and is easily reproduced by running these two tests alone
  with no concurrency.

  ./run_tests.sh --concurrency 1 test_get_port_vnic_info_2
  test_get_port_vnic_info_3

  The above always results in:

  Traceback (most recent call last):
File /home/hans/nova/nova/tests/network/test_neutronv2.py, line 2615, in 
test_get_port_vnic_info_3
  self._test_get_port_vnic_info()
File /home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py, 
line 1201, in patched
  return func(*args, **keywargs)
File /home/hans/nova/nova/tests/network/test_neutronv2.py, line 2607, in 
_test_get_port_vnic_info
  fields=['binding:vnic_type', 'network_id'])
File /home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py, 
line 845, in assert_called_once_with
  raise AssertionError(msg)
  AssertionError: Expected to be called once. Called 2 times.
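
  For illustration, the shared mock keeps its call history across the two
  tests; resetting it between runs restores the expected count (a sketch of the
  failure mode, not the merged fix):

      import mock

      client = mock.Mock()
      client.show_port('p1', fields=['binding:vnic_type', 'network_id'])
      client.reset_mock()   # a second test reusing the mock starts clean
      client.show_port('p1', fields=['binding:vnic_type', 'network_id'])
      client.show_port.assert_called_once_with(
          'p1', fields=['binding:vnic_type', 'network_id'])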

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415925] [NEW] Horizon doesn't timeout promptly when services can't be reached

2015-01-29 Thread Doug Fish
Public bug reported:

If network connectivity is lost between Horizon and the other services
it can take quite a while (between 1 minute and forever) before
Horizon returns with any timeout message.  Each of the multiple API
calls it takes to render a page has to timeout serially before the page
is returned.

I think the fix should involve passing a timeout value to the python
clients.   I've surveyed a few and they support it (I assume they all
do).  In an environment where this may be a problem a service_timeout
value can be configured and passed in to the client.
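
A sketch of what passing such a timeout could look like for one client; the
setting name and credentials below are made up, and each python-*client should
be checked for the exact keyword it supports:

    from novaclient import client as nova_client

    # timeout (seconds) bounds every API call so a dead endpoint fails fast
    # instead of hanging the whole page render.
    nova = nova_client.Client('2', 'demo', 'secret', 'demo',
                              auth_url='http://keystone:5000/v2.0',
                              timeout=30)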

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415925

Title:
  Horizon doesn't timeout promptly when services can't be reached

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If network connectivity is lost between Horizon and the other services
  it can take quite a while (between 1 minute and forever) before
  Horizon returns with any timeout message.  Each of the multiple API
  calls it takes to render a page has to timeout serially before the
  page is returned.

  I think the fix should involve passing a timeout value to the python
  clients.   I've surveyed a few and they support it (I assume they all
  do).  In an environment where this may be a problem a service_timeout
  value can be configured and passed in to the client.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404945] Re: Default gateway can vanish from HA routers, destroying external connectivity for all VMs on that network

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404945

Title:
  Default gateway can vanish from HA routers, destroying external
  connectivity for all VMs on that network

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  The default gateway can vanish from the HA router namespace after
  certain operations.

  My setup:
  Fedora 20
  keepalived-1.2.13-1.fc20.x86_64
  Network manager turned off.

  I can reproduce this reliably on my system, but cannot reproduce this
  on a RHEL 7 system. Even on that system, the issue manifests on its
  own, I just can't reproduce it at will.

  How I reproduce on my system:
  Create an HA router
  Set it as a gateway
  Go to the master instance
  Observe that the namespace has a default gateway
  Add an internal interface (Make sure that the IP is 'lower' than the IP of 
the external interface, this is explained below)
  Default gateway will no longer exist

  Cause:
  keepalived.conf has two sections for VIPs: virtual_ipaddress, and 
virtual_ipaddress_excluded. The difference is that any VIPs that go in the 
first section will be propagated on the wire, and any VIPs in the excluded 
section do not. Traditional configuration of keepalived places one VIP in the 
normal section, henceforth known as the 'primary VIP', and all other VIPs in 
the excluded section. Currently the keepalived manager does this by sorting the 
VIPs (Internal IPs, external SNAT IP, and all floating IPs), placing the lowest 
one (By string comparison) as the primary, and the rest of the VIPs in the 
excluded section: 
  
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/keepalived.py#L155

  That code is run, and keepalived.conf is rebuilt whenever a router is
  updated. This means that the primary VIP can change on router updates.
  As it turns out, after a conversation with a keepalived developer,
  keepalived assumes that the order does not change (This is possibly a
  keepalived bug, depending on your view on life, the ordering of the
  stars when keepalived is executed and the wind speed in the Falkland
  Islands in the past leap year). On my system, with the currently
  installed keepalived version, whenever the primary VIP changes, the
  default gateway (Present in the virtual_routes section of
  keepalived.conf) is violently removed.

  Possible solution:
  Make sure that the primary VIP never changes. For example: Fabricate an IP 
per HA router cluster (Derived from the VRID?), add it as a VIP on the HA 
device, configure it as the primary VIP. I played around with a hacky variation 
of this solution and I could no longer reproduce the issue.
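
  A tiny sketch of the 'fabricated primary VIP' idea (purely illustrative; the
  address scheme is an assumption):

      def stable_primary_vip(vrid):
          # Derive a per-router address from the VRRP group id so the first
          # entry of virtual_ipaddress never changes when floating IPs or
          # internal interfaces are added or removed.
          return '169.254.0.%d/24' % vrid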

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415971] [NEW] Moving '_make_firewall_dict_with_rules' to firewall_db.py from fwaas_plugin.py

2015-01-29 Thread Trinath Somanchi
Public bug reported:

The helper function _make_firewall_dict_with_rules currently lives in
fwaas_plugin.py and is being moved to firewall_db.py.

The above change is to have all the dict helper functions in one place.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415971

Title:
  Moving '_make_firewall_dict_with_rules' to firewall_db.py  from
  fwaas_plugin.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The helper function _make_firewall_dict_with_rules currently lives in
  fwaas_plugin.py and is being moved to firewall_db.py.

  The above change is to have all the dict helper functions in one
  place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1415971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402458] Re: apic driver doesn't bind services' ports

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1402458

Title:
  apic driver doesn't bind services' ports

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  The APIC mechanism driver shouldn't filter port binding by owner; this
  causes the services' ports to be ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1402458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394043] Re: KeyError: 'gw_port_host' seen for DVR router removal

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394043

Title:
  KeyError: 'gw_port_host' seen for DVR router removal

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron juno series:
  Fix Committed

Bug description:
  In some multi-node setups, a qrouter namespace might be hosted on a
  node where only a dhcp port is hosted (no VMs, no SNAT).

  When the router is removed from the db, the host with only the qrouter
  and dhcp namespace will have the qrouter namespace remain.  Other
  hosts with the same qrouter will remove the namespace.  The following
  KeyError is seen on the host with the remaining namespace -

  2014-11-18 17:18:43.334 ERROR neutron.agent.l3_agent [-] 'gw_port_host'
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent 
call last):
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/common/utils.py, line 341, in call
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent return func(*args, 
**kwargs)
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in 
external_gateway_removed
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
ri.router['gw_port_host'] == self.host):
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent KeyError: 'gw_port_host'
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py, line 
82, in _spawn_n_impl
  func(*args, **kwargs)
File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1842, in 
_process_router_update
  self._process_router_if_compatible(router)
File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1817, in 
_process_router_if_compatible
  self.process_router(ri)
File /opt/stack/neutron/neutron/common/utils.py, line 344, in call
  self.logger(e)
File /opt/stack/neutron/neutron/openstack/common/excutils.py, line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/neutron/neutron/common/utils.py, line 341, in call
  return func(*args, **kwargs)
File /opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in 
process_router
  self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in 
external_gateway_removed
  ri.router['gw_port_host'] == self.host):
  KeyError: 'gw_port_host'

  For the issue to be seen, the router in question needs to have had its
  gateway set (router-gateway-set) previously.
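
  A minimal sketch of the defensive lookup (illustration only; the agent-side
  fix may be more involved):

      def gateway_hosted_here(ri, host):
          # Routers serialized without a gateway host simply don't match,
          # instead of raising KeyError: 'gw_port_host'.
          return ri.router.get('gw_port_host') == host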

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372883] Re: DHCP agent should specify prefix-len for IPv6 dhcp-range's

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372883

Title:
  DHCP agent should specify prefix-len for IPv6 dhcp-range's

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  If a network contains a subnet smaller than /64, the prefix-len should be
  specified in dnsmasq's --dhcp-range option.
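
  A hedged illustration of the option in question (tag name, prefix and lease
  time are made up, and the exact field order should be checked against the
  dnsmasq man page): without a prefix-len field dnsmasq assumes /64, so for a
  smaller subnet such as a /80 the length has to be passed explicitly, roughly
  of the form:

  --dhcp-range=set:subnet-tag,2001:db8:0:1:2::,static,80,86400s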

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389880] Re: VM loses connectivity on floating ip association when using DVR

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = ongoing

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: ongoing = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389880

Title:
  VM loses connectivity on floating ip association when using DVR

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  
  Presence: Juno 2014.2-1 RDO , ubuntu 12.04
  openvswitch version on ubuntu is 2.0.2


  Description:

  Whenever a FIP is created for a VM, the FIP is added on ALL other compute
  nodes as a routing prefix in the FIP namespace and an IP interface alias on
  the qrouter.
  However, iptables is updated correctly, with only the DNAT rule for the
  particular IP of the VM on that compute node.
  This causes the FIP proxy ARP to answer ARP requests for ALL VMs on ALL
  compute nodes, so compute nodes answer ARPs for VMs they do not host,
  effectively blackholing traffic to that IP.

  
   
  Here is a demonstration of the problem:

  
  Before  adding a vm+fip on compute4

  [root@compute2 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
  default via 173.209.44.1 dev fg-6ede0596-3a
  169.254.31.28/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.29
  173.209.44.0/24 dev fg-6ede0596-3a  proto kernel  scope link  src 
173.209.44.6
  173.209.44.4 via 169.254.31.28 dev fpr-3a90aae6-3


  [root@compute3 neutron]# ip netns exec 
fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
  default via 173.209.44.1 dev fg-26bef858-6b
  169.254.31.238/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.239
  173.209.44.0/24 dev fg-26bef858-6b  proto kernel  scope link  src 
173.209.44.5
  173.209.44.3 via 169.254.31.238 dev fpr-3a90aae6-3


  [root@compute4 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
  default via 173.209.44.1 dev fg-2919b6be-f4
  173.209.44.0/24 dev fg-2919b6be-f4  proto kernel  scope link  src 
173.209.44.8


  After creating a new VM on compute4 and attaching a floating IP to it, we
  get this result.
  Of course, at this point only the VM on compute4 is able to ping the public
  network.


  [root@compute2 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
  default via 173.209.44.1 dev fg-6ede0596-3a
  169.254.31.28/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.29
  173.209.44.0/24 dev fg-6ede0596-3a  proto kernel  scope link  src 
173.209.44.6
  173.209.44.4 via 169.254.31.28 dev fpr-3a90aae6-3
  173.209.44.7 via 169.254.31.28 dev fpr-3a90aae6-3


  [root@compute3 neutron]# ip netns exec 
fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
  default via 173.209.44.1 dev fg-26bef858-6b
  169.254.31.238/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.239
  173.209.44.0/24 dev fg-26bef858-6b  proto kernel  scope link  src 
173.209.44.5
  173.209.44.3 via 169.254.31.238 dev fpr-3a90aae6-3
  173.209.44.7 via 169.254.31.238 dev fpr-3a90aae6-3


  [root@compute4 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
  default via 173.209.44.1 dev fg-2919b6be-f4
  169.254.30.20/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.30.21
  173.209.44.0/24 dev fg-2919b6be-f4  proto kernel  scope link  src 
173.209.44.8
  173.209.44.3 via 169.254.30.20 dev fpr-3a90aae6-3
  173.209.44.4 via 169.254.30.20 dev fpr-3a90aae6-3
  173.209.44.7 via 169.254.30.20 dev fpr-3a90aae6-3


   **When we deleted the extra FIP entries from each compute node's namespace,
  everything started to work just fine.**


   
  Following are the router, floating IP information and config files : 

  
  +----------------+-------+
  | Field          | Value |
  +----------------+-------+
  | admin_state_up | True  |
  | distributed    | True  |

[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Cinder juno series:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance icehouse series:
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Data Processing (Sahara):
  New

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  Following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10
  in order to reproduce this problem in nova/wsgi.py.
  After you run the program below, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # during this sleep time, check if the client socket connection is
          # released or not on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
  keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
  is sent and read successfully by the client, you simply have to set keepalive
  to False when you create the wsgi server.
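
  A minimal standalone sketch of that mitigation (a plain eventlet wsgi
  server, not the actual nova/neutron wsgi wrapper):

  import eventlet
  import eventlet.wsgi

  def app(environ, start_response):
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return ['hello\n']

  sock = eventlet.listen(('127.0.0.1', 8080))
  # keepalive=False makes eventlet close the client socket once the response
  # has been sent, so the green thread goes back to the pool immediately.
  eventlet.wsgi.server(sock, app, keepalive=False)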

  Additional information: By default eventlet passes “Connection: keepalive” if 
keepalive is set to True when a response is sent to the client. But it doesn’t 
have capability to set the timeout and max parameter.
  For example.
  Keep-Alive: timeout=10, max=5

  Note: After we have disabled keepalive in all the OpenStack API
  service using wsgi library, then it might impact all existing
  applications built with the assumptions that OpenStack API services
  uses persistent connections. They might need to modify their
  applications if reconnection logic is not in place and also they might
  experience the performance has slowed down as it will need to
  reestablish the http connection for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net

[Yahoo-eng-team] [Bug 1358709] Re: SLAAC IPv6 addressing doesn't work with more than one subnet

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358709

Title:
  SLAAC IPv6 addressing doesn't work with more than one subnet

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  When a network has more than one IPv6 SLAAC (or dhcpv6-stateless) subnet,
  the port receives a SLAAC address only from the first one; the second
  address comes from the fixed-IP range.
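
  A hedged sketch of how the missing SLAAC address would be derived from the
  port's MAC (RFC 4291 modified EUI-64); the helper below uses netaddr for
  illustration and is not Neutron's actual code path:

  import netaddr

  def eui64_address(prefix, mac):
      # insert ff:fe into the MAC, flip the universal/local bit, and splice
      # the result into the low 64 bits of the prefix
      iface_id = netaddr.EUI(mac).eui64().value ^ (1 << 57)
      return netaddr.IPAddress(int(netaddr.IPAddress(prefix)) + iface_id,
                               version=6)

  # expected second address for the port created in the scenario below:
  print(eui64_address('2004::', 'fa:16:3e:55:62:97'))  # 2004::f816:3eff:fe55:6297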

  Scenario:
  1) create a network and two SLAAC subnets:
  ~$ neutron net-create net12
  ~$ neutron subnet-create net12 --ipv6-ra-mode=slaac --ipv6-address-mode=slaac 
--ip-version=6 2003::/64
  | allocation_pools  | {start: 2003::2, end: 2003::ffff:ffff:ffff:fffe} |
  | cidr  | 2003::/64   
 |
  | dns_nameservers   | 
 |
  | enable_dhcp   | True
 |
  | gateway_ip| 2003::1 
 |
  | host_routes   | 
 |
  | id| 220b7e4e-b30a-4d5c-847d-58df72bf7e8d
 |
  | ip_version| 6   
 |
  | ipv6_address_mode | slaac   
 |
  | ipv6_ra_mode  | slaac   
 |
  | name  | 
 |
  | network_id| 4cfe1699-a10d-4706-bedb-5680cb5cf27f
 |
  | tenant_id | 834b2e7732cb4ad4b3df81fe0b0ea906
 |

  ~$ neutron subnet-create --name=additional net12 --ipv6-ra-mode=slaac 
--ipv6-address-mode=slaac --ip-version=6 2004::/64
  | allocation_pools  | {start: 2004::2, end: 2004::ffff:ffff:ffff:fffe} |
  | cidr  | 2004::/64   
 |
  | dns_nameservers   | 
 |
  | enable_dhcp   | True
 |
  | gateway_ip| 2004::1 
 |
  | host_routes   | 
 |
  | id| e48e5d96-565f-45b1-8efc-4634d3ed8bf8
 |
  | ip_version| 6   
 |
  | ipv6_address_mode | slaac   
 |
  | ipv6_ra_mode  | slaac   
 |
  | name  | additional  
 |
  | network_id| 4cfe1699-a10d-4706-bedb-5680cb5cf27f
 |
  | tenant_id | 834b2e7732cb4ad4b3df81fe0b0ea906
 |

  Now let's create port in this network:

  ~$ neutron port-create net12
  Created a new port:
  
+---+--+
  | Field | Value   
 |
  
+---+--+
  | admin_state_up| True
 |
  | allowed_address_pairs | 
 |
  | binding:vnic_type | normal  
 |
  | device_id | 
 |
  | device_owner  | 
 |
  | fixed_ips | {subnet_id: 
220b7e4e-b30a-4d5c-847d-58df72bf7e8d, ip_address: 
2003::f816:3eff:fe55:6297} |
  |   | {subnet_id: 
e48e5d96-565f-45b1-8efc-4634d3ed8bf8, ip_address: 2004::2}
   |
  | id| 12c29fd4-1c68-4aea-88c6-b89d73ebac2c
 |
  | mac_address   | fa:16:3e:55:62:97   
 |
  | name  | 
 |
  | network_id   

[Yahoo-eng-team] [Bug 1398312] Re: iptables for secgroup not be set properly when set --no-security-group

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398312

Title:
  iptables for secgroup not be set properly when set --no-security-group

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  In the latest code, the iptables rules for a security group are not set
  properly when --no-security-group is used.

  steps:

  1. edit the 'default' secgroup, and add one rule for icmp.

  #neutron security-group-rule-create --direction ingress --protocol icmp --port_range_min 0 --port_range_max 255 4db9f9f6-641a-4482-af04-c64628d42b6a

  there will be one rule added to the port's ingress iptables chain.

  Chain neutron-openvswi-i5edf1431-d (1 references)
   pkts bytes target     prot opt in     out     source               destination
  ...
      0     0 RETURN     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
  ...

  2.  remove the sec group of the port.

  #neutron port-update 5edf1431-dd9e-4a1c-995b-c6155152483f --no-security-group

  I expect the rule created in step 1 to be deleted, but it is not.

  3.  after restarting the ovs-agent, all the chains and rules for the port
  5edf1431-dd9e-4a1c-995b-c6155152483f are removed, for example the
  rules in neutron-openvswi-sg-chain, including the anti-spoof
  chain,

  I think it is because security_group_info_for_devices will return
  nothing if the sec-group is empty, instead of returning a dict with
  empty [sec-group-rules].

  I am not sure if it's a bug, experts could help here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398312/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398566] Re: REST API relies on policies being initialized after RESOURCE_ATTRIBUTE_MAP is processed, does nothing to ensure it.

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398566

Title:
  REST API relies on policies being initialized after
  RESOURCE_ATTRIBUTE_MAP is processed, does nothing to ensure it.

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  A race condition exists where policies may be loaded and processed
  before the neutron extensions are loaded and the
  RESOURCE_ATTRIBUTE_MAP is populated. This causes problems in system
  behaviour dependent on neutron-specific policy checks. Policies are
  loaded on demand, and if the call instigating the loading of
  policies happens prematurely, certain neutron-specific policy checks
  may not be set up properly because the required mappings from
  policy to check implementations have not been established.

  Related bugs:

  https://bugs.launchpad.net/neutron/+bug/1254555
  https://bugs.launchpad.net/neutron/+bug/1251982
  https://bugs.launchpad.net/neutron/+bug/1280738

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397022] Re: Instances won't obtain additional configuration options from DHCP when using stateless DHCPv6 subnets

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397022

Title:
  Instances won't obtain additional configuration options from DHCP when
  using stateless DHCPv6 subnets

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  When additional DHCP configuration is available from the DHCP server,
  radvd should set the Other flag in RAs. The L3 agent does not do this, so
  clients are left unaware of the additional configuration and don't
  spawn DHCP clients to receive it. This results in, for example, DNS
  nameservers set for a subnet not being propagated into the instance's
  /etc/resolv.conf, among other things.
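
  A hedged example of the radvd.conf fragment the agent would need to emit
  (interface name and prefix are made up); AdvOtherConfigFlag is what sets
  the Other/O flag in the RAs:

  interface qr-12345678-9a
  {
      AdvSendAdvert on;
      AdvOtherConfigFlag on;
      prefix 2001:db8:0:1::/64
      {
          AdvOnLink on;
          AdvAutonomous on;
      };
  };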

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408297] Re: DHCP agent fails to match IPv6 clients when used with dnsmasq < 2.67

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408297

Title:
  DHCP agent fails to match IPv6 clients when used with dnsmasq < 2.67

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  DHCP agent fails to match IPv6 clients when used with dnsmasq < 2.67.

  This is because MAC address matching support for IPv6 was added in
  2.67 only.

  We should bump minimal dnsmasq version in DHCP agent.
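
  A hedged sketch of the kind of startup check this implies (the command and
  parsing here are illustrative, not the agent's actual implementation):

  import re
  import subprocess

  MINIMUM_DNSMASQ_VERSION = (2, 67)

  def dnsmasq_version():
      out = subprocess.check_output(['dnsmasq', '--version']).decode('utf-8')
      match = re.search(r'version (\d+)\.(\d+)', out)
      return (int(match.group(1)), int(match.group(2)))

  if dnsmasq_version() < MINIMUM_DNSMASQ_VERSION:
      raise SystemExit('dnsmasq >= 2.67 is required for IPv6 MAC matching')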

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1408297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401751] Re: updating ipv6 allocation pool start ip address made neutron-server hang

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401751

Title:
  updating ipv6 allocation pool start ip address made neutron-server
  hang

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  neutron subnet-update --allocation-pool
  start=2001:470:1f0e:cb4::20,end=2001:470:1f0e:cb4:ffff:ffff:ffff:fffe
  ipv6

  
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.024 21692 DEBUG neutron.api.v2.base 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Request body: {u'subnet': 
{u'allocation_pools': [{u'start': u'2001:470:1f0e:cb4::20', u'end': 
u'2001:470:1f0e:cb4::::fffe'}]}} prepare_request_body 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/api/v2/base.py:585
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.055 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Performing IP validity checks 
on allocation pools _validate_allocation_pools 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:639
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.058 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Checking for overlaps among 
allocation pools and gateway ip _validate_allocation_pools 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:675
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.061 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Rebuilding availability ranges 
for subnet {'ip_version': 6L, u'allocation_pools': [{u'start': 
u'2001:470:1f0e:cb4::20', u'end': u'2001:470:1f0e:cb4:ffff:ffff:ffff:fffe'}], 
'cidr': u'2001:470:1f0e:cb4::/64', 'id': 
u'5579d9bb-0d03-4d8e-ba61-9b2d8842983d'} _rebuild_availability_ranges 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:262

  
   wget 162.3.121.66:9696
  --2014-12-12 04:24:18--  http://162.3.121.66:9696/
  Connecting to 162.3.121.66:9696... connected.
  HTTP request sent, awaiting response... 



  After restarting the neutron-server service, neutron-server got back to
  normal and other neutron commands still worked, but updating the subnet's
  allocation pool would reproduce the bug.
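
  A back-of-the-envelope check of why the rebuild appears to hang (assuming
  the availability-range rebuild effectively walks the requested pool one
  address at a time, and using the reconstructed end address above):

  import netaddr

  start = netaddr.IPAddress('2001:470:1f0e:cb4::20')
  end = netaddr.IPAddress('2001:470:1f0e:cb4:ffff:ffff:ffff:fffe')
  print(int(end) - int(start) + 1)  # ~1.8e19 addresses, effectively unbounded work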

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381238] Re: Race condition on processing DVR floating IPs

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381238

Title:
  Race condition on processing DVR floating IPs

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  A race condition can sometimes occur in the L3 agent when a DVR-based
  floating IP is being deleted from one router while another DVR-based
  floating IP is being configured on another router on the same node,
  especially if the floating IP being deleted was the last one on the
  node.  Although the fix for Bug #1373100 [1] eliminated frequent
  observation of this behavior in upstream tests, it still shows up.
  A couple of recent examples:

  http://logs.openstack.org/88/128288/1/check/check-tempest-dsvm-
  neutron-dvr/8fdd1de/

  http://logs.openstack.org/03/123403/7/check/check-tempest-dsvm-
  neutron-dvr/859534a/

  Relevant log messages:

  2014-10-14 16:06:15.803 22303 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-82fb2751-30ba-4015-a5da-6c8563064db9', 'ip', 'link', 'del', 
'fpr-7ed86ca6-b'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:46
  2014-10-14 16:06:15.838 22303 DEBUG neutron.agent.linux.utils [-] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-7ed86ca6-b42d-4ba9-8899-447ff0509174', 'ip', 'addr', 'show', 
'rfp-7ed86ca6-b']
  Exit code: 0
  Stdout: '2: rfp-7ed86ca6-b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
pfifo_fast state UP group default qlen 1000\nlink/ether c6:88:ee:71:a7:51 
brd ff:ff:ff:ff:ff:ff\ninet 169.254.30.212/31 scope global rfp-7ed86ca6-b\n 
  valid_lft forever preferred_lft forever\ninet6 
fe80::c488:eeff:fe71:a751/64 scope link \n   valid_lft forever 
preferred_lft forever\n'
  Stderr: '' execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:81
  2014-10-14 16:06:15.839 22303 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-7ed86ca6-b42d-4ba9-8899-447ff0509174', 'ip', '-4', 'addr', 'add', 
'172.24.4.91/32', 'brd', '172.24.4.91', 'scope', 'global', 'dev', 
'rfp-7ed86ca6-b'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:46
  2014-10-14 16:06:16.221 22303 DEBUG neutron.agent.linux.utils [-] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-82fb2751-30ba-4015-a5da-6c8563064db9', 'ip', 'link', 'del', 
'fpr-7ed86ca6-b']
  Exit code: 0
  Stdout: ''
  Stderr: '' execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:81
  2014-10-14 16:06:16.222 22303 DEBUG neutron.agent.l3_agent [-] DVR: unplug: 
fg-f04e25ef-e3 _destroy_fip_namespace 
/opt/stack/new/neutron/neutron/agent/l3_agent.py:679
  2014-10-14 16:06:16.222 22303 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', '-o', 'link', 'show', 'br-ex'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:46
  2014-10-14 16:06:16.251 22303 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-7ed86ca6-b42d-4ba9-8899-447ff0509174', 'ip', '-4', 'addr', 'add', 
'172.24.4.91/32', 'brd', '172.24.4.91', 'scope', 'global', 'dev', 
'rfp-7ed86ca6-b']
  Exit code: 1
  Stdout: ''
  Stderr: 'Cannot find device rfp-7ed86ca6-b\n'  


  [1] https://bugs.launchpad.net/neutron/+bug/1373100

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393925] Re: Race condition adding a security group rule when another is in-progress

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393925

Title:
  Race condition adding a security group rule when another is in-
  progress

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  I've come across a race condition where I sometimes see a security
  group rule is never added to iptables, if the OVS agent is in the
  middle of applying another security group rule when the RPC arrives.

  Here's an example scenario:

  nova boot --flavor 1 --image $nova_image  dev_server1
  sleep 4
  neutron security-group-rule-create --direction ingress --protocol tcp 
--port_range_min 1111 --port_range_max 1111 default
  neutron security-group-rule-create --direction ingress --protocol tcp 
--port_range_min 1112 --port_range_max 1112 default

  Wait for VM to complete booting, then check iptables:

  $ sudo iptables-save | grep 111
  -A neutron-openvswi-i741ff910-1 -p tcp -m tcp --dport 1111 -j RETURN

  The second rule is missing, and will only get added if you either add
  another rule, or restart the agent.

  My config is just devstack, running with the latest openstack bits as
  of today.  OVS agent w/vxlan and DVR enabled, nothing fancy.

  I've been able to track this down to the following code (i'll attach
  the complete log as a file due to line wraps):

  OVS agent receives RPC to setup port
  Port info is gathered for devices and filters for security groups are 
created
  Iptables apply is called
  New security group rule is added, triggering RPC message
  RPC received, and agent seems to add device to list that needs refresh

  Security group rule updated on remote: 
[u'5f0f5036-d14c-4b57-a855-ed39deaea256'] security_groups_rule_updated
  Security group rule updated 
[u'5f0f5036-d14c-4b57-a855-ed39deaea256']
  Adding [u'741ff910-12ba-4c1e-9dc9-38f7cbde0dc4'] devices to the 
list of devices for which firewall needs to be refreshed _security_group_updated

  Iptables apply is finished

  rpc_loop() in OVS agent does not notice there is more work to do on
  next loop, so rule never gets added

  At this point I'm thinking it could be that self.devices_to_refilter
  is modified in both _security_group_updated() and setup_port_filters()
  without any lock/semaphore, but the log doesn't explicitly implicate it
  (perhaps we trust the timestamps too much?).
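
  For illustration only (not the merged fix), one way to make the two paths
  cooperate is to guard the shared set with a semaphore; eventlet is used
  here because the agent runs on green threads:

  from eventlet.semaphore import Semaphore

  refilter_lock = Semaphore()
  devices_to_refilter = set()

  def note_device(device_id):
      # called from the security-group RPC handler
      with refilter_lock:
          devices_to_refilter.add(device_id)

  def take_pending_devices():
      # called from the agent loop before refreshing filters
      with refilter_lock:
          pending = set(devices_to_refilter)
          devices_to_refilter.clear()
          return pending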

  I will continue to investigate, but if someone has an aha! moment
  after reading this far please add a note.

  A colleague here has also been able to duplicate this on his own
  devstack install, so it wasn't my fat-fingering that caused it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377156] Re: fg- device is not deleted after the deletion of the last VM on the compute node

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377156

Title:
  fg- device is not deleted after the deletion of the last VM on the
  compute node

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  The external gateway port in the fip- namespace on a compute node is
  not removed after the user deletes the last VM running on the node.

  How to reproduce the problem:

  1. SETUP:
   * Use devstack to start up the controller node.  In local.conf, 
Q_DVR_MODE=dvr_snat.
   * Use devstack to setup a compute node.  In local.conf, Q_DVR_MODE=dvr.

  At the start, there are no VMs hosted on the compute node.  The fip
  namespace hasn't been created yet.

  1. Create a network and subnet
  2. Create a router and add the subnet to the router
  3. Tie the router to the external network
  4. Boot up a VM using the network, and assign it a floatingip
  5. Ping the floating IP (make sure you open up your SG)
  6. Note the fg- device in the fip namespace on the compute node
  7. Now delete the VM

  Expected results:

  - The VM is deleted.
  - Neutron port-list shows the gateway port is also deleted.
  - The FIP namespace is also cleared

  Experienced results:

  - The fg- device still remains in the fip namespace on the compute
  node and the fip namespace isn't removed.

  For detailed command sequence, see:

  http://paste.openstack.org/show/118174/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377307] Re: Metadata host route added when DVR and isolated metadata enabled

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377307

Title:
  Metadata host route added when DVR and isolated metadata enabled

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  When DVR is enabled and enable_isolated_metadata=True in
  dhcp_agent.ini, the agent should only inject a metadata host route
  when there is no gateway on the subnet.  But it does it all the time:

  $ ip r
  default via 10.0.0.1 dev eth0
  10.0.0.0/24 dev eth0  src 10.0.0.5
  169.254.169.254 via 10.0.0.4 dev eth0

  The opts file for dnsmasq confirms it was the Neutron code that
  configured this.

  The code in neutron/agent/linux/dhcp.py:get_isolated_subnets() only
  looks at ports where the device_owner field is
  DEVICE_OWNER_ROUTER_INTF; it also needs to look for
  DEVICE_OWNER_DVR_INTERFACE.  Similar changes have been made in other
  code.
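
  A hedged sketch of the adjusted check (the owner strings shown are the
  values Neutron defines for these constants; the helper itself is a
  simplified stand-in, not the real get_isolated_subnets()):

  ROUTER_PORT_OWNERS = (
      'network:router_interface',              # DEVICE_OWNER_ROUTER_INTF
      'network:router_interface_distributed',  # DEVICE_OWNER_DVR_INTERFACE
  )

  def subnet_has_router_port(ports, subnet_id):
      return any(port['device_owner'] in ROUTER_PORT_OWNERS and
                 any(ip['subnet_id'] == subnet_id for ip in port['fixed_ips'])
                 for port in ports)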

  Making that simple change fixes the problem:

  $ ip r
  default via 10.0.0.1 dev eth0
  10.0.0.0/24 dev eth0  src 10.0.0.5

  I have a patch I'll get out for this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374461] Re: potential lock wait timeout issue when creating a HA router

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New = Fix Committed

** Changed in: neutron/juno
Milestone: None = 2014.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374461

Title:
  potential lock wait timeout issue when creating a HA router

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  Currently the failures during the creation of resources related to the
  creation of an HA router are handled by a try/except to avoid a
  potential lock wait timeout. This has been done in order to keep the
  RPC calls outside the transactions.

  All the related resources are created in _create_router_db, but
  this method is called inside a transaction which is started in the
  create_router method. Moreover, the try/except mechanism used to
  roll back the router creation will not work since we are in an already
  opened transaction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405379] Re: StaleDataError: UPDATE statement on table 'ports' expected to update 1 row(s); 0 were matched.

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405379

Title:
  StaleDataError: UPDATE statement on table 'ports' expected to update 1
  row(s); 0 were matched.

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  For some reason the delete_port and update_device_down commands are
  sometimes executed at the same time for the same port. This raises
  StaleDataError in update_device_down. In other situations these
  commands are executed one after another and the error doesn't appear.

  neutron-server log with trace: http://paste.openstack.org/show/154358/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407410] Re: It takes too much time for L3 agent to make router active ( about 2 minutes ).

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1407410

Title:
  It takes too much time for L3 agent to make router active ( about 2
  minutes ).

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  It takes too much time for the L3 agent to make a router active (about 2
  minutes).
  This is especially bad for HA setups. 

  openstack-neutron-2014.2.1-5.el7ost.noarch
   
  Reproduced on non HA :
   
  1.openstack-service stop neutron-openvswitch-agent | openstack-service stop 
neutron-l3-agent |  openstack-service stop neutron-dhcp-agent
   
  2.neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/dhcp_agent.ini  
--config-file=/etc/neutron/l3_agent.ini  --force  /var/log/neutron/l3-agent.log
   
  3.openstack-service start neutron-openvswitch-agent | openstack-service start 
neutron-l3-agent |  openstack-service start neutron-dhcp-agent
   
  4.wait until namespace is created.
  

 
  [root@networker ~]# cat  /var/log/neutron/l3-agent.log
  2015-01-01 07:53:49.091 4412 INFO neutron.common.config [-] Logging enabled!  
  2015-01-01 07:53:49.095 4412 DEBUG neutron.common.utils [-] Reloading cached 
file /etc/neutron/policy.json read_cached_file 
/usr/lib/python2.7/site-packages/neutron/common/utils.py:118
  2015-01-01 07:53:49.095 4412 DEBUG neutron.policy [-] Loading policies 
from file: /etc/neutron/policy.json _set_rules 
/usr/lib/python2.7/site-packages/neutron/policy.py:91
  2015-01-01 07:53:49.102 4412 DEBUG neutron.common.rpc 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] 
neutron.agent.l3_agent.L3PluginApi method call called with arguments 
(neutron.context.ContextBase object at 0x2d9b4d0, {'args': {}, 'namespace': 
None, 'method': 'get_service_plugin_list'}) {'topic': 'q-l3-plugin', 'version': 
'1.3'} wrapper /usr/lib/python2.7/site-packages/neutron/common/log.py:33
  2015-01-01 07:53:49.104 4412 INFO oslo.messaging._drivers.impl_rabbit 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 ] Connecting to AMQP server on 
10.35.187.88:5672
  2015-01-01 07:53:49.131 4412 INFO oslo.messaging._drivers.impl_rabbit 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 ] Connected to AMQP server on 
10.35.187.88:5672
  2015-01-01 07:53:49.139 4412 INFO oslo.messaging._drivers.impl_rabbit 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 ] Connecting to AMQP server on 
10.35.187.88:5672
  2015-01-01 07:53:49.161 4412 INFO oslo.messaging._drivers.impl_rabbit 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 ] Connected to AMQP server on 
10.35.187.88:5672
  2015-01-01 07:53:49.198 4412 DEBUG 
neutron.services.firewall.agents.l3reference.firewall_l3_agent 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] Initializing firewall agent 
__init__ 
/usr/lib/python2.7/site-packages/neutron/services/firewall/agents/l3reference/firewall_l3_agent.py:58
  2015-01-01 07:53:49.200 4412 DEBUG 
neutron.services.firewall.drivers.linux.iptables_fwaas 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] Initializing fwaas iptables 
driver __init__ 
/usr/lib/python2.7/site-packages/neutron/services/firewall/drivers/linux/iptables_fwaas.py:49
  2015-01-01 07:53:49.200 4412 DEBUG 
neutron.services.firewall.agents.l3reference.firewall_l3_agent 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] FWaaS Driver Loaded: 
'neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver' 
__init__ 
/usr/lib/python2.7/site-packages/neutron/services/firewall/agents/l3reference/firewall_l3_agent.py:80
  2015-01-01 07:53:49.206 4412 DEBUG neutron.openstack.common.service 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] Full set of CONF: 
_wait_for_exit_or_signal 
/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py:167
  2015-01-01 07:53:49.206 4412 DEBUG neutron.openstack.common.service 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] 

 log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1983
  2015-01-01 07:53:49.207 4412 DEBUG neutron.openstack.common.service 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] Configuration options gathered 
from: log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1984
  2015-01-01 07:53:49.207 4412 DEBUG neutron.openstack.common.service 
[req-f925a4bf-9862-4e46-81eb-6695016b0c21 None] command line args: 
['--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', 
'/etc/neutron/neutron.conf', '--config-file', 

[Yahoo-eng-team] [Bug 1413156] Re: Duplicated messages in agent's fanout topic

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413156

Title:
  Duplicated messages in agent's fanout topic

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  Steps to reproduce on devstack:
  1. Create the router:
   neutron router-create test
  2. Set gateway for the router:
   neutron router-gateway-set test public
  3. Delete the router:
   neutron router-delete test

  The notification about router deletion arrives into L3 agent twice:
  2015-01-21 10:53:48.401 DEBUG neutron.agent.l3.agent 
[req-ce68777f-da33-4381-a7ff-9b8c0eaac380 demo 
bdb38765a9394f2c962b952675f65073] Got router deleted notification for 
10bf431d-6c76-4b87-8db1-57e7fb69076
  8 from (pid=8310) router_deleted 
/opt/stack/neutron/neutron/agent/l3/agent.py:969

  2015-01-21 10:53:48.402 DEBUG neutron.agent.l3.agent 
[req-ce68777f-da33-4381-a7ff-9b8c0eaac380 demo 
bdb38765a9394f2c962b952675f65073] Got router deleted notification for 
10bf431d-6c76-4b87-8db1-57e7fb69076
  8 from (pid=8310) router_deleted 
/opt/stack/neutron/neutron/agent/l3/agent.py:969

  Notifications are processed sequentially, the first successfully removes the 
router, while the second results in warning:
  2015-01-21 10:53:50.957 WARNING neutron.agent.l3.agent 
[req-ce68777f-da33-4381-a7ff-9b8c0eaac380 demo 
bdb38765a9394f2c962b952675f65073] Info for router 
10bf431d-6c76-4b87-8db1-57e7fb690768 were not found. Skipping router removal

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406436] Re: ipv6 subnet address mode should not only check ipv6_address_mode but also ipv6_ra_mode

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1406436

Title:
  ipv6 subnet address mode should not only check ipv6_address_mode but
  also ipv6_ra_mode

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  Create an IPv6 subnet with ipv6-ra-mode set to slaac or dhcpv6-stateless
  and ipv6-address-mode left unset.  Neutron will still allocate a stateful
  address to the instance, which is not the address the instance actually
  gets.
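
  A hedged sketch of the combined check the title suggests (the helper name
  is illustrative): a subnet should count as auto-addressed if either mode
  is SLAAC or stateless.

  AUTO_ADDRESS_MODES = ('slaac', 'dhcpv6-stateless')

  def is_auto_address_subnet(subnet):
      return (subnet.get('ipv6_address_mode') in AUTO_ADDRESS_MODES or
              subnet.get('ipv6_ra_mode') in AUTO_ADDRESS_MODES)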

  [root@node1 ~]# neutron subnet-show 5a2c86de-35f7-4d50-b490-9cf5f4edbe99
  +-------------------+------------------------------------------------------------+
  | Field             | Value                                                      |
  +-------------------+------------------------------------------------------------+
  | allocation_pools  | {start: 2001:200::10, end: 2001:200::ffff:ffff:ffff:fffe}  |
  |                   | {start: 2001:200::1, end: 2001:200::e}                     |
  | cidr              | 2001:200::/64                                              |
  | dns_nameservers   |                                                            |
  | enable_dhcp       | True                                                       |
  | gateway_ip        | 2001:200::f                                                |
  | host_routes       |                                                            |
  | id                | 5a2c86de-35f7-4d50-b490-9cf5f4edbe99                       |
  | ip_version        | 6                                                          |
  | ipv6_address_mode |                                                            |
  | ipv6_ra_mode      | dhcpv6-stateless                                           |
  | name              |                                                            |
  | network_id        | 228afb74-bed4-4e66-9b5f-4bc56a37ee43                       |
  | tenant_id         | b7843e73eea547629f26afb764fc3bef                           |
  +-------------------+------------------------------------------------------------+

  [root@node1 ~]# nova list
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks              |
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+
  | 1ef77bb5-fe7e-4f17-8fac-99e79d5abdd2 | ubuntu_less | ACTIVE | -          | Running     | ipv6_gre=2001:200::12 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1406436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395196] Re: AttributeError on OVS agent startup with DVR on

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1395196

Title:
  AttributeError on OVS agent startup with DVR on

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  aarrgh...

  http://logs.openstack.org/48/136248/7/check/check-tempest-dsvm-
  neutron-
  dvr/3dc6138/logs/screen-q-agt.txt.gz?level=TRACE#_2014-11-21_17_55_44_254

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1395196/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402407] Re: IPv6 Router Advertisements are blocked in secgroups when using radvd based networks

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1402407

Title:
  IPv6 Router Advertisements are blocked in secgroups when using radvd
  based networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  Description of problem:
  ===
  Discovered in: https://bugzilla.redhat.com/show_bug.cgi?id=1173987

  I created a radvd IPv6 subnet with:
  1. ipv6_ra_mode: dhcpv6-stateless
  2. ipv6_address_mode: dhcpv6-stateless

  Version-Release number of selected component (if applicable):
  =
  openstack-neutron-2014.2.1-2

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. Create an IPv4 neutron network (might not be mandatory but this is how I 
did it):
     # neutron net-create internal_ipv4_a --shared

  2. Create an IPv4 subnet:
     # neutron subnet-create IPv4_net_id 192.168.1.0/24 --name 
internal_ipv4_a_subnet --ip-version 4

  3. Create an IPv6 neutron network:
     # neutron net-create tenant_a_radvd_stateless --shared 
--provider:network_type=gre --provider:segmentation_id=123

  4. Create an IPv6 subnet:
     # neutron subnet-create IPv6_net_id 2001:1234:1234::/64 --name 
internal_ipv6_subnet --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode 
dhcpv6-stateless --dns-nameserver 2001:4860:4860::8888 --ip-version 6

  5. Create a neutron router:
     # neutron router-create router1

  6. Attach subnets to the router
     # neutron router-interface-add router_id ipv4_subnet
     # neutron router-interface-add router_id ipv6_subnet

  7. boot an instance with that network
     # nova boot tenant_a_instance_radvd_stateless --flavor m1.small --image 
image_id --key-name keypair --security-groups default --nic 
net-id=ipv4_net_id --nic net-id=ipv6_net_id

  Actual results:
  ===
  1. RAs reach the instance's qbr bridge but not the instance's tap device.
  2. Instance did not obtain IPv6 address.

  Expected results:
  =
  IPv6 Router Advertisements should reach the instance.
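
  For illustration, the kind of ip6tables rule that needs to be present in the
  per-port ingress chain for RAs to get through (the chain name here is made
  up; the real chain name is derived from the port id):

  ip6tables -A neutron-openvswi-iXXXXXXXX -p ipv6-icmp \
      --icmpv6-type router-advertisement -s fe80::/64 -j RETURN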

  Additional info:
  
  1. Compute node and L3 agent deployed on different servers.
  2. Communication between the nodes (RAs) done via GRE tunnels.
  3. This worked before openstack-neutron-2014.2-11
  4. Tested with RHEL7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1402407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394020] Re: Fix enable_metadata_network flag

2015-01-29 Thread Chuck Short
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
Milestone: None = 2014.2.2

** Changed in: neutron/juno
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394020

Title:
  Fix enable_metadata_network flag

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  The following patch: 9569b2fe broke the desired functionality of
  the enable_metadata_network flag. The author of this patch was not
  aware that the enable_metadata_network flag was used to spin up
  ns-metadata-proxies for plugins that do not use the l3-agent (where
  this agent will spin up the metadata proxy).
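
  A hedged example of the dhcp_agent.ini fragment this flag was meant to
  serve (values illustrative; enable_metadata_network is normally paired with
  enable_isolated_metadata):

  [DEFAULT]
  enable_isolated_metadata = True
  enable_metadata_network = True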

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290234] Re: do not use __builtin__ in Python3

2015-01-29 Thread Sean Dague
Until there is an eventlet story for python3, this kind of bug is
pointless. Removing from Nova.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290234

Title:
  do not use __builtin__ in Python3

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Openstack Database (Trove):
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  __builtin__ does not exist in Python 3, use six.moves.builtins
  instead.
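
  A minimal illustration: the same import resolves to __builtin__ on
  Python 2 and to builtins on Python 3.

  from six.moves import builtins
  print(builtins.len([1, 2, 3]))  # 3 on both interpreters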

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268439] Re: range method is not same in py3.x and py2.x

2015-01-29 Thread Sean Dague
Agreed for the Nova case. No additional python3 fixes are welcomed until
there is a core eventlet and other library story.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268439

Title:
  range method is not same in py3.x and py2.x

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in Python client library for Ceilometer:
  Fix Committed
Status in Python client library for Neutron:
  Invalid
Status in Python client library for Swift:
  Fix Committed
Status in OpenStack Object Storage (Swift):
  In Progress

Bug description:
  In Python 3, range is what xrange was in Python 2.
  In Python 3, if you want a list you must use:
  list(range(value))

  I reviewed the code and found that many places use range expecting a list;
  when run under Python 3 this will cause errors, so we must fix this issue.
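
  A minimal illustration of the difference:

  r = range(3)
  print(r)        # Python 2: [0, 1, 2]   Python 3: range(0, 3)
  print(list(r))  # [0, 1, 2] on both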

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1268439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416105] [NEW] System Information panel state and status values are not translatable

2015-01-29 Thread Doug Fish
Public bug reported:

On the Admin->System->System Information panel, on the Compute Services
and Block Storage Services tabs, Status and State are not translatable.
Note that the Network Agents tab has both of these translated properly
and can likely serve as a model.

To see the problem, the pseudo translation tool can be used:
./run_tests.sh --makemessages
./run_tests.sh --pseudo de
./run_tests.sh --compilemessages
./run_tests.sh --runserver [ip:port]

Login + Change the language to German

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416105

Title:
  System Information panel state and status values are not translatable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the Admin->System->System Information panel, on the Compute Services
  and Block Storage Services tabs, Status and State are not translatable.
  Note that the Network Agents tab has both of these translated properly
  and can likely serve as a model.

  To see the problem, the pseudo translation tool can be used:
  ./run_tests.sh --makemessages
  ./run_tests.sh --pseudo de
  ./run_tests.sh --compilemessages
  ./run_tests.sh --runserver [ip:port]

  Login + Change the language to German

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357055] Re: Race to delete shared subnet in Tempest neutron full jobs

2015-01-29 Thread Joe Gordon
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357055

Title:
  Race to delete shared subnet in Tempest neutron full jobs

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Confirmed

Bug description:
  This seems to show up in several different tests, basically anything
  using neutron.  I noticed it here:

  http://logs.openstack.org/89/112889/1/gate/check-tempest-dsvm-neutron-
  full/21fcf50/console.html#_2014-08-14_17_03_10_330

  That's on a stable/icehouse change, but logstash shows this on master
  mostly.

  I see this in the neutron server logs:

  http://logs.openstack.org/89/112889/1/gate/check-tempest-dsvm-neutron-
  full/21fcf50/logs/screen-q-svc.txt.gz#_2014-08-14_16_45_02_101

  This query shows 82 hits in 10 days:

  message:"delete failed \(client error\)\: Unable to complete operation
  on subnet" AND message:"One or more ports have an IP allocation from
  this subnet" AND tags:"screen-q-svc.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZGVsZXRlIGZhaWxlZCBcXChjbGllbnQgZXJyb3JcXClcXDogVW5hYmxlIHRvIGNvbXBsZXRlIG9wZXJhdGlvbiBvbiBzdWJuZXRcIiBBTkQgbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIHRhZ3M6XCJzY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wNy0zMVQxOTo0Mzo0NSswMDowMCIsInRvIjoiMjAxNC0wOC0xNFQxOTo0Mzo0NSswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4MDQ1NDY1OTU2fQ==

  Logstash doesn't show this in the gate queue but it does show up in
  the uncategorized bugs list which is in the gate queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416107] [NEW] on defaults panel several quota names are not translatable

2015-01-29 Thread Doug Fish
Public bug reported:

On Admin->System->Defaults most values are translatable, but several
values are not: Server Group Members, Server Groups, Backup Gigabytes,
and Backups.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: i18n

** Tags added: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416107

Title:
  on defaults panel several quota names are not translatable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Admin->System->Defaults most values are translatable, but several
  values are not: Server Group Members, Server Groups, Backup Gigabytes,
  and Backups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416132] [NEW] _get_instance_disk_info fails to read files from NFS due permissions

2015-01-29 Thread Eric Harney
Public bug reported:

LibvirtDriver's _get_instance_disk_info calls
libvirt_utils.get_disk_backing_file() if processing a qcow2 backing
file.  If this is a file belonging to an attached and NFS-hosted Cinder
volume, it may be owned by qemu:qemu and  therefore not readable as the
nova user.

My proposed solution is to run the images.qemu_img_info() call as root
in this case.

Note that this requires a change to grenade to upgrade the rootwrap
configuration for gating to pass.

** Affects: nova
 Importance: Undecided
 Assignee: Eric Harney (eharney)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416132

Title:
  _get_instance_disk_info fails to read files from NFS due permissions

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  LibvirtDriver's _get_instance_disk_info calls
  libvirt_utils.get_disk_backing_file() if processing a qcow2 backing
  file.  If this is a file belonging to an attached and NFS-hosted
  Cinder volume, it may be owned by qemu:qemu and  therefore not
  readable as the nova user.

  My proposed solution is to run the images.qemu_img_info() call as root
  in this case.

  Note that this requires a change to grenade to upgrade the rootwrap
  configuration for gating to pass.
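
  A hedged sketch (not the actual nova patch; helper and parameter names are
  assumptions) of reading the image info as root when the file is not
  readable by the service user:

  from oslo_concurrency import processutils

  def qemu_img_info_as_root(path, root_helper='sudo'):
      # Run 'qemu-img info' through the configured root helper because an
      # NFS-hosted volume file may be owned by qemu:qemu and unreadable as
      # the 'nova' user.  'sudo' is only a placeholder default here.
      out, _err = processutils.execute('qemu-img', 'info', path,
                                       run_as_root=True,
                                       root_helper=root_helper)
      return out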

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416135] [NEW] Unit tests would fail if the ipaddress (v4, v6) assignment order is changed

2015-01-29 Thread Dipa
Public bug reported:


While searching for the assigned address, the unit test cases require the IPv4
address to be assigned first. The unit test cases should allow the IPv4 and
IPv6 addresses to be assigned in either order.

The Cisco N1Kv as well as the DB plugin unit tests fail if IPv6 is assigned
first.

** Affects: neutron
 Importance: Undecided
 Assignee: Dipa (dthakkar)
 Status: In Progress


** Tags: cisco db unittest

** Changed in: neutron
 Assignee: (unassigned) => Dipa (dthakkar)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416135

Title:
  Unit tests would fail if the ipaddress (v4,v6) assignment order is
  changed

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
  While searching for the assigned address, the unit test cases require the
  IPv4 address to be assigned first. The unit test cases should allow the IPv4
  and IPv6 addresses to be assigned in either order.

  The Cisco N1Kv as well as the DB plugin unit tests fail if IPv6 is assigned
  first.
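
  A minimal sketch (hypothetical helper, not the actual test code) of an
  order-independent assertion over a port's fixed IPs:

  def assert_fixed_ips(test, port, expected_ips):
      # Compare as sets so the test passes whether the IPv4 or the IPv6
      # address was allocated first.
      actual = {ip['ip_address'] for ip in port['fixed_ips']}
      test.assertEqual(set(expected_ips), actual)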

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416144] [NEW] Port status and state are not consistently translated

2015-01-29 Thread Doug Fish
Public bug reported:

On the Admin->System->Networks->[Detail] panel, in the Ports section, the
Status column is not translated (though the Admin State column is).
Clicking detail on that table, on the Port Details panel, neither the
Status nor the Admin State is translated.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416144

Title:
  Port status and state are not consistently translated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the Admin->System->Networks->[Detail] panel, in the Ports section,
  the Status column is not translated (though the Admin State column
  is).  Clicking detail on that table, on the Port Details panel, neither
  the Status nor the Admin State is translated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416110] [NEW] Project Enabled value is not translated

2015-01-29 Thread Doug Fish
Public bug reported:

On the Identity->Projects panel the values in the Enabled column are not
translated - they are shown as hardcoded True or False.

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: New


** Tags: i18n

** Tags added: i18n

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416110

Title:
  Project Enabled value is not translated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the Identity->Projects panel the values in the Enabled column are
  not translated - they are shown as hardcoded True or False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416103] [NEW] Remove redundant statement from l3agentscheduler

2015-01-29 Thread sajuptpm
Public bug reported:

Following statement is redundant in class
L3AgentsHostingRouterController of
neutron/extensions/l3agentscheduler.py.

plugin =
manager.NeutronManager.get_service_plugins().get(service_constants.L3_ROUTER_NAT)

we can use plugin = self.get_plugin() instead.

** Affects: neutron
 Importance: Undecided
 Assignee: sajuptpm (sajuptpm)
 Status: New

** Attachment added: Screen-shot of the code
   
https://bugs.launchpad.net/bugs/1416103/+attachment/4308259/+files/Screenshot%20from%202015-01-30%2002%3A08%3A23.png

** Changed in: neutron
 Assignee: (unassigned) => sajuptpm (sajuptpm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416103

Title:
  Remove redundant statement from l3agentscheduler

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Following statement is redundant in class
  L3AgentsHostingRouterController of
  neutron/extensions/l3agentscheduler.py.

  plugin =
  
manager.NeutronManager.get_service_plugins().get(service_constants.L3_ROUTER_NAT)

  we can use plugin = self.get_plugin() instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415807] Re: Instance Count does does not update Project Limits when launching new VM

2015-01-29 Thread Gary W. Smith
This works properly on master (Kilo). Since this is not a
critical/security defect, it is too late for Icehouse fixes.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415807

Title:
  Instance Count does does not update Project Limits when launching new
  VM

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When you are launching new VM and you change Flavour, then Project
  limits (the box on right size) updates on the fly. But if you update
  Instance Count (which should affect Project Limits) then Project
  Limits is not updated at all.

  Version:
  OpenStack Icehouse, RDO on RHEL7

  Steps to Reproduce:
  1. Open Dashboard -> Images -> Click Launch button on some instance
  2. Select flavour x1.large
  3. Notice that the green bars in Project Limits grow, e.g. vCPUs grow by 8.
  4. Increase the instance count to 4

  Actual result:
    The green bars in Project Limits do not change.

  Expected result:
    The green bars in Project Limits should grow, e.g. vCPUs should grow
    from 8 to 32.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416181] [NEW] 'router_gateway' port status is always DOWN

2015-01-29 Thread shihanzhang
Public bug reported:

If br-ex does not have 'bridge-id' set, the 'router_gateway' port status
will always be DOWN. The reason is that:

def setup_ancillary_bridges(self, integ_br, tun_br):
    '''Setup ancillary bridges - for example br-ex.'''
    ovs = ovs_lib.BaseOVS(self.root_helper)
    ovs_bridges = set(ovs.get_bridges())
    # Remove all known bridges
    ovs_bridges.remove(integ_br)
    if self.enable_tunneling:
        ovs_bridges.remove(tun_br)
    br_names = [self.phys_brs[physical_network].br_name for
                physical_network in self.phys_brs]
    ovs_bridges.difference_update(br_names)
    # Filter list of bridges to those that have external
    # bridge-id's configured
    br_names = []
    for bridge in ovs_bridges:
        bridge_id = ovs.get_bridge_external_bridge_id(bridge)
        if bridge_id != bridge:
            br_names.append(bridge)

If br-ex does not have 'bridge-id' set, the OVS agent will not add it to
ancillary_bridges, so I think that in this case just reporting a warning
message would be enough.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416181

Title:
  'router_gateway' port status is always DOWN

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If br-ex does not have 'bridge-id' set, the 'router_gateway' port status
  will always be DOWN. The reason is that:

  def setup_ancillary_bridges(self, integ_br, tun_br):
      '''Setup ancillary bridges - for example br-ex.'''
      ovs = ovs_lib.BaseOVS(self.root_helper)
      ovs_bridges = set(ovs.get_bridges())
      # Remove all known bridges
      ovs_bridges.remove(integ_br)
      if self.enable_tunneling:
          ovs_bridges.remove(tun_br)
      br_names = [self.phys_brs[physical_network].br_name for
                  physical_network in self.phys_brs]
      ovs_bridges.difference_update(br_names)
      # Filter list of bridges to those that have external
      # bridge-id's configured
      br_names = []
      for bridge in ovs_bridges:
          bridge_id = ovs.get_bridge_external_bridge_id(bridge)
          if bridge_id != bridge:
              br_names.append(bridge)

  If br-ex does not have 'bridge-id' set, the OVS agent will not add it to
  ancillary_bridges, so I think that in this case just reporting a warning
  message would be enough.
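
  For reference, a hedged example of the usual workaround on the host
  (assuming the external bridge is named br-ex):

  ovs-vsctl br-set-external-id br-ex bridge-id br-ex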

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415271] Re: user_enabled_attribute string support is poor

2015-01-29 Thread Steve Martinelli
** Also affects: keystone/juno
   Importance: Undecided
   Status: New

** Changed in: keystone/juno
   Importance: Undecided => Medium

** Changed in: keystone/juno
 Assignee: (unassigned) => Steve Martinelli (stevemar)

** Changed in: keystone
Milestone: None => kilo-2

** Changed in: keystone/juno
Milestone: None => 2014.2.2

** Changed in: keystone/juno
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1415271

Title:
  user_enabled_attribute string support is poor

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone juno series:
  In Progress

Bug description:
  When attempting to authenticate with our ldap, we were running into
  trouble getting the right value to show up for the user's enabled
  attribute.

  The result from ldap was:
  [('uid=123456789,c=us,ou=our_ldap,o=ibm.com', {'mail': ['sh...@acme.com'], 
'passwordisexpired': ['false'], 'uid': ['123456789']})]

  which is turned into:
  [(u'uid=123456789,c=us,ou=our_ldap,o=ibm.com', {'mail': [u'sh...@acme.com'], 
'passwordisexpired': [u'false'], 'uid': [123456789]})]

  the _ldap_res_to_model  function in ldap/core.py seems to be OK, but
  the same one at the identity backend for ldap seems to have a few
  bugs:

  the object before:
  {'email': u'sh...@acme.com', 'enabled': u'false', 'id': 123456789, 'name': 
u'sh...@acme.com'} 

  the object after:
  {'dn': u'uid=123456789,c=us,ou=our_ldap,o=ibm.com', 'email': 
u'sh...@acme.com', 'enabled': False, 'id': 123456789, 'name': 
u'sh...@acme.com'} 

  Note that the enabled field is still False, just a boolean now instead
  of a string.

  Looks like at:
  
https://github.com/openstack/keystone/blob/stable/juno/keystone/identity/backends/ldap.py#L223-L227

  The check using type() against str is insufficient, and calling lower
  without the parentheses is pointless.
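
  A minimal sketch (illustrative only, not the actual keystone code; py2 code
  would check against basestring or six.string_types) of the kind of
  conversion that would handle this correctly:

  raw = obj.get('enabled')
  if isinstance(raw, str):                     # check the value, not type(str)
      obj['enabled'] = raw.lower() == 'true'   # lower() must actually be called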

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1415271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392735] Re: Project Limits don't refresh while selecting Flavor

2015-01-29 Thread Kieran Spear
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Confirmed

** Changed in: horizon/juno
 Assignee: (unassigned) => Kieran Spear (kspear)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1392735

Title:
  Project Limits don't refresh while selecting Flavor

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  In Progress

Bug description:
  To recreate:
Project - Compute - Instances - Launch instance
Change the flavor using the up/down arrows
Observe how the project limits do not update until the user tabs out of the 
field

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1392735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402917] Re: Error: You are not allowed to delete volume

2015-01-29 Thread Gary W. Smith
This can currently be accomplished in horizon by updating the volume
status to 'error' and then deleting the volume. Cinder does not support
deleting a volume whose status is in-use (even if that status is wrong),
and neither should horizon.
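
For reference, a hedged example of that workaround from the CLI (admin
credentials assumed; VOLUME_ID is a placeholder):

    cinder reset-state --state error VOLUME_ID
    cinder delete VOLUME_ID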

** Tags added: volume

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1402917

Title:
  Error: You are not allowed to delete volume

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Version: stable/juno

  Proposal:
  It should be possible to delete a volume that is attached to an
  already-deleted VM.

  This scenario can occur when the associated VM is deleted but the
  auto-triggered volume-detaching process fails. If so, the volume is left
  attached to an already-deleted VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1402917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416248] [NEW] 403 page displays trans template tag

2015-01-29 Thread Alex Chan
Public bug reported:

When the 403 page is displayed, the trans template tag is not evaluated
and is instead displayed literally in the HTML. It should be evaluated as
part of the template and not shown to the user.

** Affects: horizon
 Importance: Undecided
 Assignee: Alex Chan (alexc2-3)
 Status: In Progress


** Tags: low-hanging-fruit

** Changed in: horizon
 Assignee: (unassigned) => Alex Chan (alexc2-3)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416248

Title:
  403 page displays trans template tag

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When the 403 page is displayed, the trans template tag is not evaluated
  and is instead displayed literally in the HTML. It should be evaluated
  as part of the template and not shown to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414967] Re: test_slaac_from_os fails on nova v2.1 API

2015-01-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/151117
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=fc5e0c75fa7c20c62974e2fd6e66087d67707928
Submitter: Jenkins
Branch: master

commit fc5e0c75fa7c20c62974e2fd6e66087d67707928
Author: ghanshyam ghanshyam.m...@nectechnologies.in
Date:   Thu Jan 29 15:09:40 2015 +0900

Fix create server request in test_network_v6

As Nova V2.1 is much stricter about API inputs, only the security
group name is accepted on a create server request as a security group
parameter.

test_network_v6 passes complete security group (name, id, rules etc)
in create server request and fails for Nova V2.1.

Closes-Bug: #1414967

Change-Id: I45adde0c7c4134ea087427f23f2e45d7ec9c88e7
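
A hedged sketch (parameter shape assumed from the description above, not the
actual tempest change; secgroup is an assumed variable) of passing only the
security group name when creating the server:

    # pass just the name; the full security group dict (id, rules, ...) is
    # rejected by the stricter nova v2.1 input validation
    create_kwargs = {'security_groups': [{'name': secgroup['name']}]}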


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414967

Title:
  test_slaac_from_os fails on nova v2.1 API

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  Now we are testing nova v2.1 API on the gate *without* neutron.
  If enabling both v2.1 API and neutron, test_slaac_from_os and 
test_dhcp6_stateless_from_os will fail.

  http://logs.openstack.org/03/139903/4/check/check-tempest-dsvm-
  neutron-full/3697e41/logs/testr_results.html.gz

  Traceback (most recent call last):
File tempest/test.py, line 112, in wrapper
  return f(self, *func_args, **func_kwargs)
File tempest/scenario/test_network_v6.py, line 151, in 
test_dhcp6_stateless_from_os
  self._prepare_and_test(address6_mode='dhcpv6-stateless')
File tempest/scenario/test_network_v6.py, line 118, in _prepare_and_test
  ssh1, srv1 = self.prepare_server()
File tempest/scenario/test_network_v6.py, line 107, in prepare_server
  srv = self.create_server(create_kwargs=create_kwargs)
File tempest/scenario/manager.py, line 190, in create_server
  **create_kwargs)
File tempest/services/compute/json/servers_client.py, line 87, in 
create_server
  resp, body = self.post('servers', post_body)
File 
/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py, 
line 168, in post
  return self.request('POST', url, extra_headers, headers, body)
File tempest/common/service_client.py, line 67, in request
  raise exceptions.BadRequest(ex)
  BadRequest: Bad request
  Details: Bad request
  Details: {u'code': 400, u'message': uInvalid input for field/attribute 0. 
Value: {u'tenant_id': u'9d3ba5d5126a4762a34ed937affcac32', 
u'security_group_rules': [{u'remote_group_id': None, u'direction': u'egress', 
u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, 
u'security_group_id': u'64bb364b-ce81-419a-9dfc-42ceff3d38df', u'tenant_id': 
u'9d3ba5d5126a4762a34ed937affcac32', u'port_range_min': None, 
u'remote_ip_prefix': None, u'id': u'49684079-30d5-40af-a3d7-50e8d36f7836'}, 
{u'remote_group_id': None, u'direction': u'egress', u'protocol': None, 
u'ethertype': u'IPv6', u'port_range_max': None, u'security_group_id': 
u'64bb364b-ce81-419a-9dfc-42ceff3d38df', u'tenant_id': 
u'9d3ba5d5126a4762a34ed937affcac32', u'port_range_min': None, 
u'remote_ip_prefix': None, u'id': u'04cd8976-0994-4411-b407-0d093f4376f7'}], 
u'id': u'64bb364b-ce81-419a-9dfc-42ceff3d38df', u'name': 
u'secgroup-smoke-1781250592', u'description': u'secgroup-smoke-1781250592 
description'}. Additional properties
  are not allowed (u'tenant_id', u'id', u'security_group_rules', u'description' 
were unexpected)}
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415497] Re: Manual creation of Volume Type Extra Spec raises error

2015-01-29 Thread Gary W. Smith
I believe the proper location is https://bugs.launchpad.net/mos .
Closing this bug.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415497

Title:
  Manual creation of Volume Type Extra Spec raises error

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  A. Manual creation of Volume Type Extra Spec raises error

  Steps:

  1) Login to Horizon as admin
  2) Navigate to Admin->System->Volumes
  3) Create Volume Type
  4) Click View Extra Specs button
  5) Click Create
  6) Fill Key and Value fields, Click Create
  Have error page like that:
  Not Found
  The requested URL 
/admin/volumes/volume_types/6b82f24a-8287-4474-9216-cd6976a85a67/extras/ was 
not found on this server.
  (see attached screenshot )

  7) Navigate back to Volume Type Extra Specs page, see that Extra Spec was 
created.
  Expected not to get any errors.

  Browser Console:

  Remote Address:172.16.0.2:80
  Request 
URL:http://172.16.0.2/admin/volumes/volume_types/6b82f24a-8287-4474-9216-cd6976a85a67/extras/
  Request Method:GET
  Status Code:404 Not Found

  <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
  <html><head>
  <title>404 Not Found</title>
  </head><body>
  <h1>Not Found</h1>
  <p>The requested URL 
/admin/volumes/volume_types/6b82f24a-8287-4474-9216-cd6976a85a67/extras/ was 
not found on this server.</p>
  </body></html>

  Environments:

  A) Remote deployment
  Fuel 6.0 Juno on Ubuntu 12.04.4, 2014.2-6.0
  5 nodes: 3 Controller+CephOSD; 2 Computes
  Tempest deployed with MOS-Scale

  B) Local on VirtualBox
  Fuel 6.1 build: fuel-6.1-81-2015-01-27_11-30-12.iso on VirtualBox
  Ubuntu 12.04.4, 2014.2-6.1:
  3 nodes: 1 Controller, 1 Compute, 1 Ceph-OSD
  Tempest deployed with MOS-Scale

  Issue appears to be related to this one:
  https://bugs.launchpad.net/mos/+bug/1415501

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1409719] Re: Create log tab under Volume Details: Volume

2015-01-29 Thread Gary W. Smith
Closing since this information is not available from cinder

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1409719

Title:
  Create log tab under Volume Details: Volume

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  It would be nice to have a logs tab within Volume Details: Volume when
  creating a volume in Horizon

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1409719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416278] [NEW] Ha router should not schedule to 'dvr_snat' agent

2015-01-29 Thread shihanzhang
Public bug reported:


An HA router should not be scheduled to a 'dvr_snat' agent, but
'get_l3_agent_candidates' allows an HA router to be scheduled to a
'dvr_snat' agent:

def get_l3_agent_candidates(self, context, sync_router, l3_agents):
    """Get the valid l3 agents for the router from a list of l3_agents."""
    candidates = []
    for l3_agent in l3_agents:
        if not l3_agent.admin_state_up:
            continue
        agent_conf = self.get_configuration_dict(l3_agent)
        router_id = agent_conf.get('router_id', None)
        use_namespaces = agent_conf.get('use_namespaces', True)
        handle_internal_only_routers = agent_conf.get(
            'handle_internal_only_routers', True)
        gateway_external_network_id = agent_conf.get(
            'gateway_external_network_id', None)
        agent_mode = agent_conf.get('agent_mode', 'legacy')
        if not use_namespaces and router_id != sync_router['id']:
            continue
        ex_net_id = (sync_router['external_gateway_info'] or {}).get(
            'network_id')
        if ((not ex_net_id and not handle_internal_only_routers) or
                (ex_net_id and gateway_external_network_id and
                 ex_net_id != gateway_external_network_id)):
            continue
        is_router_distributed = sync_router.get('distributed', False)
        if agent_mode in ('legacy', 'dvr_snat') and (
                not is_router_distributed):
            candidates.append(l3_agent)

so  'if agent_mode in ('legacy', 'dvr_snat') ' should be 'if agent_mode
== 'legacy''

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416278

Title:
  Ha router should not schedule to 'dvr_snat' agent

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
  An HA router should not be scheduled to a 'dvr_snat' agent, but
  'get_l3_agent_candidates' allows an HA router to be scheduled to a
  'dvr_snat' agent:

  def get_l3_agent_candidates(self, context, sync_router, l3_agents):
      """Get the valid l3 agents for the router from a list of l3_agents."""
      candidates = []
      for l3_agent in l3_agents:
          if not l3_agent.admin_state_up:
              continue
          agent_conf = self.get_configuration_dict(l3_agent)
          router_id = agent_conf.get('router_id', None)
          use_namespaces = agent_conf.get('use_namespaces', True)
          handle_internal_only_routers = agent_conf.get(
              'handle_internal_only_routers', True)
          gateway_external_network_id = agent_conf.get(
              'gateway_external_network_id', None)
          agent_mode = agent_conf.get('agent_mode', 'legacy')
          if not use_namespaces and router_id != sync_router['id']:
              continue
          ex_net_id = (sync_router['external_gateway_info'] or {}).get(
              'network_id')
          if ((not ex_net_id and not handle_internal_only_routers) or
                  (ex_net_id and gateway_external_network_id and
                   ex_net_id != gateway_external_network_id)):
              continue
          is_router_distributed = sync_router.get('distributed', False)
          if agent_mode in ('legacy', 'dvr_snat') and (
                  not is_router_distributed):
              candidates.append(l3_agent)

  so  'if agent_mode in ('legacy', 'dvr_snat') ' should be 'if
  agent_mode  == 'legacy''
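
  In code form, the proposed change described above would look like (sketch
  only, covering just the final condition):

  if agent_mode == 'legacy' and not is_router_distributed:
      candidates.append(l3_agent)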

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416149] [NEW] Horizon doesn't show a useful message when a service user is locked out

2015-01-29 Thread Doug Fish
Public bug reported:

Scenario:
Service users are locked out in the underlying user repository, like ldap.
Login to horizon with a normal user.
Login works fine.
You are shown a popup message that the user is not authorized.

The popup message does not contain enough detail to know which user is
affected and what the real problem is.  In addition, the dashboard logs only
show unauthorized as well, and not the root cause of the problem.  This makes
debugging the problem difficult.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416149

Title:
  Horizon doesn't show a useful message when a service user is locked
  out

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Scenario:
  Service users are locked out in the underlying user repository, like ldap.
  Login to horizon with a normal user.
  Login works fine.
  You are shown a popup message that the user is not authorized.

  The popup message does not contain enough detail to know which user is
  affected and what the real problem is.  In addition, the dashboard logs
  only show unauthorized as well, and not the root cause of the problem.
  This makes debugging the problem difficult.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415807] Re: Instance Count does does not update Project Limits when launching new VM

2015-01-29 Thread Kieran Spear
*** This bug is a duplicate of bug 1369621 ***
https://bugs.launchpad.net/bugs/1369621

** This bug has been marked a duplicate of bug 1369621
   Project limits don't update when using the input selector to change instance 
count

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415807

Title:
  Instance Count does does not update Project Limits when launching new
  VM

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When you are launching a new VM and you change the Flavour, the Project
  Limits (the box on the right side) update on the fly. But if you update
  the Instance Count (which should affect the Project Limits), the Project
  Limits are not updated at all.

  Version:
  OpenStack Icehouse, RDO on RHEL7

  Steps to Reproduce:
  1. Open Dashboard -> Images -> Click Launch button on some instance
  2. Select flavour x1.large
  3. Notice that the green bars in Project Limits grow, e.g. vCPUs grow by 8.
  4. Increase the instance count to 4

  Actual result:
    The green bars in Project Limits do not change.

  Expected result:
    The green bars in Project Limits should grow, e.g. vCPUs should grow
    from 8 to 32.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369621] Re: Project limits don't update when using the input selector to change instance count

2015-01-29 Thread Kieran Spear
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Status: New => Confirmed

** Changed in: horizon/juno
 Assignee: (unassigned) => Kieran Spear (kspear)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1369621

Title:
  Project limits don't update when using the input selector to change
  instance count

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  In Progress

Bug description:
  To recreate:
Project - Compute - Instances - Launch instance
Change the instance count using the up/down arrows
Observe how the project limits do not update

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1369621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416269] [NEW] boot vm failed with --block-device set as attach volume failed during boot

2015-01-29 Thread Jerry Cai
Public bug reported:

When attaching an existing volume while booting a VM with the following command:
nova boot --flavor small --image c7e8738b-c2c6-4365-a305-040bfbd1b514 --nic 
net-id=abfe3157-d23c-4d15-a7ff-80429a7d9b27 --block-device 
source=volume,dest=volume,bootindex=1,shutdown=remove,id=ca383135-d619-43c2-8826-95ae4d475581
 test11

It failed in the block device mapping phase; the error from nova is:
2015-01-30 01:59:14.030 28957 ERROR nova.compute.manager [-] [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] Instance failed block device setup
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] Traceback (most recent call last):
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1856, in 
_prep_block_device
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] do_check_attach=do_check_attach) +
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 407, in 
attach_block_devices
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] map(_log_and_attach, 
block_device_mapping)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 405, in 
_log_and_attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] bdm.attach(*attach_args, 
**attach_kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 48, in 
wrapped
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] ret_val = method(obj, context, *args, 
**kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 272, in 
attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] self['mount_device'], mode=mode)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/nova/volume/cinder.py, line 213, in wrapper
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] res = method(self, ctx, volume_id, 
*args, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/nova/volume/cinder.py, line 359, in attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] mountpoint, mode=mode)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py, line 326, in 
attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] 'mode': mode})
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py, line 311, in 
_action
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] return self.api.client.post(url, 
body=body)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/cinderclient/client.py, line 91, in post
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] return self._cs_request(url, 'POST', 
**kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/cinderclient/client.py, line 85, in 
_cs_request
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] return self.request(url, method, 
**kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
/usr/lib/python2.7/site-packages/cinderclient/client.py, line 80, in request
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] return super(SessionClient, 
self).request(*args, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File