[Yahoo-eng-team] [Bug 1507866] [NEW] Scheduling of Firewall rules

2015-10-19 Thread Reedip
Public bug reported:

(A)Summary: Firewall rules in OpenStack do not support scheduling
(B)Further information:
(B.1)High level description: Currently, OpenStack firewall rules do not
allow scheduling. When a router is associated with a firewall, the rules
making up the firewall remain active for as long as they are part of the
firewall.
However, users may require scheduled behaviour in the firewall, so that a
single rule acts on the firewall's packets only for a specific time
period. After the time period expires, the rule can change its behaviour
on the same packets.
(B.2)Pre-conditions: This requirement has no explicit pre-condition.
Note:
- This is applicable for all tenants
(B.3)Step-by-step reproduction steps: N/A, as this feature does not
currently exist in OpenStack.
(B.4)Expected output: The user should be able to create a firewall rule
which can be scheduled, providing extended support to the user.
(B.5)Actual output: Such a facility is not available in firewall rules.
(B.6)Version:
OpenStack version (Specific stable branch, or git hash if from 
trunk): Tag ID : c1310f32fbb6dfa958bb31152ee5b492b177c6cb
Linux distro, kernel.: Ubuntu 14.04
DevStack or other _deployment_ mechanism?
Environment: Neutron with Firewall Extensions, on a single-node machine.
However, the above requirement is independent of the environment.
(C)Perceived severity: Medium
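
A scheduled firewall rule ultimately reduces to a time-window check. A
minimal sketch of that check (illustrative only, not Neutron code; the
function name is made up), including windows that wrap past midnight:

```python
from datetime import time

def rule_active(now, start, end):
    """Return True if a scheduled rule should be enforced at `now`.

    Handles windows that wrap past midnight (e.g. 22:00-06:00).
    """
    if start <= end:
        return start <= now < end
    # Wrapping window: active late in the day or early the next day.
    return now >= start or now < end
```

Evaluating this on every packet would be too costly; in practice the agent
would re-program the backend rules when the window boundary is crossed.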

** Affects: neutron
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New


** Tags: fwaas

** Changed in: neutron
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507866

Title:
  Scheduling of Firewall rules

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464377] Re: Keystone v2.0 api accepts tokens deleted with v3 api

2015-10-19 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1464377

Title:
  Keystone v2.0 api accepts tokens deleted with v3 api

Status in Keystone:
  Expired

Bug description:
  Keystone tokens that are deleted using the v3 api are still accepted by
  the v2 api. Steps to reproduce:

  1. Request a scoped token as a member of a tenant.
  2. Delete it using DELETE /v3/auth/tokens
  3. Request the tenants you can access with GET v2.0/tenants
  4. The token is accepted and keystone returns the list of tenants

  The token was a PKI token. Admin tokens appear to be deleted correctly.
  This could be a problem if a user's access needs to be revoked but they
  are still able to access v2 functions.
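
The expected behaviour can be illustrated with a toy model (not Keystone
code; all names here are hypothetical): both API versions must consult
the same revocation state, so a token deleted through v3 is rejected by
v2 as well:

```python
class TokenStore:
    """Toy model of token validation shared across API versions."""

    def __init__(self):
        self._revoked = set()

    def revoke_v3(self, token_id):
        # DELETE /v3/auth/tokens records the token as revoked.
        self._revoked.add(token_id)

    def is_valid_v2(self, token_id):
        # Correct behaviour: v2 honours revocations made through v3.
        return token_id not in self._revoked
```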

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1464377/+subscriptions



[Yahoo-eng-team] [Bug 1414527] Re: The multipath device descriptors remove failed when the volume has partition

2015-10-19 Thread weiweigu@zte
** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414527

Title:
  The multipath device descriptors remove failed when the volume has
  partition

Status in OpenStack Compute (nova):
  New

Bug description:
  
  tested environment:
  iscsi_use_multipath=True and the iSCSI volume backend is FUJITSU.
  An instance boots from a volume whose image has a partition.

  After terminating the instance, I found that the multipath device
"/dev/mapper/mpathq" failed to be deleted, while the sd* devices were
deleted successfully.

  [root@opencos170 /(keystone_admin)]# multipath -l /dev/mapper/mpathq
  mpathq (360e00d28002800e2000f) dm-2 
  size=201G features='0' hwhandler='0' wp=rw
  [root@opencos170 /(keystone_admin)]#
   
  The multipath device can't be deleted using the command "multipath -f
/dev/mapper/mpathq":
  [root@opencos170 /(keystone_admin)]# multipath -f /dev/mapper/mpathq
  Jan 26 10:40:37 | mpathqp3: map in use
  Jan 26 10:40:37 | failed to remove multipath map /dev/mapper/mpathq
  [root@opencos170 /(keystone_admin)]#
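
  The "mpathqp3: map in use" message typically means the kernel still
holds device-mapper partition mappings on top of the multipath map, so
they would need to be flushed (e.g. with "kpartx -d") before "multipath
-f" can succeed. A sketch for identifying them (illustrative only; the
helper name is made up):

```python
import re

def partition_maps(base, dm_names):
    """Return partition maps (e.g. 'mpathqp3') that must be flushed
    before 'multipath -f' can remove the base map `base`."""
    pat = re.compile(re.escape(base) + r'p\d+$')
    return [n for n in dm_names if pat.match(n)]
```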

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414527/+subscriptions



[Yahoo-eng-team] [Bug 1414865] Re: Lost the sd* devices which belong to the multipath device.

2015-10-19 Thread weiweigu@zte
** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414865

Title:
  Lost the sd* devices which belong to the multipath device.

Status in OpenStack Compute (nova):
  New

Bug description:
  tested environment:
  iscsi_use_multipath=True and the iSCSI volume backend is ZTE KS3200.
  The nova compute node connects to the KS3200 with four ports and
launches an instance which boots from a volume.

  I see there are four sd* devices belonging to the multipath device
"/dev/mapper/mpathc".
  [root@opencos170 ~]# multipath -ll
  mpathc (360e00d28002800e2000c) dm-3 FUJITSU ,ETERNUS_DXL 
  size=1.0T features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=50 status=active
  | |- 25:0:0:1 sdf 8:80  active ready  running
  | `- 24:0:0:1 sdi 8:128 active ready  running
  `-+- policy='round-robin 0' prio=10 status=enabled
|- 22:0:0:1 sdh 8:112 active ready  running
`- 23:0:0:1 sde 8:64  active ready  running

  I disconnect one port of the KS3200 and attach a volume to the instance.
  Then I see the sdi device is lost.  There are only three sd* devices.
  [root@opencos170 ~]# multipath -ll
  mpathc (360e00d28002800e2000c) dm-3 FUJITSU ,ETERNUS_DXL 
  size=1.0T features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=50 status=active
  | |- 25:0:0:1 sdf 8:80  active ready  running
  `-+- policy='round-robin 0' prio=10 status=enabled
|- 22:0:0:1 sdh 8:112 active ready  running
`- 23:0:0:1 sde 8:64  active ready  running

  I reconnect the KS3200 port, but the sdi device never reappears.

  After many repetitions of the failure, I found the cause to be the
command "multipath -r".
  If the link between the compute node and the ZTE KS3200 is broken, the
sd* device is lost after executing the command "multipath -r".
  If the link is OK, the sd* device appears after executing the command
"multipath -r".

  Is the command "multipath -r" necessary when attaching or detaching a
  volume? Can it be removed?
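
  For checking path counts like the ones above, the sd* names can be
pulled out of "multipath -ll" output mechanically. A sketch
(illustrative only; the helper name is made up):

```python
import re

def path_devices(multipath_ll_output):
    """Extract the sd* path device names from `multipath -ll` output."""
    return re.findall(r'\b(sd[a-z]+)\b', multipath_ll_output)
```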

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414865/+subscriptions



[Yahoo-eng-team] [Bug 1507846] [NEW] Filtering ICMP packet based on ICMP code

2015-10-19 Thread Reedip
Public bug reported:

Summary: Support for filtering based on ICMP codes is missing in the
OpenStack firewall.
Further information:
High level description: Currently, OpenStack firewall rules allow
filtering of ICMP packets; however, filtering applies to all ICMP
packets. The firewall rules could be improved by introducing filtering
of ICMP packets based on the ICMP packet type/code.
There are various possible ICMP packet types (for example, type 8
corresponds to ICMP Echo, while type 0 is an ICMP Echo Reply). More
fine-grained functionality can be offered to the user by supporting
filtering based on ICMP type/code.

Pre-conditions: As this is more of a feature improvement than an
outright bug, there are no specific pre-conditions. However, the
following requirement can be mapped to the pre-condition of the bug:
   * The user wants to create a firewall which allows incoming ICMP
pings, but blocks ICMP pings from the current subnet.
   [ Note ]:
   (a) This is applicable to all tenants
   (b) This feature assumes the user wants a node to accept a ping
request and respond to it, but not to send a request out.

Step-by-step reproduction steps:
   * The user creates a firewall rule with the ICMP protocol and a
specific source/destination IP.
   * The user creates a firewall rule with specific ports.
   * The user cannot create a rule that fulfils the requirement (allow
incoming ICMP ping requests, but block outgoing ICMP ping requests).

Expected output: The user should be able to create a firewall rule
which fulfils the user's requirement.

Actual output: Such a facility is not available in the firewall
rules.

Version:
OpenStack version (Specific stable branch, or git hash if from 
trunk): Tag ID : c1310f32fbb6dfa958bb31152ee5b492b177c6cb
Linux distro, kernel.: Ubuntu 14.04
DevStack or other _deployment_ mechanism?
Environment: Neutron with Firewall Extensions, on a single-node machine.
However, the above requirement is independent of the environment.
Perceived severity: Medium/Low, depending on the importance of deep
packet inspection.
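
Backends such as iptables already accept a type/code pair, so the match
the requested feature would generate looks roughly like this sketch
(illustrative only; the helper name is made up):

```python
def icmp_match(icmp_type, icmp_code=None, action="DROP"):
    """Build an iptables-style match for a specific ICMP type/code.

    iptables accepts '--icmp-type type[/code]', e.g. '8' (echo request)
    or '3/1' (destination unreachable / host unreachable).
    """
    spec = str(icmp_type) if icmp_code is None else f"{icmp_type}/{icmp_code}"
    return f"-p icmp --icmp-type {spec} -j {action}"
```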

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas


[Yahoo-eng-team] [Bug 1507834] [NEW] Liberty: vncserver_listen doesn't bind anymore in a dual-stack setup

2015-10-19 Thread Thiago Martins
Public bug reported:

Guys,

The vncserver_listen doesn't work anymore in a dual-stack setup, error:

2015-10-19 21:22:03.180 2737 ERROR nova.compute.manager [instance:
9fc58e29-41a0-44a8-aa31-1773d5cdea13] 2015-10-20T01:22:02.895921Z qemu-
system-x86_64: -vnc [::]:0: Failed to start VNC server on `(null)':
address resolution failed for [::]:5900: Name or service not known

Line in /etc/nova/nova.conf, group [vnc]:

---
vncserver_listen = ::
---

However, this very same line works on Kilo.

I'm running Liberty on Trusty, using Ubuntu Cloud Archive.

Thanks!
Thiago
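
For reference, qemu expects IPv6 listen addresses in bracketed
host:port form, which is what the failing "-vnc [::]:0" argument is
attempting to resolve. A sketch of that formatting (illustrative only,
not nova code; the function name is made up):

```python
import ipaddress

def vnc_listen_arg(host, port):
    """Format a host/port pair for a listen argument, bracketing IPv6
    literals as qemu expects (e.g. '[::]:5900')."""
    try:
        if ipaddress.ip_address(host).version == 6:
            return f"[{host}]:{port}"
    except ValueError:
        pass  # not an IP literal; treat as a hostname
    return f"{host}:{port}"
```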

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1507834

Title:
  Liberty: vncserver_listen doesn't bind anymore in a dual-stack setup

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1507834/+subscriptions



[Yahoo-eng-team] [Bug 1507816] [NEW] Fix method name in Integration tests for Instance tests

2015-10-19 Thread Amogh
Public bug reported:

self._wait_till_spinner_disappears() is changed to
self.wait_till_popups_disappear().

This needs to be changed for instance tests: in instancespage.py

** Affects: horizon
 Importance: Undecided
 Assignee: Amogh (amogh-r-mavinagidad)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507816

Title:
  Fix method name in Integration tests for Instance tests

Status in OpenStack Dashboard (Horizon):
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507816/+subscriptions



[Yahoo-eng-team] [Bug 1507776] [NEW] Wrong OVS flows created for new networks

2015-10-19 Thread Claudiu Belu
Public bug reported:

neutron-openvswitch-agent seems to create wrong OVS flows for newly
created networks. This causes packet loss, including lost DHCP
requests, resulting in instances that do not receive an IP. This can
cause tempest tests to fail.

Restarting the neutron-openvswitch-agent will result in properly created
OVS flows, and the traffic flowing properly.

This issue has been observed in the Liberty release.

Details: http://paste.openstack.org/show/476764/

IRC discussion: http://eavesdrop.openstack.org/irclogs/%23openstack-
neutron/%23openstack-neutron.2015-10-19.log.html#t2015-10-19T20:27:31

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: liberty-backport-potential ovs


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507776

Title:
  Wrong OVS flows created for new networks

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507776/+subscriptions



[Yahoo-eng-team] [Bug 1507770] [NEW] _restore_local_vlan_map raises exception for untagged flat networks

2015-10-19 Thread Eric Larese
Public bug reported:

This line:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L359

can raise an exception when _restore_local_vlan_map runs in environments
with flat or untagged VLAN networks.  For example:

  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 301, in __init__
 self._restore_local_vlan_map()
   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 360, in _restore_local_vlan_map
 'segmentation_id']),
 ValueError: invalid literal for int() with base 10: 'None'


It appears the reason for this is that the command "ovs-vsctl list Port"
returns port entries that have segmentation_id set to None, for example:

other_config: {net_uuid="34baaa59-db42-4551-a1f0-0b2af85c288b",
network_type=flat, physical_network=default, segmentation_id=None}

This code already handles the case where net_uuid is missing or
local_vlan is DEAD_VLAN_TAG, but I think it should also handle the case
where segmentation_id is missing.

Should I propose this change?
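
A guarded conversion along these lines would avoid the ValueError (a
sketch of the idea, not the actual proposed patch; the helper name is
made up):

```python
def parse_segmentation_id(other_config):
    """Return the segmentation ID as an int, or None when it is absent
    or recorded as the string 'None' (as ovs-vsctl reports for flat
    networks)."""
    seg = other_config.get('segmentation_id')
    if seg in (None, 'None'):
        return None
    return int(seg)
```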

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507770

Title:
  _restore_local_vlan_map raises exception for untagged flat networks

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507770/+subscriptions



[Yahoo-eng-team] [Bug 1507761] Re: qos wrong units in max-burst-kbps option

2015-10-19 Thread Assaf Muller
The OVS documentation describes OVS's own implementation. The Neutron
QoS API offers an abstraction that happens to have only one
implementation at this time, but has your own LB patch
(https://review.openstack.org/236210) as a second one. As long as
something translates from the API's unit of measurement to each
implementation's unit of measurement, we're fine. We're not bound to
any one implementation's documentation.

Assigning to Miguel for further triaging.

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507761

Title:
  qos wrong units in max-burst-kbps option

Status in neutron:
  Opinion

Bug description:
  In neutron in qos bw limit rule table in database and in API extension
  parameter "max-burst-kbps" has got wrong units suggested. Burst should
  be given in kb instead of kbps because according to for example ovs
  documentation: http://openvswitch.org/support/config-cookbooks/qos-
  rate-limiting/ it is "a parameter to the policing algorithm to
  indicate the maximum amount of data (in Kb) that this interface can
  send beyond the policing rate."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507761/+subscriptions



[Yahoo-eng-team] [Bug 1424549] Re: enlisting of nodes: seed_random fails due to self signed certificate

2015-10-19 Thread Mike Pontillo
Actually, I'll go ahead and mark this "Triaged"; it *is* a real bug, it
just isn't as critical as we assumed.

To fix this bug, we should configure cloud-init to NOT call pollinate
during enlistment (to avoid this spurious error).

As a follow-on fix, it might be a good idea for cloud-init to fall back
to 'insecure' mode (or simply use the public CA roots in /etc/ssl/certs
rather than a pinned chain) and log this as a warning if the pinned
certificate could not be validated.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1424549

Title:
  enlisting of nodes: seed_random fails due to self signed certificate

Status in cloud-init:
  New
Status in MAAS:
  Triaged

Bug description:
  Using Maas 1.7.1 on trusty, the following error message in the MAAS
  provided ephemeral image for the step pollinate is executed:

  curl: SSL certificate problem: self signed certificate in certificate
  chain.

  This way random number generator is not initialized correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1424549/+subscriptions



[Yahoo-eng-team] [Bug 1507761] [NEW] qos wrong units in max-burst-kbps option

2015-10-19 Thread Slawek Kaplonski
Public bug reported:

In neutron, the qos bandwidth limit rule table in the database and the
API extension parameter "max-burst-kbps" suggest the wrong units. Burst
should be given in kb instead of kbps because, according to for example
the OVS documentation
(http://openvswitch.org/support/config-cookbooks/qos-rate-limiting/),
it is "a parameter to the policing algorithm to indicate the maximum
amount of data (in Kb) that this interface can send beyond the policing
rate."
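
To make the unit distinction concrete: for the OVS implementation the
rate is a speed (kbit/s) while the burst is an amount of data (kbit). A
sketch of the translation layer (illustrative only; the function name
and the ~10% default, which follows the OVS cookbook's suggestion, are
assumptions):

```python
def ovs_policing_args(rate_kbps, burst_kb=None):
    """Translate API-level rate/burst values to OVS ingress policing
    settings, keeping the units straight: rate is kbit/s, burst is an
    amount of data in kbit (not a rate)."""
    if burst_kb is None:
        # Commonly suggested default: burst of roughly 10% of the rate.
        burst_kb = max(1, rate_kbps // 10)
    return {
        "ingress_policing_rate": rate_kbps,   # kbit/s
        "ingress_policing_burst": burst_kb,   # kbit of data, not a rate
    }
```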

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507761

Title:
  qos wrong units in max-burst-kbps option

Status in neutron:
  New

Bug description:
  In neutron in qos bw limit rule table in database and in API extension
  parameter "max-burst-kbps" has got wrong units suggested. Burst should
  be given in kb instead of kbps because according to for example ovs
  documentation: http://openvswitch.org/support/config-cookbooks/qos-
  rate-limiting/ it is "a parameter to the policing algorithm to
  indicate the maximum amount of data (in Kb) that this interface can
  send beyond the policing rate."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507761/+subscriptions



[Yahoo-eng-team] [Bug 1472999] Re: filter doesn't handle unicode characters

2015-10-19 Thread Doug Hellmann
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

** Changed in: python-novaclient
Milestone: None => 2.32.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472999

Title:
  filter doesn't handle unicode characters

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (nova):
  Fix Committed
Status in python-glanceclient:
  In Progress
Status in python-novaclient:
  Fix Released

Bug description:
  1. Go to project/instances
  2. Insert 'ölk' into the filter field
  3. Apply the filter
  4. The following error is raised:
  UnicodeEncodeError at /project/instances/

  'ascii' codec can't encode character u'\xf6' in position 0: ordinal
  not in range(128)

  Request Method:   GET
  Request URL:  http://localhost:8000/project/instances/
  Django Version:   1.8.2
  Exception Type:   UnicodeEncodeError
  Exception Value:  

  'ascii' codec can't encode character u'\xf6' in position 0: ordinal
  not in range(128)

  Exception Location:   /usr/lib64/python2.7/urllib.py in urlencode, line 1347
  Python Executable:/usr/bin/python
  Python Version:   2.7.10
  Python Path:  

  ['/home/mrunge/work/horizon',
   '/usr/lib64/python27.zip',
   '/usr/lib64/python2.7',
   '/usr/lib64/python2.7/plat-linux2',
   '/usr/lib64/python2.7/lib-tk',
   '/usr/lib64/python2.7/lib-old',
   '/usr/lib64/python2.7/lib-dynload',
   '/usr/lib64/python2.7/site-packages',
   '/usr/lib64/python2.7/site-packages/gtk-2.0',
   '/usr/lib/python2.7/site-packages',
   '/home/mrunge/work/horizon/openstack_dashboard']
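
The usual Python 2-era fix for this class of error was to encode filter
values to UTF-8 bytes before URL-encoding them, rather than letting the
implicit ASCII codec run. A sketch (illustrative only, not the Horizon
patch; shown with Python 3's urllib.parse):

```python
from urllib.parse import urlencode

def safe_query(params):
    """URL-encode query parameters, encoding text values to UTF-8
    bytes first so non-ASCII filters like 'ölk' survive."""
    return urlencode({k: v.encode('utf-8') if isinstance(v, str) else v
                      for k, v in params.items()})
```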

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472999/+subscriptions



[Yahoo-eng-team] [Bug 1507748] [NEW] Request to release 'networking-ale-omniswitch' sub-project as part of Liberty main release

2015-10-19 Thread VADIVEL POONATHAN
Public bug reported:

As per the release process for Neutron sub-projects, this bug report
requests that the Neutron release team tag and release the
"networking-ale-omniswitch" sub-project along with the Liberty main
release.
https://launchpad.net/networking-ale-omniswitch
https://pypi.python.org/pypi/networking-ale-omniswitch

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Summary changed:

- Release to release networking-ale-omniswitch sub-project as part of Liberty 
main release 
+ Request to release 'networking-ale-omniswitch' sub-project as part of Liberty 
main release

** Tags added: release-subproject

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507748

Title:
  Request to release 'networking-ale-omniswitch' sub-project as part of
  Liberty main release

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507748/+subscriptions



[Yahoo-eng-team] [Bug 1507752] [NEW] Rally change breaks Keystone rally job

2015-10-19 Thread Davanum Srinivas (DIMS)
Public bug reported:

This change:
http://git.openstack.org/cgit/openstack/rally/diff/rally/plugins/openstack/scenarios/keystone/basic.py?id=f871de842214f103b4841160e90c73cd98c4f5ad

Breaks this job:
http://logs.openstack.org/74/231574/6/check/gate-rally-dsvm-keystone/57d4dfc/rally-plot/results.html.gz#/KeystoneBasic.create_user/failures

Traceback:
Traceback (most recent call last):
  File "/opt/stack/new/rally/rally/task/runner.py", line 64, in _run_scenario_once
    method_name)(**kwargs) or scenario_output
  File "/opt/stack/new/rally/rally/plugins/openstack/scenarios/keystone/basic.py", line 33, in create_user
    self._user_create(**kwargs)
  File "/opt/stack/new/rally/rally/task/atomic.py", line 83, in func_atomic_actions
    f = func(self, *args, **kwargs)
  File "/opt/stack/new/rally/rally/plugins/openstack/scenarios/keystone/utils.py", line 45, in _user_create
    name, password=password, email=email, **kwargs)
TypeError: create() got an unexpected keyword argument 'name_length'
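The TypeError above comes from Rally passing a keyword ('name_length') that the updated create() no longer accepts. A minimal, illustrative Python sketch of one defensive pattern for this class of breakage (all names below are made up; this is not Rally's actual fix): filter the kwargs down to what the callee's signature supports before calling it.

```python
import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    """Drop keyword arguments the callee does not accept.

    Illustrative only: one way to survive an API that removed a
    parameter (here, the removal of 'name_length' from create()).
    """
    sig = inspect.signature(func)
    # If the callee takes **kwargs itself, pass everything through.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD
           for p in sig.parameters.values()):
        return func(*args, **kwargs)
    supported = {k: v for k, v in kwargs.items() if k in sig.parameters}
    return func(*args, **supported)

def create(name, password=None, email=None):
    # Stand-in for the keystone client call in the traceback.
    return (name, password, email)

# 'name_length' is silently dropped instead of raising TypeError.
result = call_with_supported_kwargs(create, "alice", password="s3cret",
                                    name_length=10)
```

The trade-off is that typos in keyword names are also silently dropped, which is why such shims are usually temporary compatibility measures.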

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: rally
 Importance: Undecided
 Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1507752

Title:
  Rally change breaks Keystone rally job

Status in Keystone:
  New
Status in Rally:
  New

Bug description:
  This change:
  
http://git.openstack.org/cgit/openstack/rally/diff/rally/plugins/openstack/scenarios/keystone/basic.py?id=f871de842214f103b4841160e90c73cd98c4f5ad

  Breaks this job:
  
http://logs.openstack.org/74/231574/6/check/gate-rally-dsvm-keystone/57d4dfc/rally-plot/results.html.gz#/KeystoneBasic.create_user/failures

  Traceback:
  Traceback (most recent call last):
    File "/opt/stack/new/rally/rally/task/runner.py", line 64, in _run_scenario_once
      method_name)(**kwargs) or scenario_output
    File "/opt/stack/new/rally/rally/plugins/openstack/scenarios/keystone/basic.py", line 33, in create_user
      self._user_create(**kwargs)
    File "/opt/stack/new/rally/rally/task/atomic.py", line 83, in func_atomic_actions
      f = func(self, *args, **kwargs)
    File "/opt/stack/new/rally/rally/plugins/openstack/scenarios/keystone/utils.py", line 45, in _user_create
      name, password=password, email=email, **kwargs)
  TypeError: create() got an unexpected keyword argument 'name_length'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1507752/+subscriptions



[Yahoo-eng-team] [Bug 1507723] [NEW] Octavia Barbican cert manager broken

2015-10-19 Thread German Eichberger
Public bug reported:

Followed instruction on
https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-to-create-tls-
loadbalancer#Create_TLS_enabled_load_balancer: to create a TLS listener.

After setting the undocumented cert_manager option to barbican I get the
following error:

2015-10-14 05:18:12.138 24174 DEBUG keystoneclient.session [-] RESP: [300] Content-Length: 351 Content-Type: application/json; charset=UTF-8 Connection: close
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-04-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.key-manager-v1+json"}], "id": "v1", "links": [{"href": "http://172.16.90.132:9311/v1/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
 _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:216
2015-10-14 05:18:12.139 24174 DEBUG keystoneclient.session [-] REQ: curl -g -i -X POST http://172.16.90.132:9311/v1/containers/29c5530d-400f-4c15-ae0e-ed16b4dd1dc3/consumers/ -H "User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H "X-Auth-Token: {SHA1}8653790acd55f3ec43967c55b93c524be8b3521a" -d '{"URL": null, "name": "Octavia"}' _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:198
2015-10-14 05:18:12.243 24174 DEBUG keystoneclient.session [-] RESP: [400] Connection: close Content-Type: application/json; charset=UTF-8 Content-Length: 159 x-openstack-request-id: req-528d0877-f2c9-4853-8040-aad3e6fa62ca
RESP BODY: {"code": 400, "description": "Provided object does not match schema 'Consumer': None is not of type 'string'. Invalid property: 'URL'", "title": "Bad Request"}
 _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:216
2015-10-14 05:18:12.244 24174 DEBUG barbicanclient.client [-] Response status 400 _check_status_code /opt/stack/python-barbicanclient/barbicanclient/client.py:89
2015-10-14 05:18:12.244 24174 ERROR barbicanclient.client [-] 4xx Client error: Bad Request
2015-10-14 05:18:12.244 24174 ERROR octavia.certificates.manager.barbican [-] Error getting http://172.16.90.132:9311/v1/containers/29c5530d-400f-4c15-ae0e-ed16b4dd1dc3: Bad Request
2015-10-14 05:18:12.248 24174 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.ListenerUpdate' (f36941f6-a553-48b8-b30b-6e375619ca9a) transitioned into state 'FAILURE' from state 'RUNNING'
1 predecessors (most recent atoms first):
  octavia-create-listener_flow
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker Traceback (most recent call last):
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker     result = task.execute(**arguments)
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 53, in execute
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker     self.amphora_driver.update(listener, vip)
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 64, in update
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker     certs = self._process_tls_certificates(listener)
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 142, in _process_tls_certificates
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker     self.cert_manager.get_cert(listener.tls_certificate_id))
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/certificates/manager/barbican.py", line 150, in get_cert
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker     ).format(cert_ref, str(e)))
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker     six.reraise(self.type_, self.value, self.tb)
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/certificates/manager/barbican.py", line 143, in get_cert
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker     url=resource_ref
2015-10-14 05:18:12.248 24174 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/python-barbican
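The 400 in the log is Barbican rejecting the consumer registration body {"URL": null, "name": "Octavia"} because the Consumer schema requires URL to be a string ('None is not of type string'). A minimal sketch of how the payload could avoid that (the helper below is illustrative, not Octavia's actual code): only include the key when a real URL is available.

```python
import json

def consumer_payload(name, url=None):
    """Build a Barbican consumer registration body.

    Illustrative sketch: omit the "URL" key entirely when no URL is
    known, rather than serializing it as JSON null, which violates
    the Consumer schema.
    """
    payload = {"name": name}
    if url is not None:
        payload["URL"] = url
    return json.dumps(payload)

# The null-valued key is omitted instead of being sent as "URL": null.
body = consumer_payload("Octavia")
```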

[Yahoo-eng-team] [Bug 1498315] Re: Suggest do not display the lbaas namespace interface ip when associate floating ip.

2015-10-19 Thread Ryan Moats
openstack/juno is in security support only

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498315

Title:
  Suggest do not display the lbaas namespace interface ip when associate
  floating ip.

Status in neutron:
  Won't Fix

Bug description:
  1. Create a lb pool and vip.
  2. Associate the vip to a floating IP; two IPs are then shown: one is the
lb vip, the other is the lbaas namespace interface IP address. Suggest
displaying only the vip address, since the lbaas namespace interface IP is
not visible to the user.
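A hedged sketch of the suggested filtering (field names follow the Neutron port schema; the 'neutron:LOADBALANCER' owner tag used to spot the internal port is an assumption for illustration, not confirmed by this report):

```python
def user_visible_port_ips(ports):
    # Hide ports owned by internal agents (owner tag assumed, see above);
    # everything else remains a candidate for floating-IP association.
    hidden_owners = {'neutron:LOADBALANCER'}
    return [ip['ip_address']
            for port in ports
            if port.get('device_owner') not in hidden_owners
            for ip in port.get('fixed_ips', [])]

ports = [
    # The vip port the user cares about (owner value illustrative):
    {'device_owner': 'some:owner',
     'fixed_ips': [{'ip_address': '10.0.0.5'}]},
    # The lbaas namespace interface port that should stay hidden:
    {'device_owner': 'neutron:LOADBALANCER',
     'fixed_ips': [{'ip_address': '10.0.0.6'}]},
]
visible = user_visible_port_ips(ports)
```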

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498315/+subscriptions



[Yahoo-eng-team] [Bug 1507703] [NEW] Karma for OSD doesn't load templates

2015-10-19 Thread Matt Borland
Public bug reported:

The openstack_dashboard Karma configuration is not looking in the right
place for templates and other resources.  For example, you cannot use
the 'templates' module to load templates for use in directive testing.

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507703

Title:
  Karma for OSD doesn't load templates

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The openstack_dashboard Karma configuration is not looking in the
  right place for templates and other resources.  For example, you
  cannot use the 'templates' module to load templates for use in
  directive testing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507703/+subscriptions



[Yahoo-eng-team] [Bug 1505677] Re: oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-conductor log

2015-10-19 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.versionedobjects
   Status: Fix Committed => Fix Released

** Changed in: oslo.versionedobjects
Milestone: None => 0.12.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505677

Title:
  oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-
  conductor log

Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-ansible:
  Fix Committed
Status in oslo.versionedobjects:
  Fix Released

Bug description:
  In nova-conductor we're seeing the following error for stable/liberty:

  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     executor_callback))
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     executor_callback)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     context, objname, objmethod, object_versions, args, kwargs)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 477, in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     if isinstance(result, nova_object.NovaObject) else result)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 535, in obj_to_primitive
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     version_manifest)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 507, in obj_make_compatible_from_manifest
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     return self.obj_make_compatible(primitive, target_version)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/nova/objects/instance.py", line 1325, in obj_make_compatible
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     target_version)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/nova/objects/base.py", line 262, in obj_make_compatible
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher     rel_versions = self.obj_relationships['objects']
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher KeyError: 'objects'
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher

  More details here:
  
http://logs.openstack.org/56/233756/8/check/gate-openstack-ansible-dsvm-commit/879f745/logs/aio1_nova_conductor_container-5ec67682/nova-conductor.log
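The traceback ends in rel_versions = self.obj_relationships['objects'], so any versioned object whose relationship map lacks an 'objects' entry raises KeyError during downgrade. An illustrative tolerant lookup (not the actual oslo/nova fix) degrades to "no child-version history" instead:

```python
def objects_rel_versions(obj_relationships):
    # Tolerant variant of the lookup that raised above:
    #     rel_versions = self.obj_relationships['objects']
    # A missing 'objects' entry means no recorded child-version history
    # for this class, so return an empty history rather than raising.
    try:
        return obj_relationships['objects']
    except KeyError:
        return []

history = objects_rel_versions({'objects': [('1.0', '1.1')]})
missing = objects_rel_versions({'fault': [('1.0', '1.0')]})
```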

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505677/+subscriptions



[Yahoo-eng-team] [Bug 1492505] Re: py34 intermittent failure

2015-10-19 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.messaging
   Status: Fix Committed => Fix Released

** Changed in: oslo.messaging
Milestone: None => 2.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492505

Title:
  py34 intermittent failure

Status in neutron:
  Fix Released
Status in oslo.messaging:
  Fix Released

Bug description:
  An instance here:

  http://logs.openstack.org/56/220656/1/gate/gate-neutron-
  python34/e2c4460/testr_results.html.gz

  message:"Bad checksum - calculated"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQmFkIGNoZWNrc3VtIC0gY2FsY3VsYXRlZFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDQxNDI0Nzk0Nzc1fQ==

  This has been observed in a couple of py34 jobs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492505/+subscriptions



[Yahoo-eng-team] [Bug 1505313] Re: nova kilo and contrail 2.2, instances fail to boot throwing an endpoint_override error

2015-10-19 Thread Max
Resolved:

After upgrading from Juno to Kilo, python-neutronclient was stuck at an
older version.

Running "apt-get install python-neutronclient=1:2.3.11-0ubuntu1.2~cloud0"
on all nodes fixed the issue.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Status: Opinion => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505313

Title:
  nova kilo and contrail 2.2, instances fail to boot throwing an
  endpoint_override error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I am running:

  Ubuntu 14.04

  Contrail 2.20
  root@cmpt02:~# dpkg -l | grep contrail
  ii  contrail-lib 2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail libraries
  ii  contrail-nodemgr 2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail nodemgr implementation
  ii  contrail-nova-driver 2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenStack Nova compute-node driver for OpenContrail
  ii  contrail-utils   2.20+0~1441967460.80~1.bb1145b   
 amd64OpenContrail tools and utilities
  ii  contrail-vrouter-agent   2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail vrouter agent
  ii  contrail-vrouter-dkms2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail VRouter - DKMS version
  ii  contrail-vrouter-utils   2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail VRouter - Utilities
  ii  python-backports.ssl-match-hostname  3.4.0.2-1contrail1   
 all  The ssl.match_hostname() function from Python 3.4
  ii  python-bitarray  0.8.0-2contrail1 
 amd64Python module for efficient boolean array handling
  ii  python-contrail  2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail python-libs
  ii  python-contrail-vrouter-api  2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail vrouter agent api
  ii  python-geventhttpclient  1.1.0-1contrail1 
 amd64http client library for gevent
  ii  python-opencontrail-vrouter-netns2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenContrail vrouter network namespace package

  
  OpenStack Kilo packages
  root@cmpt02:~# dpkg -l | grep nova
  ii  contrail-nova-driver 2.20+0~1443532552.81~1.c0c6c68   
 amd64OpenStack Nova compute-node driver for OpenContrail
  ii  nova-common1:2015.1.1-0ubuntu1~cloud2 
   all  OpenStack Compute - common files
  ii  nova-compute1:2015.1.1-0ubuntu1~cloud2
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm   1:2015.1.1-0ubuntu1~cloud2
all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt   1:2015.1.1-0ubuntu1~cloud2
all  OpenStack Compute - compute node libvirt support
  ii  python-nova   1:2015.1.1-0ubuntu1~cloud2  
  all  OpenStack Compute Python libraries
  ii  python-novaclient1:2.22.0-0ubuntu1~cloud0 
 all  client library for OpenStack Compute API

  
  Nova will not boot an instance; debug log snippet from 
/var/log/nova/nova-compute.log:

  ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
  TRACE nova.compute.manager Traceback (most recent call last):
  TRACE nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1782, in _allocate_network_async
  TRACE nova.compute.manager     dhcp_options=dhcp_options)
  TRACE nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 406, in allocate_for_instance
  TRACE nova.compute.manager     neutron = get_client(context)
  TRACE nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 221, in get_client
  TRACE nova.compute.manager     region_name=CONF.neutron.region_name)
  TRACE nova.compute.manager   File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1200, in __init__
  TRACE nova.compute.manager     self.httpclient = client.construct_http_client(**kwargs)
  TRACE nova.compute.manager TypeError: construct_http_client() got an unexpected keyword argument 'endpoint_override'

  
  2015-10-12 18:40:06.334 27007 ERROR nova.compute.manager 
[req-58ce7b2e-bc73-40ee-a368-cfbeaed434ca 42329176f69a4cc1b7d5e6ae805080cd 
7812bd244b7f4a8eba3a5cb1213210a5 - - -] [instance: 0713d74f-fe59-4992-894c-
  e3378fb1752d] Instance failed to spawn
  2015-10-12 18:40:06.334 27007 TRACE nova.compute.manager Traceback (most 
recent call last):
  201

[Yahoo-eng-team] [Bug 1507684] [NEW] Unable to establish tunnel across hypervisor

2015-10-19 Thread Romil Gupta
Public bug reported:


The networking-vsphere project runs an ovsvapp agent on each ESXi host
inside a service VM; the agent talks to a neutron-server that has l2pop
enabled in a multi-hypervisor deployment (e.g. KVM plus ESXi). Tunnels
are not getting established between the KVM compute node and the ESXi
host. The l2pop mech_driver needs to embrace the ovsvapp agent so that
the tunnels can form.

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507684

Title:
  Unable to establish tunnel across hypervisor

Status in neutron:
  In Progress

Bug description:
  
  The networking-vsphere project runs an ovsvapp agent on each ESXi host
  inside a service VM; the agent talks to a neutron-server that has l2pop
  enabled in a multi-hypervisor deployment (e.g. KVM plus ESXi). Tunnels
  are not getting established between the KVM compute node and the ESXi
  host. The l2pop mech_driver needs to embrace the ovsvapp agent so that
  the tunnels can form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507684/+subscriptions



[Yahoo-eng-team] [Bug 1507672] [NEW] [VPNaaS] failures when updating admin_state of ipsec connections

2015-10-19 Thread Elena Ezhova
Public bug reported:

When updating admin_state of a functioning ipsec connection to DOWN, it
can be seen in vpn agent logs that pluto fails to restart with the
following error:

2015-10-19 14:05:11.622 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Command: ['ip', 'netns', 'exec', u'qrouter-c758b05b-72fe-4cad-b6a3-696fa0741ed8', 'ipsec', 'addconn', '--ctlbase', u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/var/run/pluto.ctl', '--defaultroutenexthop', u'172.24.4.2', '--config', u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/etc/ipsec.conf', u'2d87fe22-47f4-4e37-a172-39990942db79']
2015-10-19 14:05:11.622 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Exit code: 1
2015-10-19 14:05:11.622 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Stdin:
2015-10-19 14:05:11.622 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Stdout: conn '2d87fe22-47f4-4e37-a172-39990942db79': not found (tried aliases)

(http://paste.openstack.org/show/476720/)

And if we try to update the connection's admin_state to UP, pluto doesn't
start at all due to a conflict with the already existing process:

2015-10-19 14:06:29.271 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec RuntimeError:
2015-10-19 14:06:29.271 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Command: ['ip', 'netns', 'exec', u'qrouter-c758b05b-72fe-4cad-b6a3-696fa0741ed8', 'ipsec', 'pluto', '--ctlbase', u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/var/run/pluto', '--ipsecdir', u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/etc', '--use-netkey', '--uniqueids', '--nat_traversal', '--secretsfile', u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/etc/ipsec.secrets', '--virtual_private', u'%v4:10.0.2.0/24,%v4:10.0.1.0/24']
2015-10-19 14:06:29.271 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Exit code: 10
2015-10-19 14:06:29.271 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Stdin:
2015-10-19 14:06:29.271 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Stdout:
2015-10-19 14:06:29.271 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Stderr: adjusting ipsec.d to /opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/etc
2015-10-19 14:06:29.271 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec pluto: lock file "/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/var/run/pluto.pid" already exists

(http://paste.openstack.org/show/476722/)

The reason is that the given connection wasn't included in ipsec.conf
because it had admin_state_up=False [1]. We have to skip loading such
connections into pluto on startup.

[1] https://github.com/openstack/neutron-
vpnaas/blob/master/neutron_vpnaas/services/vpn/device_drivers/template/openswan/ipsec.conf.template#L8
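The proposed behaviour can be sketched as follows (the helper name and the second connection id are made up for illustration; the first id is taken from the logs above): skip any connection whose admin_state_up is False both when rendering ipsec.conf and when building the list of conns to hand to 'ipsec addconn', so addconn is never invoked for a conn that is absent from the config file.

```python
def conns_to_load(connections):
    # Only connections that are administratively up belong in ipsec.conf
    # and in the list passed to 'ipsec addconn'; skipping the rest avoids
    # both failure modes shown in the logs above.
    return [conn['id'] for conn in connections
            if conn.get('admin_state_up', False)]

conns = [
    # id taken from the report's log; this conn is admin-down:
    {'id': '2d87fe22-47f4-4e37-a172-39990942db79', 'admin_state_up': False},
    # made-up id for an admin-up conn:
    {'id': 'aaaa1111-2222-3333-4444-555566667777', 'admin_state_up': True},
]
to_load = conns_to_load(conns)
```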

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

** Description changed:

- When updating admin_state of functioning ipsec connection to DOWN, it
+ When updating admin_state of a functioning ipsec connection to DOWN, it
  can be seen in vpn agent logs that pluto fails to restart with the
  following error:
  
  2015-10-19 14:05:11.622 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Command: ['ip', 'netns', 
'exec', u'qrouter-c758b05b-72fe-4cad-b6a3-696fa0741ed8', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.2', '--config', 
u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/etc/ipsec.conf',
 u'2d87fe22-47f4-4e37-a172-39990942db79']
  2015-10-19 14:05:11.622 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Exit code: 1
  2015-10-19 14:05:11.622 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Stdin:
  2015-10-19 14:05:11.622 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Stdout: conn 
'2d87fe22-47f4-4e37-a172-39990942db79': not found (tried aliases)
  
  (http://paste.openstack.org/show/476720/)
  
  And, if we try to update connection's admin_state to UP, pluto doesn't
  start at all due conflict with already existing process:
  
  2015-10-19 14:06:29.271 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec RuntimeError:
  2015-10-19 14:06:29.271 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Command: ['ip', 'netns', 
'exec', u'qrouter-c758b05b-72fe-4cad-b6a3-696fa0741ed8', 'ipsec', 'pluto', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/var/run/pluto',
 '--ipsecdir', 
u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/etc', 
'--use-netkey', '--uniqueids', '--nat_traversal', '--secretsfile', 
u'/opt/stack/data/neutron/ipsec/c758b05b-72fe-4cad-b6a3-696fa0741ed8/etc/ipsec.secrets',
 '--virtual_private', u'%v4

[Yahoo-eng-team] [Bug 1507656] [NEW] RPC callbacks push/pull mechanism should be remotable

2015-10-19 Thread Ihar Hrachyshka
Public bug reported:

That would allow agents to fetch an object as if they had direct access
to the database.

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: oslo qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507656

Title:
  RPC callbacks push/pull mechanism should be remotable

Status in neutron:
  Confirmed

Bug description:
  That would allow agents to fetch an object as if they had direct
  access to the database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507656/+subscriptions



[Yahoo-eng-team] [Bug 1507654] [NEW] Use VersionedObjectSerializer for RPC push/pull interfaces

2015-10-19 Thread Ihar Hrachyshka
Public bug reported:

Instead of reimplementing the serialization in neutron, allow
oslo.versionedobjects to handle it by using its own serializer.

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: oslo qos

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: oslo qos

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507654

Title:
  Use VersionedObjectSerializer for RPC push/pull interfaces

Status in neutron:
  Confirmed

Bug description:
  Instead of reimplementing the serialization in neutron, allow
  oslo.versionedobjects to handle it by using its own serializer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507654/+subscriptions



[Yahoo-eng-team] [Bug 1507585] Re: the neutron prompt inaccuracy information when delete the interface from a router

2015-10-19 Thread Ryan Moats
** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Confirmed

** Summary changed:

- the neutron prompt inaccuracy information when  delete the interface from  a 
router
+ router-interface-delete information prompt is inaccurate

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507585

Title:
  router-interface-delete information prompt is inaccurate

Status in python-neutronclient:
  Confirmed

Bug description:
  Reproduction steps:
  1. When I try to delete an interface from a router, neutron asks for a
  subnet ID instead of an interface ID, but the help text directs me to
  input an INTERFACE ID. What is the difference between an interface ID
  and a subnet ID? In any case, the argument name and the error prompt
  should be consistent.
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete fe765595-3749-40df-82bf-5c985701080f
  usage: neutron router-interface-delete [-h] [--request-format {json,xml}]
                                         ROUTER INTERFACE
  neutron router-interface-delete: error: too few arguments
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete fe765595-3749-40df-82bf-5c985701080f 6fcd183a-585b-434c-be45-bb8abbb946b5
  Unable to find subnet with name '6fcd183a-585b-434c-be45-bb8abbb946b5'
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete fe765595-3749-40df-82bf-5c985701080f 7ef8b18b-489f-4f9c-922b-685651fc6eb6
  Removed interface from router fe765595-3749-40df-82bf-5c985701080f.
  [root@nitinserver1 ~(keystone_admin)]# neutron router-port-list fe765595-3749-40df-82bf-5c985701080f
  +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                       |
  +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
  | c46628a7-3448-43b5-bf58-5fb832e38c21 |      | fa:16:3e:b7:d7:7d | {"subnet_id": "7ab67bd0-7cb0-4e47-bd2e-0aa277ebc31c", "ip_address": "20.1.1.1"} |
  +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
  [root@nitinserver1 ~(keystone_admin)]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1507585/+subscriptions



[Yahoo-eng-team] [Bug 1507620] [NEW] vm i/o blocked random at low i/o Pressure

2015-10-19 Thread tangyi
Public bug reported:

 The VM's I/O blocks about every five days. VM log below:

INFO: task jbd2/dm-0-8:373 blocked for more than 120 seconds. Not tainted 2.6.32-504.30.3.el6.x86_64 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
jbd2/dm-0-8   D  0   373      2 0x
 880037f5bd20 0046 000158c0 000158c0
 88000220f980 000158c0 000158c0 880037e48040
 81a8d020 8160dd00 880037e485f8 880037f5bfd8
Call Trace:
 [] ? prepare_to_wait+0x4e/0x80
 [] jbd2_journal_commit_transaction+0x19f/0x1500 [jbd2]
 [] ? lock_timer_base+0x3c/0x70
 [] ? autoremove_wake_function+0x0/0x40
 [] kjournald2+0xb8/0x220 [jbd2]
 [] ? autoremove_wake_function+0x0/0x40
 [] ? kjournald2+0x0/0x220 [jbd2]
 [] kthread+0x9e/0xc0
 [] child_rip+0xa/0x20
 [] ? kthread+0x0/0xc0
 [] ? child_rip+0x0/0x20

INFO: task auditd:978 blocked for more than 120 seconds. Not tainted 2.6.32-504.30.3.el6.x86_64 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
auditd        D  0   978      1 0x
 88007c261a88 0086 88002d1993d8 88002d1993d8
 880079cbf4e8 00035e86aaf88734  88007c261a28
 000138823544 8800379365f8 88007c261fd8
Call Trace:
 [] start_this_handle+0x25a/0x480 [jbd2]
 [] ? autoremove_wake_function+0x0/0x40
 [] jbd2_journal_start+0xb5/0x100 [jbd2]
 [] ext4_journal_start_sb+0x56/0xe0 [ext4]
 [] ext4_dirty_inode+0x2a/0x60 [ext4]
 [] __mark_inode_dirty+0x3b/0x160
 [] file_update_time+0xf2/0x170
 [] __generic_file_aio_write+0x230/0x490
 [] generic_file_aio_write+0x88/0x100
 [] ext4_file_write+0x58/0x190 [ext4]
 [] do_sync_write+0xfa/0x140
 [] ? jbd2_log_wait_commit+0xf5/0x140 [jbd2]
 [] ? ext4_statfs+0xef/0x200 [ext4]
 [] ? autoremove_wake_function+0x0/0x40
 [] ? do_statfs_native+0x98/0xb0
 [] ? security_file_permission+0x16/0x20
 [] vfs_write+0xb8/0x1a0
 [] sys_write+0x51/0x90
 [] system_call_fastpath+0x16/0x1b

INFO: task rs:main Q:Reg:1017 blocked for more than 120 seconds. Not tainted 2.6.32-504.30.3.el6.x86_64 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rs:main Q:Reg D  0  1017      1 0x0080
 88007a5f3a88 0086  000293d8
  88000220f660 00035e86aa672ebb 0008
 000158c0 000138823478 880037d01068 88007a5f3fd8
Call Trace:
 [] start_this_handle+0x25a/0x480 [jbd2]
[] ? cache_alloc_refill+0x15b/0x240
[] ? autoremove_wake_function+0x0/0x40
[] jbd2_journal_start+0xb5/0x100 [jbd2]
[] ext4_journal_start_sb+0x56/0xe0 [ext4]
[] ext4_dirty_inode+0x2a/0x60 [ext4]
[] __mark_inode_dirty+0x3b/0x160 []
file_update_time+0xf2/0x170 []
__generic_file_aio_write+0x230/0x490 []
generic_file_aio_write+0x88/0x100 []
ext4_file_write+0x58/0x190 [ext4] []
do_sync_write+0xfa/0x140 [] ?
autoremove_wake_function+0x0/0x40 [] ?
security_file_permission+0x16/0x20 []
vfs_write+0xb8/0x1a0 [] sys_write+0x51/0x90
[] ? __audit_syscall_exit+0x25e/0x290
[] system_call_fastpath+0x16/0x1b INFO: task
flush-253:0:1016 blocked for more than 120 seconds. Not tainted
2.6.32-504.30.3.el6.x86_64 #1 "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
flush-253:0 D  0 1016 2 0x0080 880037fbb430
0046  81041e98 8800
092625a3 00035e80853b5e12 880037dcb910 0b5a83fa
00013881ce2a 880037b19ad8 880037fbbfd8 Call Trace:
[] ? pvclock_clocksource_read+0x58/0xd0
[] ? sync_buffer+0x0/0x50 []
io_schedule+0x73/0xc0 [] sync_buffer+0x40/0x50
[] __wait_on_bit_lock+0x5a/0xc0 [] ?
sync_buffer+0x0/0x50 []
out_of_line_wait_on_bit_lock+0x78/0x90 [] ?
wake_bit_function+0x0/0x50 [] ?
__find_get_block+0xa9/0x200 [] __lock_buffer+0x36/0x40
[] do_get_write_access+0x493/0x520 [jbd2]
[] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
[] __ext4_journal_get_write_access+0x38/0x80 [ext4]
[] ext4_reserve_inode_write+0x73/0xa0 [ext4]
[] ? jbd2_journal_dirty_metadata+0xff/0x150 [jbd2]
[] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4]
[] ext4_dirty_inode+0x40/0x60 [ext4]
[] __mark_inode_dirty+0x3b/0x160 []
ext4_da_update_reserve_space+0x111/0x2a0 [ext4] []
ext4_ext_get_blocks+0x72d/0x14d0 [ext4] [] ?
generic_make_request+0x240/0x5a0 [] ?
mempool_alloc_slab+0x15/0x20 []
ext4_get_blocks+0xf9/0x2b0 [ext4] [] ?
pagevec_lookup_tag+0x25/0x40 []
mpage_da_map_and_submit+0xa1/0x470 [ext4] [] ?
jbd2_journal_start+0xb5/0x100 [jbd2] []
ext4_da_writepages+0x2ee/0x620 [ext4] []
do_writepages+0x21/0x40 []
writeback_single_inode+0xdd/0x290 []
writeback_sb_inodes+0xbd/0x170 []
writeback_inodes_wb+0xab/0x1b0 []
wb_writeback+0x2f3/0x410 [] ? del_timer_sync+0x22/0x30
[] wb_do_writeback+0x1a5/0x240 []
bdi_writeback_task+0x63/0x1b0 [] ?
bit_waitqueue+0x17/0xd0 [] ? bdi_start_fn+0x0/0x100
[] bdi_start_fn+0x86/0x100 [] ?
bdi_start_fn+0x0/0x100 [] kthread+0x9e/0xc0
[] child_rip+0xa/0x20 [] ?
kthread+0x0/0xc0 [] ? child_rip+0x0/0x

[Yahoo-eng-team] [Bug 1507610] [NEW] Keystone v3 incompatible with keystone v2

2015-10-19 Thread Niall Bunting
Public bug reported:

Overview:
After an upgrade to Keystone v3, the old style of location ceases to work: the
request raises a 404, which in turn surfaces as a 401. The client tries to
reach /v2.0/auth/tokens, a path that did not exist in v2; the correct v2 path
is /v2.0/tokens.

How to reproduce:
Create an image with the 'old style' location. Something like this:

| locations| [{"url": "swift+http://service%3Aglance-   
  |
|  | 
swift:redacted@10.0.0.8:5000/v2.0/glance/2f174860-efe3-4d5a-8f73-83e7298523b8", 
 |
|  | "metadata": {}}]   

Then upgrade to Keystone v3 and try to run a copy-from or image-download, such
as:
glance image-download 2f174860-efe3-4d5a-8f73-83e7298523b8 --file /opt/out

Output:
Keystone v3:
sudo ngrep -W byline port 5000 -d lo
interface: lo (127.0.0.0/255.0.0.0)
filter: (ip or ip6) and ( port 5000 )

T 10.0.0.8:45256 -> 10.0.0.8:5000 [AP]
POST /v2.0/auth/tokens HTTP/1.1.
Host: 10.0.0.8:5000.
Content-Length: 222.
Accept-Encoding: gzip, deflate.
Accept: application/json.
User-Agent: python-keystoneclient.
Connection: keep-alive.
Content-Type: application/json.
.
{"auth": {"scope": {"project": {"domain": {"id": "default"}, "name": 
"service"}}, "identity": {"password": {"user": {"domain": {"id": "default"}, 
"password": "redacted", "name": "glance-swift"}}, "methods": ["password"]}}}
##
T 10.0.0.8:5000 -> 10.0.0.8:45256 [AP]
HTTP/1.1 404 Not Found.
Date: Mon, 19 Oct 2015 13:54:27 GMT.
Server: Apache/2.4.7 (Ubuntu).
Vary: X-Auth-Token.
x-openstack-request-id: req-c8c78196-1b77-4b4f-b0dc-8baa5144c30f.
Content-Length: 93.
Keep-Alive: timeout=5, max=100.
Connection: Keep-Alive.
Content-Type: application/json.
.
{"error": {"message": "The resource could not be found.", "code": 404, "title": 
"Not Found"}}

V2:
sudo ngrep -W byline port 5000 -d lo
interface: lo (127.0.0.0/255.0.0.0)
filter: (ip or ip6) and ( port 5000 )

T 10.0.0.8:45247 -> 10.0.0.8:5000 [AP]
POST /v2.0/tokens HTTP/1.1.
Host: 10.0.0.8:5000.
Content-Length: 112.
Accept-Encoding: gzip, deflate.
Accept: application/json.
User-Agent: python-keystoneclient.
Connection: keep-alive.
Content-Type: application/json.
.
{"auth": {"tenantName": "service", "passwordCredentials": {"username": 
"glance-swift", "password": "redacted"}}}
##
T 10.0.0.8:5000 -> 10.0.0.8:45247 [AP]
HTTP/1.1 200 OK.
Date: Mon, 19 Oct 2015 13:53:42 GMT.
Server: Apache/2.4.7 (Ubuntu).
Vary: X-Auth-Token.
x-openstack-request-id: req-cb538a90-5134-476d-b789-27fcf0576ad8.
Content-Length: 3407.
Keep-Alive: timeout=5, max=100.
Connection: Keep-Alive.
Content-Type: application/json.
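For context, the two Identity API versions issue tokens at different paths; the failing request above mixed them. A minimal stdlib sketch (the `token_url` helper is hypothetical, for illustration only) of the correct paths:

```python
from urllib.parse import urljoin

def token_url(auth_url: str, version: str) -> str:
    # Hypothetical helper: build the token-issue URL for each Identity API
    # version. v2.0 exposes POST /v2.0/tokens; v3 exposes POST /v3/auth/tokens.
    # The broken request in this bug was POST /v2.0/auth/tokens, which mixes
    # the v2 prefix with the v3 path and so 404s.
    if version == "v2.0":
        return urljoin(auth_url, "/v2.0/tokens")
    return urljoin(auth_url, "/v3/auth/tokens")

print(token_url("http://10.0.0.8:5000/", "v2.0"))  # http://10.0.0.8:5000/v2.0/tokens
print(token_url("http://10.0.0.8:5000/", "v3"))    # http://10.0.0.8:5000/v3/auth/tokens
```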

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1507610

Title:
  Keystone v3 incompatible with keystone v2

Status in Glance:
  New

Bug description:
  Overview:
  After an upgrade to Keystone v3, the old style of location ceases to work:
the request raises a 404, which in turn surfaces as a 401. The client tries to
reach /v2.0/auth/tokens, a path that did not exist in v2; the correct v2 path
is /v2.0/tokens.

  How to reproduce:
  Create an image with the 'old style' location. Something like this:

  | locations| [{"url": "swift+http://service%3Aglance- 
|
  |  | 
swift:redacted@10.0.0.8:5000/v2.0/glance/2f174860-efe3-4d5a-8f73-83e7298523b8", 
 |
  |  | "metadata": {}}]   

  Then upgrade to Keystone v3 and try to run a copy-from or image-download,
such as:
  glance image-download 2f174860-efe3-4d5a-8f73-83e7298523b8 --file /opt/out

  Output:
  Keystone v3:
  sudo ngrep -W byline port 5000 -d lo
  interface: lo (127.0.0.0/255.0.0.0)
  filter: (ip or ip6) and ( port 5000 )
  
  T 10.0.0.8:45256 -> 10.0.0.8:5000 [AP]
  POST /v2.0/auth/tokens HTTP/1.1.
  Host: 10.0.0.8:5000.
  Content-Length: 222.
  Accept-Encoding: gzip, deflate.
  Accept: application/json.
  User-Agent: python-keystoneclient.
  Connection: keep-alive.
  Content-Type: application/json.
  .
  {"auth": {"scope": {"project": {"domain": {"id": "default"}, "name": 
"service"}}, "identity": {"password": {"user": {"domain": {"id": "default"}, 
"password": "redacted", "name": "glance-swift"}}, "methods": ["password"]}}}
  ##
  T 10.0.0.8:5000 -> 10.0.0.8:45256 [AP]
  HTTP/1.1 404 Not Found.
  Date: Mon, 19 Oct 2015 13:54:27 GMT.
  Server: Apache/2.4.7 (Ubuntu).
  Vary: X-Auth-Token.
  x-openstack-request-id: req-c8c78196-1b77-4b4f-b0dc-8baa5144c30f.
  Content-Length: 93.
  Keep-Alive: timeout=5, max=100.
  Connection: Keep-Alive.
  Content-Type: application/json.
  .
  {"error": {"message": "The resource could not be found.", "code": 404, 
"title": "Not Found"}}

  V2:
  sudo ngrep -W byline port 5000 -d lo
  inter

[Yahoo-eng-team] [Bug 1507517] Re: When you create JobBinaries from JobTemplate, button "Choose" unavailable

2015-10-19 Thread Evgeny Sikachev
In the master branch this function is deprecated.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507517

Title:
  When you create JobBinaries from JobTemplate, button "Choose"
  unavailable

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  ENVIRONMENT: devstack(19 oct 2015)

  PREPARE STEP: If possible, remove all JobBinaries

  STEPS TO REPRODUCE:
  1. Navigate to JobBinaries
  2. Click on "Create JobTemplate"
  3.  Navigate to "Libs" tab
  4. Click on "+"
  5. Create JobBinary and click "Create"

  EXPECTED RESULT:
  Button "Choose" available
  Screenshot-1

  ACTUAL RESULT:
  Button "Choose" unavailable
  Screenshot-2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507602] [NEW] _get_router() sometimes raises RouterNotFound when called from under create_floatingip

2015-10-19 Thread Ihar Hrachyshka
Public bug reported:

https://review.openstack.org/#/c/215136/ was merged in the gate and
broke Ironic. It seems we have a weird issue in neutron db code that
makes fetch on router_id raise NotFound when updating a floating IP.

The patch was reverted in master, but we should understand what went
wrong, and maybe introduce additional testing that would not allow the
issue to sneak in.

We also should look into another patch that made similar change for
Create on FIP: https://review.openstack.org/#/c/231031/ It may also
break something.

** Affects: neutron
 Importance: High
 Assignee: Oleg Bondarev (obondarev)
 Status: Confirmed


** Tags: db l3-dvr-backlog

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: db l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507602

Title:
  _get_router() sometimes raises RouterNotFound when called from under
  create_floatingip

Status in neutron:
  Confirmed

Bug description:
  https://review.openstack.org/#/c/215136/ was merged in the gate and
  broke Ironic. It seems we have a weird issue in neutron db code that
  makes fetch on router_id raise NotFound when updating a floating IP.

  The patch was reverted in master, but we should understand what went
  wrong, and maybe introduce additional testing that would not allow the
  issue to sneak in.

  We also should look into another patch that made similar change for
  Create on FIP: https://review.openstack.org/#/c/231031/ It may also
  break something.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507516] Re: NoSuchOptError: no such option: force_config_drive

2015-10-19 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.config
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1507516

Title:
   NoSuchOptError: no such option: force_config_drive

Status in OpenStack Compute (nova):
  Triaged
Status in oslo.config:
  New

Bug description:
  When setting the configuration parameter force_config_drive=true in
  the /etc/nova/nova.conf config file and restarting the nova service or
  even rebooting the whole server the openstack-nova-compute service
  fails to start:

  /var/log/nova/nova-compute.log

  2015-10-19 11:36:10.141 5085 CRITICAL nova 
[req-f0efb89b-4c5b-49a5-b436-5af2ad20c4c5 - - - - -] NoSuchOptError: no such 
option: force_config_drive
  2015-10-19 11:36:10.141 5085 TRACE nova Traceback (most recent call last):
  2015-10-19 11:36:10.141 5085 TRACE nova   File "/usr/bin/nova-compute", line 
10, in 
  2015-10-19 11:36:10.141 5085 TRACE nova sys.exit(main())
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 74, in main
  2015-10-19 11:36:10.141 5085 TRACE nova service.wait()
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 446, in wait
  2015-10-19 11:36:10.141 5085 TRACE nova _launcher.wait()
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 187, 
in wait
  2015-10-19 11:36:10.141 5085 TRACE nova status, signo = 
self._wait_for_exit_or_signal(ready_callback)
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 165, 
in _wait_for_exit_or_signal
  2015-10-19 11:36:10.141 5085 TRACE nova CONF.log_opt_values(LOG, 
logging.DEBUG)
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2191, in 
log_opt_values
  2015-10-19 11:36:10.141 5085 TRACE nova _sanitize(opt, getattr(self, 
opt_name)))
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1874, in __getattr__
  2015-10-19 11:36:10.141 5085 TRACE nova raise NoSuchOptError(name)
  2015-10-19 11:36:10.141 5085 TRACE nova NoSuchOptError: no such option: 
force_config_drive
  2015-10-19 11:36:10.141 5085 TRACE nova

  The openstack kilo release has been deployed as an RDO all-in-one node
  without the demo data on a centos 7 platform

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1507516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507585] [NEW] neutron prompts inaccurate information when deleting an interface from a router

2015-10-19 Thread IBM-Cloud-SH
Public bug reported:

reproduce step
1. When I try to delete an interface from a router, neutron asks for a subnet
ID instead of an interface ID, but the help prompt directs me to input an
INTERFACE ID.
I don't know the difference between an interface ID and a subnet ID, but I
think the command should at least use a consistent name in its prompt.
[root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f
usage: neutron router-interface-delete [-h] [--request-format {json,xml}]
   ROUTER INTERFACE<
neutron router-interface-delete: error: too few arguments
[root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f 6fcd183a-585b-434c-be45-bb8abbb946b5
Unable to find subnet with name 
'6fcd183a-585b-434c-be45-bb8abbb946b5'<
[root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f 7ef8b18b-489f-4f9c-922b-685651fc6eb6
Removed interface from router fe765595-3749-40df-82bf-5c985701080f.
[root@nitinserver1 ~(keystone_admin)]# neutron   router-port-list
fe765595-3749-40df-82bf-5c985701080f
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| c46628a7-3448-43b5-bf58-5fb832e38c21 |      | fa:16:3e:b7:d7:7d | {"subnet_id": "7ab67bd0-7cb0-4e47-bd2e-0aa277ebc31c", "ip_address": "20.1.1.1"}  |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
[root@nitinserver1 ~(keystone_admin)]#

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507585

Title:
  neutron prompts inaccurate information when deleting an interface
  from a router

Status in neutron:
  New

Bug description:
  reproduce step
  1. When I try to delete an interface from a router, neutron asks for a
subnet ID instead of an interface ID, but the help prompt directs me to input an
INTERFACE ID.
  I don't know the difference between an interface ID and a subnet ID, but I
think the command should at least use a consistent name in its prompt.
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f
  usage: neutron router-interface-delete [-h] [--request-format {json,xml}]
     ROUTER INTERFACE<
  neutron router-interface-delete: error: too few arguments
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f 6fcd183a-585b-434c-be45-bb8abbb946b5
  Unable to find subnet with name 
'6fcd183a-585b-434c-be45-bb8abbb946b5'<
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f 7ef8b18b-489f-4f9c-922b-685651fc6eb6
  Removed interface from router fe765595-3749-40df-82bf-5c985701080f.
  [root@nitinserver1 ~(keystone_admin)]# neutron   router-port-list
fe765595-3749-40df-82bf-5c985701080f
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | c46628a7-3448-43b5-bf58-5fb832e38c21 |      | fa:16:3e:b7:d7:7d | {"subnet_id": "7ab67bd0-7cb0-4e47-bd2e-0aa277ebc31c", "ip_address": "20.1.1.1"}  |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  [root@nitinserver1 ~(keystone_admin)]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503187] Re: DOC: Update glance status image with deactivate status

2015-10-19 Thread ologvinova
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1503187

Title:
  DOC: Update glance status image with deactivate status

Status in Glance:
  Fix Released

Bug description:
  Since kilo the feature to deactivate the images in glance has been added and 
this status of the image needs to be added to the diagram that is documented 
here - http://docs.openstack.org/developer/glance/statuses.html
  The updated image can be found here: 
https://github.com/openstack/glance/blob/master/doc/source/images_src/image_status_transition.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1503187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507552] Re: Change in cisco exception.py is required

2015-10-19 Thread Assaf Muller
** Project changed: neutron => networking-cisco

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507552

Title:
  Change in cisco exception.py is required

Status in networking-cisco:
  New

Bug description:
  Change set https://review.openstack.org/#/c/233766/ leads to an exception in
  networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py

  The issue looks like this:
  --
  error: testr failed (3)
  Failed to import test module: 
networking_cisco.tests.unit.ml2.drivers.cisco.n1kv.test_cisco_n1kv_mech
  Traceback (most recent call last):
File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"networking_cisco/tests/unit/ml2/drivers/cisco/n1kv/test_cisco_n1kv_mech.py", 
line 23, in 
  from networking_cisco.plugins.ml2.drivers.cisco.n1kv import (
File "networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py", line 
65, in 
  class ProfileDeletionNotSupported(exceptions.NotSupported):
  AttributeError: 'module' object has no attribute 'NotSupported'

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1507552/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507563] [NEW] WEBROOT not set correctly in 500 error page

2015-10-19 Thread Aleš Křivák
Public bug reported:

The 500 error template uses CONF.WEBROOT to set the Home link, where CONF is
loaded by a templatetag from horizon.conf. But in horizon/conf/__init__.py,
only HORIZON_CONF is set and nothing else. As a result, 500.html is rendered
with an empty URL for the Home link.

** Affects: horizon
 Importance: Undecided
 Assignee: Aleš Křivák (aleskrivak)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Aleš Křivák (aleskrivak)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507563

Title:
  WEBROOT not set correctly in 500 error page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The 500 error template uses CONF.WEBROOT to set the Home link, where CONF
  is loaded by a templatetag from horizon.conf. But in
  horizon/conf/__init__.py, only HORIZON_CONF is set and nothing else.
  As a result, 500.html is rendered with an empty URL for the Home link.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507564] [NEW] os.path.remove doesn't exist; convert task fails

2015-10-19 Thread Flavio Percoco
Public bug reported:

  File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", 
line 96, in run
for _state in self.run_iter():
  File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", 
line 146, in run_iter
self._change_state(states.FAILURE)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__
six.reraise(self.type_, self.value, self.tb)
  File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", 
line 130, in run_iter
failure.Failure.reraise_if_any(failures)
  File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 244, 
in reraise_if_any
failures[0].reraise()
  File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 251, 
in reraise
six.reraise(*self._exc_info)
  File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 86, in _revert_task
result = task.revert(**arguments)
  File "/usr/lib/python2.7/site-packages/glance/async/flows/convert.py", line 
99, in revert
os.path.remove(fs_path)
AttributeError: 'module' object has no attribute 'remove'
2015-07-11 11:40:00.817 6663 INFO eventlet.wsgi.server 
[req-c4c286ec-2248-4d93-b631-a2270de43ec8 7a6bd2901f0642b18be91d82782ed1ca 
3954f0a8635249eba2eda9a489e233a7 - - -] 10.12.27.41 - - [11/Jul/2015 11:40:00] 
"POST /v2/tasks HTTP/1.1" 500 139
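The fix is mechanical: the deletion function lives on `os`, not `os.path`. A quick stdlib check of what `revert()` should be calling:

```python
import os
import tempfile

# os.path has no remove(); file deletion is os.remove() (alias os.unlink()).
assert not hasattr(os.path, "remove")

# Create and delete a throwaway file the way convert.py's revert() should.
fd, fs_path = tempfile.mkstemp()
os.close(fd)
os.remove(fs_path)
assert not os.path.exists(fs_path)
```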

** Affects: glance
 Importance: Medium
 Status: Confirmed

** Changed in: glance
Milestone: None => mitaka-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1507564

Title:
  os.path.remove doesn't exist; convert task fails

Status in Glance:
  Confirmed

Bug description:
File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", 
line 96, in run
  for _state in self.run_iter():
File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", 
line 146, in run_iter
  self._change_state(states.FAILURE)
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, 
in __exit__
  six.reraise(self.type_, self.value, self.tb)
File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", 
line 130, in run_iter
  failure.Failure.reraise_if_any(failures)
File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 
244, in reraise_if_any
  failures[0].reraise()
File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 
251, in reraise
  six.reraise(*self._exc_info)
File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 86, in _revert_task
  result = task.revert(**arguments)
File "/usr/lib/python2.7/site-packages/glance/async/flows/convert.py", line 
99, in revert
  os.path.remove(fs_path)
  AttributeError: 'module' object has no attribute 'remove'
  2015-07-11 11:40:00.817 6663 INFO eventlet.wsgi.server 
[req-c4c286ec-2248-4d93-b631-a2270de43ec8 7a6bd2901f0642b18be91d82782ed1ca 
3954f0a8635249eba2eda9a489e233a7 - - -] 10.12.27.41 - - [11/Jul/2015 11:40:00] 
"POST /v2/tasks HTTP/1.1" 500 139

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1507564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507492] Re: manually schedule dhcp-agent doesn't check dhcp_agents_per_network

2015-10-19 Thread Kevin Benton
This is expected behavior. That config value is to influence the
automatic scheduling. It shouldn't interfere with manual scheduling by
an admin.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507492

Title:
  manually schedule dhcp-agent doesn't check dhcp_agents_per_network

Status in neutron:
  Won't Fix

Bug description:
  We can use dhcp-agent-network-add to manually schedule a network onto DHCP
agents. When we do so, the neutron code should check the configured
dhcp_agents_per_network value to verify whether the manual scheduling can be
supported.

  Pre-conditions:
  2 active dhcp-agents, agent-A and agent-B;
  network net-1 is bound on agent-A, use "neutron dhcp-agent-list-hosting-net" 
can to verify this;
  set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

  steps:
  directly run "neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID" without 
remove net-1 from agent-A first.

  expected:
  a warning or error telling us that net-1 could not be scheduled onto
agent-B, because the number of agents hosting net-1 must not exceed the
dhcp_agents_per_network value.

  actual result:
  running "neutron dhcp-agent-list-hosting-net NET-1-ID" will output that net-1 
is hosted by agent-A and agent-B now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507558] Re: Ironic gate breakage: deployed VM's do not get DHCP

2015-10-19 Thread Dmitry Tantsur
I suspect Neutron is involved

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507558

Title:
  Ironic gate breakage: deployed VM's do not get DHCP

Status in Ironic:
  Confirmed
Status in neutron:
  New

Bug description:
  See e.g. https://review.openstack.org/#/c/234186/. It started around
  midnight UTC, Mon Oct 19.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1507558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507552] [NEW] Change in cisco exception.py is required

2015-10-19 Thread Yaroslav Morkovnikov
Public bug reported:

Change set https://review.openstack.org/#/c/233766/ leads to an exception in
networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py

The issue looks like this:
--
error: testr failed (3)
Failed to import test module: 
networking_cisco.tests.unit.ml2.drivers.cisco.n1kv.test_cisco_n1kv_mech
Traceback (most recent call last):
  File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File 
"networking_cisco/tests/unit/ml2/drivers/cisco/n1kv/test_cisco_n1kv_mech.py", 
line 23, in 
from networking_cisco.plugins.ml2.drivers.cisco.n1kv import (
  File "networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py", line 
65, in 
class ProfileDeletionNotSupported(exceptions.NotSupported):
AttributeError: 'module' object has no attribute 'NotSupported'
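This failure mode is inherent to how Python builds classes: a `class` statement evaluates its base classes eagerly, so removing an attribute a subclass inherits from breaks dependent packages at import time. A minimal stdlib sketch (using a stub module, not the real neutron code) of the same error:

```python
import types

# Stub module standing in for neutron's exceptions module after the change
# set removed the NotSupported base class.
exceptions = types.ModuleType("exceptions_stub")

caught = None
try:
    # Base classes are looked up when the class statement executes, so the
    # missing attribute fails here — at import time of the dependent module.
    class ProfileDeletionNotSupported(exceptions.NotSupported):
        pass
except AttributeError as exc:
    caught = exc

print(caught)
```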

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507552

Title:
  Change in cisco exception.py is required

Status in neutron:
  New

Bug description:
  Change set https://review.openstack.org/#/c/233766/ leads to an exception in
  networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py

  The issue looks like this:
  --
  error: testr failed (3)
  Failed to import test module: 
networking_cisco.tests.unit.ml2.drivers.cisco.n1kv.test_cisco_n1kv_mech
  Traceback (most recent call last):
File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"networking_cisco/tests/unit/ml2/drivers/cisco/n1kv/test_cisco_n1kv_mech.py", 
line 23, in <module>
  from networking_cisco.plugins.ml2.drivers.cisco.n1kv import (
File "networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py", line 
65, in <module>
  class ProfileDeletionNotSupported(exceptions.NotSupported):
  AttributeError: 'module' object has no attribute 'NotSupported'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507552/+subscriptions



[Yahoo-eng-team] [Bug 1234950] Re: network topology order

2015-10-19 Thread Aleš Křivák
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1234950

Title:
  network topology order

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Often, when three or more networks are involved, with one external, one
  internal attached to a router which is attached to the external network (aka
  tenant public), and one or more backend networks, the view often (always?)
  displays them in the following order:

  external, private backend*, tenant public

  This causes the display to look very backwards to what one might
  expect and lines to cross that would not normally need to.

  A more intuitive ordering would be:
  external networks, tenant networks attached to routers attached to external 
networks, tenant networks with no routers.

  Thanks,
  Kevin
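The suggested ordering can be sketched as a simple three-way sort key; the network dicts and the `category()` helper below are illustrative, not Horizon's actual data model:

```python
# Hedged sketch of the ordering Kevin suggests; data model is illustrative.
def category(net):
    # 0: external nets, 1: tenant nets routed to external, 2: unrouted tenant nets
    if net['external']:
        return 0
    return 1 if net['routed'] else 2

networks = [
    {'name': 'backend', 'external': False, 'routed': False},
    {'name': 'ext-net', 'external': True, 'routed': False},
    {'name': 'tenant-public', 'external': False, 'routed': True},
]
ordered = sorted(networks, key=category)
print([n['name'] for n in ordered])  # -> ['ext-net', 'tenant-public', 'backend']
```

Because `sorted` is stable, networks within the same category keep their existing relative order.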

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1234950/+subscriptions



[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2015-10-19 Thread Steven Hardy
** Also affects: heat/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Committed
Status in heat liberty series:
  New
Status in Ironic:
  Fix Committed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Committed
Status in Sahara liberty series:
  Fix Committed
Status in Sahara mitaka series:
  Fix Committed

Bug description:
  As of https://review.openstack.org/#/c/217347/ oslo.db no longer has
  testresources or testscenarios in its requirements, so the next release of
  oslo.db will break several projects. Projects that use fixtures from
  oslo.db should add these packages to their own requirements if they need them.

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in <module>
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in <module>
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423
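The remedy described above amounts to each consuming project declaring the packages itself. A hedged sketch (written against a scratch file so it is self-contained; the version floors are illustrative, not official minimums):

```shell
# Hedged sketch: a consuming project adds the packages to its own
# test-requirements.txt.  Version floors below are illustrative.
reqs=$(mktemp)   # stand-in for the project's test-requirements.txt
printf '%s\n' 'testresources>=0.2.4' 'testscenarios>=0.4' >> "$reqs"
cat "$reqs"
```

After this, the project's test environment no longer depends on oslo.db pulling the fixtures in transitively.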

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions



[Yahoo-eng-team] [Bug 1507528] [NEW] Create sample data for policy.v3cloudsample.json

2015-10-19 Thread Hidekazu Nakamura
Public bug reported:

It would be useful to have sample data for policy.v3cloudsample.json.

** Affects: keystone
 Importance: Undecided
 Assignee: Hidekazu Nakamura (nakamura-h)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Hidekazu Nakamura (nakamura-h)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1507528

Title:
  Create sample data for policy.v3cloudsample.json

Status in Keystone:
  New

Bug description:
  It would be useful to have sample data for policy.v3cloudsample.json.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1507528/+subscriptions



[Yahoo-eng-team] [Bug 1507526] [NEW] Failed to load user-data on RHEV/AltCloud in wily due to bad udevadm args

2015-10-19 Thread Darren Worrall
Public bug reported:

Using cloud-init 0.7.7 in the wily cloud-images daily:

[   12.493421] cloud-init[872]: 2015-10-19 10:14:26,302 - util.py[WARNING]: 
Failed command: /sbin/udevadm settle --quiet --timeout=5 
--exit-if-exists=/dev/fd0
[   12.493665] cloud-init[872]: Unexpected error while running command.
[   12.497403] cloud-init[872]: Command: ['/sbin/udevadm', 'settle', '--quiet', 
'--timeout=5', '--exit-if-exists=/dev/fd0']
[   12.500557] cloud-init[872]: Exit code: 1
[   12.500799] cloud-init[872]: Reason: -
[   12.501036] cloud-init[872]: Stdout: ''
[   12.501270] cloud-init[872]: Stderr: 'Option -q no longer supported.\n'
[   12.501504] cloud-init[872]: 2015-10-19 10:14:26,318 - util.py[WARNING]: 
Failed accessing user data.

Note: this is not actually *on* RHEV, I fake it to have cloud-init load
user data from a floppy drive for testing purposes, which works fine in
14.04
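The stderr line pinpoints the problem: newer systemd's `udevadm settle` rejects the `--quiet`/`-q` flag that worked on 14.04. A hedged sketch of a version-tolerant caller that simply omits it (the helper name and flag handling are illustrative, not cloud-init's actual code):

```python
# Hedged sketch: build a `udevadm settle` command line without --quiet,
# which newer systemd rejects.  Helper name is illustrative.
def udevadm_settle(exists=None, timeout=None):
    cmd = ['udevadm', 'settle']   # no --quiet: unsupported in newer systemd
    if timeout is not None:
        cmd.append('--timeout=%s' % timeout)
    if exists is not None:
        cmd.append('--exit-if-exists=%s' % exists)
    return cmd  # a real caller would hand this to subprocess.check_call

print(udevadm_settle(exists='/dev/fd0', timeout=5))
# -> ['udevadm', 'settle', '--timeout=5', '--exit-if-exists=/dev/fd0']
```

Dropping the flag is safe on older systemd too, since `--quiet` only suppressed output.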

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1507526

Title:
  Failed to load user-data on RHEV/AltCloud in wily due to bad udevadm
  args

Status in cloud-init:
  New

Bug description:
  Using cloud-init 0.7.7 in the wily cloud-images daily:

  [   12.493421] cloud-init[872]: 2015-10-19 10:14:26,302 - util.py[WARNING]: 
Failed command: /sbin/udevadm settle --quiet --timeout=5 
--exit-if-exists=/dev/fd0
  [   12.493665] cloud-init[872]: Unexpected error while running command.
  [   12.497403] cloud-init[872]: Command: ['/sbin/udevadm', 'settle', 
'--quiet', '--timeout=5', '--exit-if-exists=/dev/fd0']
  [   12.500557] cloud-init[872]: Exit code: 1
  [   12.500799] cloud-init[872]: Reason: -
  [   12.501036] cloud-init[872]: Stdout: ''
  [   12.501270] cloud-init[872]: Stderr: 'Option -q no longer supported.\n'
  [   12.501504] cloud-init[872]: 2015-10-19 10:14:26,318 - util.py[WARNING]: 
Failed accessing user data.

  Note: this is not actually *on* RHEV, I fake it to have cloud-init
  load user data from a floppy drive for testing purposes, which works
  fine in 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1507526/+subscriptions



[Yahoo-eng-team] [Bug 1507521] [NEW] Nova resize is failing for shared storage between compute nodes

2015-10-19 Thread palbhan
Public bug reported:

Nova Version: 2.22.0

I have shared NFS storage mounting /var/lib/nova between two compute
nodes. When I try to resize an instance with the nova resize command,
it fails; below is the log output:

2015-10-19 05:13:15.582 14325 ERROR oslo_messaging.rpc.dispatcher 
[req-5cb16661-74ec-4faf-93cd-044e597cc9de d4209dcd86b84fc584f8b3b72bee0c64 
da6c9fa9be0046dda47e9bd6caf3908a - - -] Exception during message handling: 
Resize error: not able to execute ssh command: Unexpected error while running 
command.
Command: ssh 20.20.20.3 mkdir -p 
/var/lib/nova/instances/744d6341-023f-49cd-9d93-7bae7eb32653
Exit code: 255
Stdout: u''
Stderr: u'Host key verification failed.\r\n'
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6748, in 
resize_instance
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
clean_shutdown=clean_shutdown)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher payload)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, in 
decorated_function
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 298, in 
decorated_function
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 377, in 
decorated_function
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 286, in 
decorated_function
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
migration.instance_uuid, exc_info=True)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 269, in 
decorated_function
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 355, in 
decorated_function
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py
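The "Host key verification failed" error means the resize path's plain ssh between compute nodes could not verify the peer, so the nova user's known_hosts must already trust it. A hedged sketch, written against a temp file so it is self-contained (the real target would be the nova user's ~/.ssh/known_hosts, populated with something like `ssh-keyscan -H 20.20.20.3`; the key line below is a fake placeholder):

```shell
# Hedged sketch: pre-trust the peer compute node for the nova user.
# Real setup: ssh-keyscan -H 20.20.20.3 >> /var/lib/nova/.ssh/known_hosts
known_hosts=$(mktemp)   # stand-in for /var/lib/nova/.ssh/known_hosts
echo '20.20.20.3 ssh-ed25519 AAAAC3...placeholder' >> "$known_hosts"
grep -c '20.20.20.3' "$known_hosts"
```

A less safe alternative is setting `StrictHostKeyChecking no` for the compute subnet in the nova user's ssh config.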

[Yahoo-eng-team] [Bug 1507522] [NEW] fwaas lacks scenario tests

2015-10-19 Thread YAMAMOTO Takashi
Public bug reported:

fwaas lacks tempest scenario tests

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507522

Title:
  fwaas lacks scenario tests

Status in neutron:
  New

Bug description:
  fwaas lacks tempest scenario tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507522/+subscriptions



[Yahoo-eng-team] [Bug 1507516] [NEW] NoSuchOptError: no such option: force_config_drive

2015-10-19 Thread Jan Collijs
Public bug reported:

When the configuration parameter force_config_drive=true is set in the
/etc/nova/nova.conf config file and the nova service is restarted (or the
whole server rebooted), the openstack-nova-compute service fails to
start:

/var/log/nova/nova-compute.log

2015-10-19 11:36:10.141 5085 CRITICAL nova 
[req-f0efb89b-4c5b-49a5-b436-5af2ad20c4c5 - - - - -] NoSuchOptError: no such 
option: force_config_drive
2015-10-19 11:36:10.141 5085 TRACE nova Traceback (most recent call last):
2015-10-19 11:36:10.141 5085 TRACE nova   File "/usr/bin/nova-compute", line 
10, in <module>
2015-10-19 11:36:10.141 5085 TRACE nova sys.exit(main())
2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 74, in main
2015-10-19 11:36:10.141 5085 TRACE nova service.wait()
2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 446, in wait
2015-10-19 11:36:10.141 5085 TRACE nova _launcher.wait()
2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 187, 
in wait
2015-10-19 11:36:10.141 5085 TRACE nova status, signo = 
self._wait_for_exit_or_signal(ready_callback)
2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 165, 
in _wait_for_exit_or_signal
2015-10-19 11:36:10.141 5085 TRACE nova CONF.log_opt_values(LOG, 
logging.DEBUG)
2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2191, in 
log_opt_values
2015-10-19 11:36:10.141 5085 TRACE nova _sanitize(opt, getattr(self, 
opt_name)))
2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1874, in __getattr__
2015-10-19 11:36:10.141 5085 TRACE nova raise NoSuchOptError(name)
2015-10-19 11:36:10.141 5085 TRACE nova NoSuchOptError: no such option: 
force_config_drive
2015-10-19 11:36:10.141 5085 TRACE nova

The OpenStack Kilo release was deployed as an RDO all-in-one node, without
the demo data, on a CentOS 7 platform.
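For reference, a hedged config fragment showing how the option is usually set; in Kilo, force_config_drive is a string option historically documented with the value "always", though the report does not confirm whether that value avoids this registration error:

```ini
# /etc/nova/nova.conf -- illustrative fragment, not a confirmed fix for the
# NoSuchOptError above.  In Kilo, force_config_drive is a string option whose
# documented value is "always".
[DEFAULT]
force_config_drive = always
```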

** Affects: nova
 Importance: Undecided
 Status: New

** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507516

Title:
   NoSuchOptError: no such option: force_config_drive

Status in OpenStack Compute (nova):
  New

Bug description:
  When the configuration parameter force_config_drive=true is set in the
  /etc/nova/nova.conf config file and the nova service is restarted (or the
  whole server rebooted), the openstack-nova-compute service fails to
  start:

  /var/log/nova/nova-compute.log

  2015-10-19 11:36:10.141 5085 CRITICAL nova 
[req-f0efb89b-4c5b-49a5-b436-5af2ad20c4c5 - - - - -] NoSuchOptError: no such 
option: force_config_drive
  2015-10-19 11:36:10.141 5085 TRACE nova Traceback (most recent call last):
  2015-10-19 11:36:10.141 5085 TRACE nova   File "/usr/bin/nova-compute", line 
10, in <module>
  2015-10-19 11:36:10.141 5085 TRACE nova sys.exit(main())
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 74, in main
  2015-10-19 11:36:10.141 5085 TRACE nova service.wait()
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 446, in wait
  2015-10-19 11:36:10.141 5085 TRACE nova _launcher.wait()
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 187, 
in wait
  2015-10-19 11:36:10.141 5085 TRACE nova status, signo = 
self._wait_for_exit_or_signal(ready_callback)
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 165, 
in _wait_for_exit_or_signal
  2015-10-19 11:36:10.141 5085 TRACE nova CONF.log_opt_values(LOG, 
logging.DEBUG)
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2191, in 
log_opt_values
  2015-10-19 11:36:10.141 5085 TRACE nova _sanitize(opt, getattr(self, 
opt_name)))
  2015-10-19 11:36:10.141 5085 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1874, in __getattr__
  2015-10-19 11:36:10.141 5085 TRACE nova raise NoSuchOptError(name)
  2015-10-19 11:36:10.141 5085 TRACE nova NoSuchOptError: no such option: 
force_config_drive
  2015-10-19 11:36:10.141 5085 TRACE nova

  The OpenStack Kilo release was deployed as an RDO all-in-one node, without
  the demo data, on a CentOS 7 platform.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1507516/+subscriptions


[Yahoo-eng-team] [Bug 1504184] Re: Glance does not error gracefully on token validation error

2015-10-19 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1504184

Title:
  Glance does not error gracefully on token validation error

Status in Glance:
  Fix Committed
Status in Glance kilo series:
  New
Status in Glance liberty series:
  In Progress

Bug description:
  When the registry has an error validating the token that the api has
  sent it a 500 is returned, rather than 401. This is with the latest
  master.

  {code}
  2015-10-08 15:03:16.939 ERROR glance.registry.client.v1.client 
[req-b561060e-d60c-4085-820d-1e87e64448ed 9f81b40c4b484be99a06754f32500271 
51852dcd7e304719939f29fc2c3f3558] Registry client request GET /images/detail 
raised NotAuthenticated
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client Traceback 
(most recent call last):
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client   File 
"/opt/stack/glance/glance/registry/client/v1/client.py", line 121, in do_request
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client **kwargs)
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client   File 
"/opt/stack/glance/glance/common/client.py", line 74, in wrapped
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client return 
func(self, *args, **kwargs)
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client   File 
"/opt/stack/glance/glance/common/client.py", line 375, in do_request
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client 
headers=copy.deepcopy(headers))
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client   File 
"/opt/stack/glance/glance/common/client.py", line 88, in wrapped
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client return 
func(self, method, url, body, headers)
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client   File 
"/opt/stack/glance/glance/common/client.py", line 517, in _do_request
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client raise 
exception.NotAuthenticated(res.read())
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client 
NotAuthenticated: Authentication required
  2015-10-08 15:03:16.939 TRACE glance.registry.client.v1.client 
  2015-10-08 15:03:16.940 ERROR glance.common.wsgi 
[req-b561060e-d60c-4085-820d-1e87e64448ed 9f81b40c4b484be99a06754f32500271 
51852dcd7e304719939f29fc2c3f3558] Caught error: Authentication required
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi Traceback (most recent call 
last):
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/wsgi.py", line 879, in __call__
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi request, **action_args)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/wsgi.py", line 907, in dispatch
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi return method(*args, 
**kwargs)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/api/v1/images.py", line 366, in detail
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi images = 
registry.get_images_detail(req.context, **params)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/registry/client/v1/api.py", line 161, in 
get_images_detail
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi return 
c.get_images_detailed(**kwargs)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/registry/client/v1/client.py", line 150, in 
get_images_detailed
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi res = 
self.do_request("GET", "/images/detail", params=params)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/registry/client/v1/client.py", line 136, in do_request
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi 'exc_name': exc_name})
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi six.reraise(self.type_, 
self.value, self.tb)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/registry/client/v1/client.py", line 121, in do_request
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi **kwargs)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/client.py", line 74, in wrapped
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi return func(self, *args, 
**kwargs)
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/client.py", line 375, in do_request
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi 
headers=copy.deepcopy(headers))
  2015-10-08 15:03:16.940 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/cli
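The graceful behavior the reporter asks for is mapping the registry client's NotAuthenticated error to an HTTP 401 instead of letting it escape as a 500. A minimal sketch of that error-translation shape; the names and the `fetch` callback are invented for illustration, not Glance's actual code:

```python
# Hedged sketch: translate a downstream auth failure into a 401 response
# instead of an unhandled 500.  Names are illustrative.
class NotAuthenticated(Exception):
    pass

def list_images(fetch):
    """fetch() returns image data or raises NotAuthenticated."""
    try:
        return 200, fetch()
    except NotAuthenticated:
        # Token validation failed downstream: the client should re-authenticate.
        return 401, 'Authentication required'

def failing_fetch():
    raise NotAuthenticated()

print(list_images(failing_fetch))  # -> (401, 'Authentication required')
```

The key point is catching the specific auth exception at the API boundary rather than relying on the generic error handler.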

[Yahoo-eng-team] [Bug 1507517] [NEW] When you create JobBinaries from JobTemplate, button "Choose" unavailable

2015-10-19 Thread Evgeny Sikachev
Public bug reported:

ENVIRONMENT: devstack(19 oct 2015)

PREPARATION STEP: If possible, remove all JobBinaries

STEPS TO REPRODUCE:
1. Navigate to JobBinaries
2. Click on "Create JobTemplate"
3.  Navigate to "Libs" tab
4. Click on "+"
5. Create JobBinary and click "Create"

EXPECTED RESULT:
Button "Choose" available
Screenshot-1

ACTUAL RESULT:
Button "Choose" unavailable
Screenshot-2

** Affects: horizon
 Importance: Undecided
 Assignee: Vitaly Gridnev (vgridnev)
 Status: New


** Tags: sahara

** Attachment added: "Screenshot-1.png"
   
https://bugs.launchpad.net/bugs/1507517/+attachment/4500012/+files/Screenshot-1.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507517

Title:
  When you create JobBinaries from JobTemplate, button "Choose"
  unavailable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  ENVIRONMENT: devstack(19 oct 2015)

  PREPARATION STEP: If possible, remove all JobBinaries

  STEPS TO REPRODUCE:
  1. Navigate to JobBinaries
  2. Click on "Create JobTemplate"
  3.  Navigate to "Libs" tab
  4. Click on "+"
  5. Create JobBinary and click "Create"

  EXPECTED RESULT:
  Button "Choose" available
  Screenshot-1

  ACTUAL RESULT:
  Button "Choose" unavailable
  Screenshot-2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507517/+subscriptions



[Yahoo-eng-team] [Bug 1507515] [NEW] wrong url in preview stack detail

2015-10-19 Thread Masco Kaliyamoorthy
Public bug reported:

On the preview stack detail page, the URL for the cancel button redirects to
the 'access and security' table.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507515

Title:
  wrong url in preview stack detail

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the preview stack detail page, the URL for the cancel button redirects
  to the 'access and security' table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507515/+subscriptions



[Yahoo-eng-team] [Bug 1507499] [NEW] Centralized Management System for testing the environment

2015-10-19 Thread Kanchan Gupta
Public bug reported:

To provide support for running connectivity tests between the VMs of a
tenant from a centralized management system, so that troubleshooting the
environment becomes easier.

Problem Description
===
Currently there is no automated system which can manage and monitor the 
connectivity of the resources in a tenant's environment. At the moment, this is 
achieved by manually executing the connectivity tests. This blueprint proposes 
the automation of this management and monitoring process.

Proposed Change
===
- A new feature will be added which will allow the tenant admin user to test 
its environment (e.g. ping or traceroute).
- This functionality will also be provided from the OpenStack dashboard, with 
drag and drop of the VMs to be tested.
- Only the admin user will be allowed to execute the connectivity test cases.

New APIs will be added to support the execution of the Connectivity
management tests via the centralized management system to allow cli
execution of the tests and also for integrating the functionality into
horizon.

Advantage
=
This change will allow operators/admin users to quickly check the status of all 
the VMs in the network and troubleshoot problems.
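The proposed API's result shape can be sketched with a small driver that runs a probe over VM pairs and records reachability; the function, field names, and the stand-in probe are invented for illustration, since the real API is not yet designed:

```python
# Hedged sketch of a connectivity-test result from such an API; names and
# fields are invented for illustration.
def run_connectivity_tests(vm_pairs, probe):
    """probe(src, dst) -> bool; returns one result record per pair."""
    return [{'src': s, 'dst': d, 'reachable': probe(s, d)}
            for s, d in vm_pairs]

# Stand-in probe instead of a real ping, so the sketch is self-contained.
results = run_connectivity_tests(
    [('vm-1', 'vm-2'), ('vm-1', 'vm-3')],
    probe=lambda s, d: d != 'vm-3')
print(results)
```

A real implementation would back `probe` with ping/traceroute executed from the management node.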

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507499

Title:
  Centralized Management System for testing the environment

Status in neutron:
  New

Bug description:
  To provide support for running connectivity tests between the VMs of a
  tenant from a centralized management system, so that troubleshooting the
  environment becomes easier.

  Problem Description
  ===
  Currently there is no automated system which can manage and monitor the 
connectivity of the resources in a tenant's environment. At the moment, this is 
achieved by manually executing the connectivity tests. This blueprint proposes 
the automation of this management and monitoring process.

  Proposed Change
  ===
  - A new feature will be added which will allow the tenant admin user to test 
its environment (e.g. ping or traceroute).
  - This functionality will also be provided from the OpenStack dashboard, 
with drag and drop of the VMs to be tested.
  - Only the admin user will be allowed to execute the connectivity test cases.

  New APIs will be added to support the execution of the Connectivity
  management tests via the centralized management system to allow cli
  execution of the tests and also for integrating the functionality into
  horizon.

  Advantage
  =
  This change will allow operators/admin users to quickly check the status of 
all the VMs in the network and troubleshoot problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507499/+subscriptions



[Yahoo-eng-team] [Bug 1507492] [NEW] manually schedule dhcp-agent doesn't check dhcp_agents_per_network

2015-10-19 Thread ZongKai LI
Public bug reported:

We can use dhcp-agent-network-add to manually schedule a network onto dhcp
agents. When we do so, the neutron code should check the configured
dhcp_agents_per_network option to verify whether the manual scheduling can
be supported.

Pre-conditions:
2 active dhcp-agents, agent-A and agent-B;
network net-1 is bound on agent-A; use "neutron dhcp-agent-list-hosting-net" 
to verify this;
set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

steps:
directly run "neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID" without 
remove net-1 from agent-A first.

expected:
a warning or error telling the user that net-1 could not be scheduled onto 
agent-B, because the number of agents hosting net-1 cannot exceed the 
dhcp_agents_per_network value.

actual result:
running "neutron dhcp-agent-list-hosting-net NET-1-ID" will output that net-1 
is hosted by agent-A and agent-B now.
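The missing validation can be sketched as a simple guard before binding; the data structures and function below mirror the bug text but are illustrative, not neutron's real scheduler code:

```python
# Hedged sketch of the check the reporter asks for; data model is illustrative.
dhcp_agents_per_network = 1   # from neutron.conf

hosting_agents = {'net-1': ['agent-A']}   # current network -> agent bindings

def add_network_to_agent(agent_id, network_id):
    agents = hosting_agents.setdefault(network_id, [])
    # Proposed check: refuse manual scheduling past the configured limit.
    if len(agents) >= dhcp_agents_per_network:
        raise ValueError(
            '%s already hosted by %d agent(s); dhcp_agents_per_network=%d'
            % (network_id, len(agents), dhcp_agents_per_network))
    agents.append(agent_id)

try:
    add_network_to_agent('agent-B', 'net-1')
except ValueError as e:
    print(e)
```

With the limit at 1 and net-1 already on agent-A, the manual add is rejected instead of silently over-scheduling.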

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507492

Title:
  manually schedule dhcp-agent doesn't check dhcp_agents_per_network

Status in neutron:
  New

Bug description:
  We can use dhcp-agent-network-add to manually schedule a network onto
  dhcp-agents. When a dhcp-agent is scheduled manually, neutron should
  check the dhcp_agents_per_network configuration option to verify whether
  the manual scheduling can be allowed.

  Pre-conditions:
  2 active dhcp-agents, agent-A and agent-B;
  network net-1 is bound on agent-A ("neutron dhcp-agent-list-hosting-net"
  can be used to verify this);
  set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

  steps:
  directly run "neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID" without
  removing net-1 from agent-A first.

  expected:
  a warning or error telling the user that net-1 could not be scheduled onto
  agent-B, because the number of agents hosting net-1 must not exceed the
  dhcp_agents_per_network value.

  actual result:
  running "neutron dhcp-agent-list-hosting-net NET-1-ID" shows that net-1
  is now hosted by both agent-A and agent-B.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507493] [NEW] Adding a host to an aggregate will give a traceback if service is disabled

2015-10-19 Thread Robert van Leeuwen
Public bug reported:

When you disable the compute service of a host you cannot add it to an
aggregate.

To reproduce (tested on Kilo MOS packages on Ubuntu 14.04):

# nova-manage service disable compute01 nova-compute
# nova aggregate-add-host az1 compute01
ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)

In the nova-api.log:

2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack Traceback (most recent 
call last):
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", 
line 634, in __call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return 
self._call_app(env, start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", 
line 554, in _call_app
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 756, in 
__call__
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack content_type, body, 
accept)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 821, in 
_process_stack
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 911, in 
dispatch
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return 
method(req=request, **action_args)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/aggregates.py",
 line 178, in action
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return 
_actions[action](req, id, data)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/aggregates.py",
 line 51, in wrapped
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack return fn(self, req, 
id, host, *args, **kwargs)
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/aggregates.py",
 line 188, in _add_host
2015-10-19 07:44:51.727 38561 TRACE nova.api.openstack aggregate = 
self.api.add_host_to_aggre
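
A sketch of the kind of guard that would avoid the HTTP 500 (hypothetical
names; nova's actual compute API code differs):

```python
# Hypothetical sketch: translate a disabled or missing compute service
# into a clean client-facing error instead of letting the exception
# escape as an unhandled traceback (HTTP 500).
class ComputeHostNotAvailable(Exception):
    """Raised when the target host's nova-compute service is disabled."""

def add_host_to_aggregate(aggregate, host, enabled_services):
    """Add host to the aggregate only if its compute service is enabled."""
    if host not in enabled_services:
        raise ComputeHostNotAvailable(
            "compute service on %s is disabled or missing" % host)
    aggregate["hosts"].append(host)
    return aggregate

agg = {"name": "az1", "hosts": []}
try:
    add_host_to_aggregate(agg, "compute01", enabled_services=set())
except ComputeHostNotAvailable as e:
    # The API layer can map this to a 404/409 response instead of a 500.
    print("rejected: %s" % e)
```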

[Yahoo-eng-team] [Bug 1507489] [NEW] manually reschedule dhcp-agent doesn't update port binding

2015-10-19 Thread ZongKai LI
Public bug reported:

We can use dhcp-agent-network-add/remove to manually reschedule a network
between dhcp-agents, and "neutron dhcp-agent-list-hosting-net" plus the ip or
ps commands can be used to confirm the network is rescheduled to the new agent.
But the DHCP port binding does not get updated on the DB side; running
"neutron port-show" after the network is rescheduled shows this.

Pre-conditions:
2 active dhcp-agents, agent-A and agent-B;
network net-1 is bound on agent-A ("neutron dhcp-agent-list-hosting-net"
can be used to verify this);
port-1 is the dhcp port of net-1;
set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

steps:
neutron dhcp-agent-network-remove AGENT-A-ID NET-1-ID ; neutron port-show 
PORT-1-ID
[1]
neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID ; neutron port-show PORT-1-ID
[2]

expected:
[1]:
Field                Value
binding:host_id      EMPTY
binding:profile      {}
binding:vif_details  {}
binding:vif_type     unbound
binding:vnic_type    normal
device_id            reserved_dhcp_port

[2]:
Field                Value
binding:host_id      AGENT-B-HOST-ID
binding:profile      {}
binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
binding:vif_type     ovs
binding:vnic_type    normal
device_id            dhcpxxx(relate-to-agent-B-host)-NET-1-ID

Actual output:
[1]
Field                Value
binding:host_id      AGENT-A-HOST-ID
binding:profile      {}
binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
binding:vif_type     ovs
binding:vnic_type    normal
device_id            dhcpxxx(relate-to-agent-A-host)-NET-1-ID

[2]
Field                Value
binding:host_id      AGENT-A-HOST-ID
binding:profile      {}
binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
binding:vif_type     ovs
binding:vnic_type    normal
device_id            dhcpxxx(relate-to-agent-A-host)-NET-1-ID
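
The expected binding transitions can be sketched as plain dict updates
(hypothetical helpers; neutron's real port update goes through the plugin
and the ML2 port binding machinery):

```python
# Hypothetical helpers mirroring the expected DHCP port binding
# transitions described above.
def unbind_dhcp_port(port):
    """After dhcp-agent-network-remove: the binding should be cleared."""
    port.update({
        "binding:host_id": "",
        "binding:vif_type": "unbound",
        "binding:vif_details": {},
        "device_id": "reserved_dhcp_port",
    })
    return port

def bind_dhcp_port(port, host_id, network_id):
    """After dhcp-agent-network-add: the binding should point at the new host."""
    port.update({
        "binding:host_id": host_id,
        "binding:vif_type": "ovs",
        "device_id": "dhcp-%s-%s" % (host_id, network_id),
    })
    return port

port = {"binding:host_id": "AGENT-A-HOST-ID", "binding:vif_type": "ovs"}
unbind_dhcp_port(port)
print(port["binding:vif_type"])   # unbound
bind_dhcp_port(port, "AGENT-B-HOST-ID", "NET-1-ID")
print(port["binding:host_id"])    # AGENT-B-HOST-ID
```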

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Description changed:

  We can use dhcp-agent-network-add/remove to manually reschedule a net between 
dhcp-agents. And "neutron dhcp-agent-list-hosting-net" and ip or ps commands 
can be used to confirm network is rescheduled to new agent.
  But dhcp port binding doesn't get updated in db site, we can use "neutron 
port-show" to find that, after network is rescheduled.
  
  Pre-conditions:
  2 active dhcp-agents, agent-A and agent-B;
  network net-1 is bound on agent-A, use "neutron dhcp-agent-list-hosting-net" 
can to verify this;
  port-1 is dhcp port of net-1;
  set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;
  
  steps:
  neutron dhcp-agent-network-remove AGENT-A-ID NET-1-ID ; neutron port-show 
PORT-1-ID
  [1]
  neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID ; neutron port-show 
PORT-1-ID
  [2]
  
  expected:
- [1]: 
+ [1]:
  Field  Value
  binding:host_id  EMPTY
  binding:profile   {}
  binding:vif_details  {}
  binding:vif_type unbound
  binding:vnic_type   normal
  device_id   reserved_dhcp_port
  
  
  [2]:
  Field  Value
  binding:host_id  AGENT-B-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-B-host)-NET-1-ID
  
  Actual output:
  [1]
  Field  Value
  binding:host_id  AGENT-A-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID
  [2]
  
  Field  Value
  binding:host_id  AGENT-A-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507489

Title:
  manually reschedule dhcp-agent doesn't update port binding

Status in neutron:
  New

Bug description:
  We can use dhcp-agent-network-add/remove to manually reschedule a network
  between dhcp-agents, and "neutron dhcp-agent-list-hosting-net" plus the ip or
  ps commands can be used to confirm the network is rescheduled to the new agent.
  But the DHCP port binding does not get updated on the DB side; running
  "neutron port-show" after the network is rescheduled shows this.

  Pre-conditions:
  2 active dhcp-agents, agent-A and agent-B;
  network net-1 is bound on agent-A ("neutron dhcp-agent-list-hosting-net"
  can be used to verify this);
  port-1 is the dhcp port of net-1;
  set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

  steps

[Yahoo-eng-team] [Bug 1498472] Re: lbaas:after 871 namespace was created for 1v1 mapping, the new LB pool with pending_create status

2015-10-19 Thread spark
Don't see this issue on a Kilo setup, so closing it.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498472

Title:
  lbaas:after 871  namespace  was created for  1v1 mapping, the new LB
  pool with pending_create status

Status in neutron:
  Invalid

Bug description:
  After 817 tenants were created for namespace 1v1 mapping, new LB pools
  stay in pending_create status.

  test steps
  1 create a tenant
  2 create a pool network with subnet 10.1.1.0
    create a vip network with subnet 20.1.1.0
  3 create a router and add 2 interfaces on it, for subnet 10 and subnet 20
  4 create a LB pool with subnet 10
  5 add a VIP with address 20.1.1.100 on this pool, with protocol HTTP and
port 80
  6 (optional) add one or two members
  7 repeat the above steps to continuously add 1v1 mappings (1 tenant maps to
1 LB namespace) for the scale test

  added the following tenants:
  80 tenants with 2 backends and 1 client (LB monitor enabled) with simple
traffic
  100 tenants with 1 backend and 1 client (LB monitor enabled) with simple
traffic
  1067 tenants without members or monitor
  total of 1247 pools; 817 pools active; 376 pools in error or pending_create
status (log in attachment)
  the issue: after adding 817 namespaces, newly created LB pools end up in
pending_create or error status

  After checking the log, AMQP could not connect to the server,
  so the rabbitmq service on the controller node was restarted with "service
rabbitmq-server restart".
  The setup did not recover; new LB pools were still stuck in pending_create
status.

  To recover the setup, we started deleting tenants, LB pools and VIPs (1v1
mapping).
  After deleting tenants down to 328 remaining, the setup recovered and new LB
pools could be created with active status. So this is a scale issue, and this
bug is filed to track it.

  log

  [root@nsj17 ~]# tail -n 200 /var/log/neutron/lbaas-agent.log
  2015-08-11 20:35:51.939 2581 TRACE oslo.messaging._drivers.impl_rabbit
  2015-08-11 20:35:51.944 2581 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 1.0 seconds...
  2015-08-11 20:35:51.970 2581 ERROR oslo.messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: (0, 0): (320) CONNECTION_FORCED - broker 
forced connection closure with reason 'shutdown'
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
Traceback (most recent call last):
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
655, in ensure
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
return method()
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
735, in _consume
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
return self.connection.drain_events(timeout=timeout)
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 281, in 
drain_events
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
return self.transport.drain_events(self.connection, **kwargs)
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 94, in 
drain_events
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
return connection.drain_events(**kwargs)
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 320, in drain_events
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
return amqp_method(channel, args)
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 526, in _close
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
(class_id, method_id), ConnectionError)
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit 
ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection 
closure with reason 'shutdown'
  2015-08-11 20:35:51.970 2581 TRACE oslo.messaging._drivers.impl_rabbit
  2015-08-11 20:35:51.972 2581 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 1.0 seconds...
  2015-08-11 20:35:52.946 2581 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on 10.53.87.192:5672
  2015-08-11 20:35:52.971 2581 ERROR oslo.messaging._drivers.impl_rabbit [-] 
AMQP server on 10.53.87.192:5672 is unreachable: [Errno 111] ECONNREFUSED. 
Trying again in 1 seconds.
  2015-08-11 20:35:52.973 2581 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server