[Yahoo-eng-team] [Bug 1603861] [NEW] wrong check condition for revoke event

2016-07-17 Thread Dave Chen
Public bug reported:

Keystone has code to prevent `None` values from being returned when
listing revoke events, but a wrong check condition leads to
`access_token_id` entries with a `None` value being returned to the end user.

See the code here:
https://github.com/openstack/keystone/blob/master/keystone/models/revoke_model.py#L114-L115
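
As an illustration of the class of bug being reported (not the exact
keystone code; the names below are made up), a condition that tests key
presence instead of the value lets `None` through, while a value-based
filter is what is wanted:

```python
def strip_none_fields(events):
    """Drop keys whose value is None from each revoke-event dict.

    Illustrative only: a check written as `if 'access_token_id' in event`
    (key presence) instead of `if event['access_token_id'] is not None`
    (value check) lets None values through to the end user. These names
    are invented, not keystone's actual revoke_model.py code.
    """
    return [{k: v for k, v in event.items() if v is not None}
            for event in events]


events = [{'user_id': 'u1', 'access_token_id': None}]
print(strip_none_fields(events))  # [{'user_id': 'u1'}]
```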

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603861

Title:
  wrong check condition for revoke event

Status in OpenStack Identity (keystone):
  New

Bug description:
  Keystone has code to prevent `None` values from being returned when
  listing revoke events, but a wrong check condition leads to
  `access_token_id` entries with a `None` value being returned to the end user.

  See the code here:
  
https://github.com/openstack/keystone/blob/master/keystone/models/revoke_model.py#L114-L115

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1603861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603859] [NEW] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

2016-07-17 Thread Rabi Mishra
Public bug reported:

It seems the recent change [1] has broken the heat gate.

The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
(used by heat) could not be loaded, as you can see in the log [2].

Heat tests fail with the following error, as they can't reach the LB URL.

---
2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most recent call last):
2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 95, in test_autoscaling_loadbalancer_neutron
2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 |     self.check_num_responses(lb_url, 1)
2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 49, in check_num_responses
2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 |     self.assertEqual(expected_num, len(resp))
2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 |     self.assertThat(observed, matcher, message)
2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 |     raise mismatch_error
2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | testtools.matchers._impl.MismatchError: 1 != 0
2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 |



[1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
[2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603859

Title:
  Could not load
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Status in neutron:
  New

Bug description:
  It seems the recent change[1] has broken the heat gate.

  The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
  could not be loaded(used by heat) as you can see in the log[2]

  Heat tests fail with the following error, as it can't reach the lb
  url.

  ---
  2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
  2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
  2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most recent call last):
  2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 95, in test_autoscaling_loadbalancer_neutron
  2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 |     self.check_num_responses(lb_url, 1)
  2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 49, in check_num_responses
  2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 |     self.assertEqual(expected_num, len(resp))
  2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
  2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 |     self.assertThat(observed, matcher, message)
  2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 |     raise mismatch_error
  2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | testtools.matchers._impl.MismatchError: 1 != 0
  2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 |
  

  
  [1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
  [2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603859/+subscriptions


[Yahoo-eng-team] [Bug 1603860] [NEW] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

2016-07-17 Thread Rabi Mishra
Public bug reported:

It seems the recent change [1] has broken the heat gate.

The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
(used by heat) could not be loaded, as you can see in the log [2].

Heat tests fail with the following error, as they can't reach the LB URL.

---
2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most recent call last):
2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 95, in test_autoscaling_loadbalancer_neutron
2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 |     self.check_num_responses(lb_url, 1)
2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 49, in check_num_responses
2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 |     self.assertEqual(expected_num, len(resp))
2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 |     self.assertThat(observed, matcher, message)
2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 |     raise mismatch_error
2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | testtools.matchers._impl.MismatchError: 1 != 0
2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 |



[1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
[2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603860

Title:
  Could not load
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Status in heat:
  New
Status in neutron:
  New

Bug description:
  It seems the recent change [1] has broken the heat gate.

  The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
  (used by heat) could not be loaded, as you can see in the log [2].

  Heat tests fail with the following error, as they can't reach the LB
  URL.

  ---
  2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
  2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
  2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most recent call last):
  2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 95, in test_autoscaling_loadbalancer_neutron
  2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 |     self.check_num_responses(lb_url, 1)
  2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 49, in check_num_responses
  2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 |     self.assertEqual(expected_num, len(resp))
  2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
  2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 |     self.assertThat(observed, matcher, message)
  2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File "/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 |     raise mismatch_error
  2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | testtools.matchers._impl.MismatchError: 1 != 0
  2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 |
  

  
  [1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
  [2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1603860/+subscriptions


[Yahoo-eng-team] [Bug 1603850] Re: Can not remote debug on windows

2016-07-17 Thread Brandon Logan
Have you enabled gevent debugging in PyCharm? Also, I assume you are
running PyCharm on Windows but neutron-server or the agents you want to
debug are running on CentOS 7, in which case you are probably adding a
remote interpreter in PyCharm. So gevent debugging will probably solve
this for you. PyCharm did have an issue with gevent debugging in
previous versions; however, the latest version has this fixed. I'm
marking this invalid, as I believe this to be a PyCharm issue and not a
neutron issue.

https://blog.jetbrains.com/pycharm/2012/08/gevent-debug-support/

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603850

Title:
  Can not remote debug on windows

Status in neutron:
  Invalid

Bug description:
  Neutron runs on CentOS 7 and my development environment is Windows
  10. The IDE I use is PyCharm 5.

  To remote-debug neutron with the pydevd module, you need to change
  the code in neutron/common/eventlet_utils.py

  from

  eventlet.monkey_patch()

  to

  eventlet.monkey_patch(os=False, thread=False)

  But this will cause a lot of issues. For example:

  The L3 agent and DHCP agent will not be able to report their
  status to the Neutron API server.

  Maybe I should not make this change, but if you want to remote-debug
  neutron from Windows, you have to. So this should be considered a
  problem; please treat it as a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603850/+subscriptions



[Yahoo-eng-team] [Bug 1580443] Re: Unable to run tox

2016-07-17 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1580443

Title:
  Unable to run tox

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When I run 'tox -epy27' in a Horizon test repository, it seems to get
  stuck after the following lines:

  stack@stack-VirtualBox:~/horizon$ tox -epy27
  py27 create: /home/stack/horizon/.tox/py27
  py27 installdeps: -r/home/stack/horizon/requirements.txt, 
-r/home/stack/horizon/test-requirements.txt

  The same case with 'tox -epy34'
  stack@stack-VirtualBox:~/horizon$ tox -epy34
  py34 create: /home/stack/horizon/.tox/py34
  py34 installdeps: -r/home/stack/horizon/requirements.txt, 
-r/home/stack/horizon/test-requirements.txt

  How can I run these tests before submitting my patch?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1580443/+subscriptions



[Yahoo-eng-team] [Bug 1582159] Re: OVS Mech: Set hybrid plug based on agent config

2016-07-17 Thread Launchpad Bug Tracker
[Expired for openstack-manuals because there has been no activity for 60
days.]

** Changed in: openstack-manuals
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582159

Title:
  OVS Mech: Set hybrid plug based on agent config

Status in neutron:
  Invalid
Status in openstack-manuals:
  Expired

Bug description:
  https://review.openstack.org/311814
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2f17a30ba04082889f3a703aca1884b031767942
  Author: Kevin Benton 
  Date:   Fri Apr 29 18:01:51 2016 -0700

  OVS Mech: Set hybrid plug based on agent config
  
  This adjusts the logic in the OVS mechanism driver to determine
  what the ovs_hybrid_plug value should be set to in the VIF details.
  Previously it was based purely on the firewall driver configured on
  the server side. This prevented a mixed environment where some agents
  might be running a native OVS firewall driver while others are still
  based on the IPTables hybrid driver.
  
  This patch has the OVS agents report back whether they want hybrid
  plugging in their configuration dictionary sent during report_state.
  The OVS agent sets this based on an explicit attribute on the firewall
  driver requesting OVS hybrid plugging.
  
  To maintain backward compat, if an agent doesn't report this, the old
  logic of basing it off of the server-side config is applied.
  
  DocImpact: The server no longer needs to be configured with a firewall
 driver for OVS. It will read config from agent state reports.
  Closes-Bug: #1560957
  Change-Id: Ie554c2d37ce036e7b51818048153b466eee02913
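
The agent-preference-with-server-fallback logic described in the commit
message can be sketched as follows (a sketch of the described behavior,
not the actual mechanism driver code; the names are illustrative):

```python
def hybrid_plug_for_agent(agent_config, server_side_default):
    """Pick the ovs_hybrid_plug value for a port's VIF details.

    Prefer the value the agent reported in its report_state config
    dict; agents that predate the change report nothing, so fall back
    to the server-side, firewall-driver-based default.
    """
    if 'ovs_hybrid_plug' in agent_config:
        return agent_config['ovs_hybrid_plug']
    return server_side_default


print(hybrid_plug_for_agent({'ovs_hybrid_plug': False}, True))  # False
print(hybrid_plug_for_agent({}, True))                          # True
```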

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582159/+subscriptions



[Yahoo-eng-team] [Bug 1582602] Re: part of external pid files left when ha router is deleted

2016-07-17 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582602

Title:
  part of external pid files left when ha router is deleted

Status in neutron:
  Expired

Bug description:
  Env:
  Juno 2014.2.4
  3 controller, 3 compute, 2 network(l3 agent)

  Description:
  In my Juno environment, I created a few HA routers (about 20); some files 
were generated in /var/lib/neutron/external and /var/lib/neutron/ha_confs 
to record the keepalived/metadata-proxy PIDs, as expected.
  But when I deleted these HA routers, some of the external PID files were 
left behind.

  The external file is created by notify_master.sh, which is executed when the 
L3 HA state transitions to master.
  When the HA router is deleted, the "disable" function in external_process.py 
removes this file if the PID in it is active.
  So the following situation can occur:
  (1) The L3 HA state transitions to master.
  (2) The external PID file is created.
  (3) The L3 HA state transitions to backup.
  (4) The external process is killed.
  (5) The L3 HA router is deleted.
  (6) The external process is not active, so the PID file is not removed:
  Process for 76d8414d-3902-4511-a732-62a4759251a5 pid 21193 is stale, 
ignoring signal 9 disable 
/usr/lib/python2.7/site-packages/neutron/agent/linux/external_process.py:115
  (7) The external PID file is left behind.

  I am not sure whether this logic (ignoring stale processes) is
  expected, or whether we should remove the external PID file whenever
  the HA router is deleted.

  By the way, the file lock for neutron-iptables in
  /var/lib/neutron/lock is left behind too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582602/+subscriptions



[Yahoo-eng-team] [Bug 1603850] [NEW] Can not remote debug on windows

2016-07-17 Thread kramer
Public bug reported:

Neutron runs on CentOS 7 and my development environment is Windows
10. The IDE I use is PyCharm 5.

To remote-debug neutron with the pydevd module, you need to change the
code in neutron/common/eventlet_utils.py

from

eventlet.monkey_patch()

to

eventlet.monkey_patch(os=False, thread=False)

But this will cause a lot of issues. For example:

The L3 agent and DHCP agent will not be able to report their status
to the Neutron API server.

Maybe I should not make this change, but if you want to remote-debug
neutron from Windows, you have to. So this should be considered a
problem; please treat it as a bug.
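
Rather than editing the source by hand, the reporter's change could be
gated on an environment variable. A sketch (the NEUTRON_REMOTE_DEBUG
variable name is invented here, and this is not neutron's actual
eventlet_utils.py):

```python
import os


def monkey_patch_kwargs():
    """Return the kwargs to pass to eventlet.monkey_patch().

    When the (hypothetical) NEUTRON_REMOTE_DEBUG environment variable
    is set, leave os/thread unpatched so pydevd keeps working;
    otherwise patch everything as usual.
    """
    if os.environ.get('NEUTRON_REMOTE_DEBUG'):
        return {'os': False, 'thread': False}
    return {}
```

The caller would then invoke eventlet.monkey_patch(**monkey_patch_kwargs())
at import time, so no source edit is needed to switch modes.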

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603850

Title:
  Can not remote debug on windows

Status in neutron:
  New

Bug description:
  Neutron runs on CentOS 7 and my development environment is Windows
  10. The IDE I use is PyCharm 5.

  To remote-debug neutron with the pydevd module, you need to change
  the code in neutron/common/eventlet_utils.py

  from

  eventlet.monkey_patch()

  to

  eventlet.monkey_patch(os=False, thread=False)

  But this will cause a lot of issues. For example:

  The L3 agent and DHCP agent will not be able to report their
  status to the Neutron API server.

  Maybe I should not make this change, but if you want to remote-debug
  neutron from Windows, you have to. So this should be considered a
  problem; please treat it as a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603850/+subscriptions



[Yahoo-eng-team] [Bug 1503182] Re: 'edit flavor' and 'modify access' both opening the same flavor access tab

2016-07-17 Thread Richard Jones
Cannot reproduce this bug by following the instructions in the OP.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1503182

Title:
  'edit flavor' and 'modify access' both opening the same flavor access
  tab

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In the flavors table, clicking either the 'edit flavor' or the 'modify 
access' action opens the 'access control' tab.
  This is wrong behavior.
  'edit flavor' should open the 'flavor information' tab.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1503182/+subscriptions



[Yahoo-eng-team] [Bug 1603833] [NEW] Do we need a host filter in neutron?

2016-07-17 Thread QunyingRan
Public bug reported:

There is no host filter in neutron to ensure resources are available
for the network when booting an instance. Currently, when booting an
instance, nova selects a host and sends a message to the neutron agent,
but the agent may actually be inactive. As for bandwidth, it is limited
by the physical interfaces; if instance ports with bandwidth-limit
rules need more bandwidth than the physical interfaces have, the QoS
policy will be ineffective. So perhaps we need to add a host filter in
neutron: when creating an instance, nova would send a message to
neutron to get the hosts with available network resources and then boot
the instance.
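
The proposed filter could look something like the following sketch.
This is hypothetical: neutron has no such filter today (that is what
this report proposes), and the field names are invented for
illustration.

```python
def hosts_with_bandwidth(hosts, requested_bw):
    """Keep only hosts whose physical interface still has enough spare
    bandwidth for the port's bandwidth-limit rule.

    `hosts` is a list of dicts with invented keys: 'name',
    'nic_bw_total', and 'nic_bw_allocated' (all in the same unit).
    """
    return [h['name'] for h in hosts
            if h['nic_bw_total'] - h['nic_bw_allocated'] >= requested_bw]


hosts = [{'name': 'host1', 'nic_bw_total': 10000, 'nic_bw_allocated': 9500},
         {'name': 'host2', 'nic_bw_total': 10000, 'nic_bw_allocated': 2000}]
print(hosts_with_bandwidth(hosts, 1000))  # ['host2']
```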

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603833

Title:
  Do we need a host filter in neutron?

Status in neutron:
  New

Bug description:
  There is no host filter in neutron to ensure resources are available
  for the network when booting an instance. Currently, when booting an
  instance, nova selects a host and sends a message to the neutron
  agent, but the agent may actually be inactive. As for bandwidth, it
  is limited by the physical interfaces; if instance ports with
  bandwidth-limit rules need more bandwidth than the physical
  interfaces have, the QoS policy will be ineffective. So perhaps we
  need to add a host filter in neutron: when creating an instance, nova
  would send a message to neutron to get the hosts with available
  network resources and then boot the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603833/+subscriptions



[Yahoo-eng-team] [Bug 1603830] [NEW] Azure data source cannot generate public ssh key

2016-07-17 Thread Ian Duffy
Public bug reported:

Given the following code on an EL-based distribution:

```
def crtfile_to_pubkey(fname):
    pipeline = ('openssl x509 -noout -pubkey < "$0" |'
                'ssh-keygen -i -m PKCS8 -f /dev/stdin')
    (out, _err) = util.subp(['sh', '-c', pipeline, fname], capture=True)
    return out.rstrip()
```

Cloud-init is unable to generate an SSH public key from the Azure PKCS8 
certificate.
The version of ssh-keygen on EL distributions does not have a -m flag.
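
One way cloud-init could cope is to probe the local ssh-keygen for -m
support before relying on it, and fall back to another conversion path
otherwise. This is an illustrative workaround sketch only, not
cloud-init code:

```python
import subprocess


def ssh_keygen_supports_import_format():
    """Return True if the local ssh-keygen appears to accept -m.

    ssh-keygen prints its usage text (listing the flags it supports)
    when given an unknown option; older EL builds do not list -m.
    A caller could use this to choose between the -m pipeline and a
    fallback conversion.
    """
    try:
        proc = subprocess.run(['ssh-keygen', '-?'],
                              capture_output=True, text=True)
    except FileNotFoundError:
        return False
    # The usage text goes to stderr on most builds; check both streams.
    return '-m' in (proc.stdout + proc.stderr)
```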

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1603830

Title:
  Azure data source cannot generate public ssh key

Status in cloud-init:
  New

Bug description:
  Given the following code on an EL-based distribution:

  ```
  def crtfile_to_pubkey(fname):
      pipeline = ('openssl x509 -noout -pubkey < "$0" |'
                  'ssh-keygen -i -m PKCS8 -f /dev/stdin')
      (out, _err) = util.subp(['sh', '-c', pipeline, fname], capture=True)
      return out.rstrip()
  ```

  Cloud-init is unable to generate an SSH public key from the Azure PKCS8 
certificate.
  The version of ssh-keygen on EL distributions does not have a -m flag.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1603830/+subscriptions



[Yahoo-eng-team] [Bug 1568150] Re: xenial lxc containers not starting

2016-07-17 Thread Anastasia
** Changed in: juju-core
   Status: Incomplete => Won't Fix

** Changed in: juju-core
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1568150

Title:
  xenial lxc containers not starting

Status in cloud-init:
  Fix Committed
Status in juju-core:
  Invalid
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  When deploying a xenial lxc container to a xenial host, the container
  fails during cloud-init with the following error in the container's
  /var/log/cloud-init-output.log:

  2016-04-08 21:07:05,190 - util.py[WARNING]: failed of stage init-local
  failed run of stage init-local
  
  Traceback (most recent call last):
File "/usr/bin/cloud-init", line 515, in status_wrapper
  ret = functor(name, args)
File "/usr/bin/cloud-init", line 250, in main_init
  init.fetch(existing=existing)
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 318, in 
fetch
  return self._get_data_source(existing=existing)
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 227, in 
_get_data_source
  ds.check_instance_id(self.cfg)):
File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceNoCloud.py", line 
220, in check_instance_id
  dirs=self.seed_dirs)
  AttributeError: 'DataSourceNoCloudNet' object has no attribute 'seed_dirs'

  Trusty containers start just fine.

  Using juju 1.25.5 and MAAS 1.9.2

  Commands to reproduce:

  juju bootstrap --constraints "tags=juju" --upload-tools --show-log --debug
  juju set-constraints "tags="
  juju add-machine --series xenial
  juju deploy --to lxc:1 local:xenial/ubuntu ubuntu
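
The AttributeError above is the classic pattern of a base-class method
referencing an attribute that only some subclasses define. A minimal
reproduction plus a defensive fix, with illustrative names rather than
cloud-init's actual classes:

```python
class Base(object):
    def check_instance_id(self):
        # References an attribute the subclass never set -> AttributeError.
        return self.seed_dirs


class Net(Base):
    pass


try:
    Net().check_instance_id()
except AttributeError as exc:
    print(exc)  # 'Net' object has no attribute 'seed_dirs'


class FixedBase(object):
    def check_instance_id(self):
        # Defensive fix: fall back to an empty list when unset.
        return getattr(self, 'seed_dirs', [])


print(FixedBase().check_instance_id())  # []
```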

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1568150/+subscriptions



[Yahoo-eng-team] [Bug 1600697] Re: Using VMware NSXv driver, when update the port with specified address pair, got exception

2016-07-17 Thread Gary Kotton
** Project changed: neutron => vmware-nsx

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600697

Title:
  Using VMware NSXv driver, when update the port with specified address
  pair, got exception

Status in vmware-nsx:
  New

Bug description:
  yangyubj@yangyubj-virtual-machine:~$ neutron port-update 
26d1a5e7-a745-4f6a-b965-bb33709a8a23 --allowed-address-pair 
ip_address=10.0.0.0/8
  Request 
https://10.155.20.92/api/4.0/services/spoofguard/spoofguardpolicy-9?action=approve
 is Bad, response 
  The value 10.0.0.0/8 for vm 
(01985390-cf9a-4cf7-bad3-2f164edc45d9) - Network adapter 1 is invalid. Valid 
value should be 
{2}220core-services
  Neutron server returns request_ids: 
['req-98cc6835-a2a2-43b3-bde4-b6aa47e313e1']

  
  The root cause is that NSXv cannot support the CIDR 10.0.0.0/8.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1600697/+subscriptions



[Yahoo-eng-team] [Bug 1556260] Re: cloud-init fails to grow disk on physical servers with multipath.

2016-07-17 Thread Mathew Hodson
** Package changed: centos => cloud-init (Ubuntu)

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1556260

Title:
  cloud-init fails to grow disk on physical servers with multipath.

Status in cloud-init:
  Triaged
Status in cloud-init package in Ubuntu:
  New

Bug description:
  When provisioning Ubuntu Trusty on bare metal, cloud-init fails to grow the 
root partition; below is the error.
  The physical server is an HP BL460c blade, booting from SAN and using 
multipath.

  
   - cc_growpart.py[DEBUG]: No 'growpart' entry in cfg.  Using default: 
{'ignore_growroot_disabled': False, 'mode': 'auto', 'devices': ['/']}
   - util.py[DEBUG]: Running command ['growpart', '--help'] with allowed return 
codes [0] (shell=False, capture=True)
   - util.py[DEBUG]: Reading from /proc/1005/mountinfo (quiet=False)
  - util.py[DEBUG]: Read 1121 bytes from /proc/1005/mountinfo
  - util.py[DEBUG]: resize_devices took 0.003 seconds
   - cc_growpart.py[DEBUG]: '/' SKIPPED: 
device_part_info(/dev/disk/by-label/cloudimg-rootfs) failed: 
/dev/disk/by-label/cloudimg-rootfs not a partition
  

  Upon checking, it seems the cc_growpart.py script relies on the
  "/sys/class/block/<device-name>/partition" file to check the number
  of partitions.

  But for dm multipath devices, no such "partition" attribute exists,
  hence the script errors out saying "not a partition".

  
  I do not have a solution for now, but I am trying to find out how to map a 
multipath device like /dev/dm-1 to its real SCSI disk, e.g. /dev/sda1. If 
that is figured out, we can add logic along the lines of: if the base name 
is dm-X, find the mapped sdX and use that device to find the partition, etc.

  Is it just me, or does someone else also have this issue with
  cloud-init failing on multipath disks?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1556260/+subscriptions



[Yahoo-eng-team] [Bug 1470341] Re: Cannot remove host from aggregate if host has been deleted

2016-07-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/306192
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1abac25fd2967be00c4d584c06ba15ca4e8d04cc
Submitter: Jenkins
Branch:master

commit 1abac25fd2967be00c4d584c06ba15ca4e8d04cc
Author: Danil Akhmetov 
Date:   Fri Apr 29 11:45:28 2016 +0300

Remove compute host from all host aggregates when compute service is deleted

Nova currently does not check if compute host included in host-aggregates
when user deletes compute service. It leads to inconsistency in nova host
aggregates, impossibility to remove compute host from host-aggregate or
remove host aggregate with invalid compute host.

Change-Id: I8034da3827e47f3cd575e1f6ddf0e4be2f7dfecd
Closes-Bug: #1470341


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470341

Title:
  Cannot remove host from aggregate if host has been deleted

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Kilo code

  Reproduce steps:

  1. Assuming that we have one nova-compute node named 'icm' which is
  added into one aggregate named 'zhaoqin'

  [root@icm ~]# nova aggregate-details zhaoqin
  ++-+---+---++
  | Id | Name| Availability Zone | Hosts | Metadata   |
  ++-+---+---++
  | 1  | zhaoqin | zhaoqin-az| 'icm' | 'availability_zone=zhaoqin-az' |
  ++-+---+---++
  [root@icm ~]# nova service-list
  
++--+--++-+---++-+
  | Id | Binary   | Host | Zone   | Status  | State | Updated_at | Disabled Reason |
  ++--+--++-+---++-+
  | 1  | nova-conductor   | icm  | internal   | enabled | up| 2015-06-30T14:04:25.828383 | -   |
  | 3  | nova-scheduler   | icm  | internal   | enabled | up| 2015-06-30T14:04:24.525474 | -   |
  | 4  | nova-consoleauth | icm  | internal   | enabled | up| 2015-06-30T14:04:24.640657 | -   |
  | 5  | nova-compute | icm  | zhaoqin-az | enabled | up| 2015-06-30T14:04:19.865857 | -   |
  | 6  | nova-cert| icm  | internal   | enabled | up| 2015-06-30T14:04:25.080046 | -   |
  ++--+--++-+---++-+

  
  2. Remove the nova-compute service with the service-delete command. The
  host, however, remains in the aggregate.

  [root@icm ~]# nova service-delete 5
  [root@icm ~]# nova service-list
  
++--+--+--+-+---++-+
  | Id | Binary   | Host | Zone | Status  | State | Updated_at | Disabled Reason |
  ++--+--+--+-+---++-+
  | 1  | nova-conductor   | icm  | internal | enabled | up| 2015-06-30T14:05:35.826699 | -   |
  | 3  | nova-scheduler   | icm  | internal | enabled | up| 2015-06-30T14:05:34.524507 | -   |
  | 4  | nova-consoleauth | icm  | internal | enabled | up| 2015-06-30T14:05:34.638234 | -   |
  | 6  | nova-cert| icm  | internal | enabled | up| 2015-06-30T14:05:35.092009 | -   |
  ++--+--+--+-+---++-+
  [root@icm ~]# nova aggregate-details zhaoqin
  ++-+---+---++
  | Id | Name| Availability Zone | Hosts | Metadata   |
  ++-+---+---++
  | 1  | zhaoqin | zhaoqin-az| 'icm' | 'availability_zone=zhaoqin-az' |
  ++-+---+---++

  
  3. Attempting to remove the host from the aggregate then fails, and the
  aggregate cannot be deleted either because it is not empty.

  [root@icm ~]# nova aggregate-remove-host zhaoqin icm
  ERROR (NotFound): Cannot remove host icm in aggregate 1: not found (HTTP 404) 
(Request-ID: req-b5024dbf-156a-44ee-b48e-fc53a331e05d)
  [root@icm ~]# nova aggregate-delete zhaoqin
  ERROR (BadRequest): Cannot remove host from aggregate 1. Reason: Host 
aggregate is not empty. (HTTP 400) (Request-ID: 
req-a3c5346c-9a96-49f4-a76d-a7baa768a0ef)
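
  The behaviour the eventual fix needs — purging the host from every
  aggregate when its compute service is deleted — can be modelled with a
  small in-memory sketch (illustrative class and method names only, not
  nova's API):

  ```python
  class CloudState:
      """Toy model of services and host aggregates, showing why deleting
      a service must also clean up aggregate membership."""

      def __init__(self):
          self.services = {}    # service id -> host name
          self.aggregates = {}  # aggregate name -> set of host names

      def add_host_to_aggregate(self, aggregate, host):
          self.aggregates.setdefault(aggregate, set()).add(host)

      def delete_service(self, service_id):
          host = self.services.pop(service_id)
          # Without this loop the host stays in aggregates forever and
          # can no longer be removed, which is the reported bug.
          for hosts in self.aggregates.values():
              hosts.discard(host)
  ```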

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470341/+subscriptions


[Yahoo-eng-team] [Bug 1595587] Re: Cannot save HostMapping object

2016-07-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/332017
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=daad6c26ecf4e6c32e74d1db0510d40b820d612d
Submitter: Jenkins
Branch:master

commit daad6c26ecf4e6c32e74d1db0510d40b820d612d
Author: Andrey Volkov 
Date:   Tue Jun 21 11:47:28 2016 +0300

Fix host mapping saving

Changes return ability to use HostMapping.save method,
it was broken.

It fixes two issues:
- HostMapping._save_in_db got unexpected parameter
  (self.host instead of self object from the save call).
  It's clear.

- "sqlalchemy.orm.exc.DetachedInstanceError: Parent instance
   is not bound to a Session;"
  while trying to get the cell_mapping attribute on saved HostMapping
  instance.

  As HostMapping cannot be without cell_mapping, solution is to load
  cell_mapping attribute just after getting it from DB.
  To be consistent I used the code from _create_in_db method.

Closes-bug: 1595587
Change-Id: Ia2e427f5bd4ab43d1c273de72ef7bb8c01d8d1af


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595587

Title:
  Cannot save HostMapping object

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  
https://github.com/openstack/nova/blob/master/nova/objects/host_mapping.py#L111-L129

  While looking at the HostMapping object I found an inconsistency between a
  signature and its call site: HostMapping._save_in_db(context, obj, updates)
  uses the object's attributes id and host, but when it is called from
  HostMapping.save, the second parameter passed is self.host, which is not
  the object but just a string.
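
  The mismatch can be reproduced with a minimal toy class (an illustration,
  not nova's real HostMapping):

  ```python
  class HostMapping:
      """Toy version of the object: _save_in_db reads obj.id and obj.host,
      so it needs the mapping object itself, not a bare host string."""

      def __init__(self, id, host):
          self.id = id
          self.host = host

      @staticmethod
      def _save_in_db(context, obj, updates):
          # Fails with AttributeError if obj is a plain string.
          return {"id": obj.id, "host": obj.host, "updates": updates}

      def save_broken(self, context):
          # The buggy call site passed self.host (a str) as the object.
          return self._save_in_db(context, self.host, {})

      def save_fixed(self, context):
          # The fix passes self, matching _save_in_db's signature.
          return self._save_in_db(context, self, {})
  ```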

  
  Existing test 
(nova.tests.unit.objects.test_host_mapping.TestHostMappingObject.test_save) 
doesn't catch this error because _save_in_db is mocked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1595587/+subscriptions



[Yahoo-eng-team] [Bug 1603736] [NEW] Remove already deprecated ec2 api code

2016-07-17 Thread yatin
Public bug reported:

As per Sean Dague's comment in review
https://review.openstack.org/#/c/279721, the unnecessary code below is to
be removed:

# NOTE(sdague): this whole file is safe to remove in Newton. We just
# needed a release cycle for it.

File: nova/api/ec2/__init__.py

** Affects: nova
 Importance: Undecided
 Assignee: yatin (yatinkarel)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603736

Title:
  Remove already deprecated ec2 api code

Status in OpenStack Compute (nova):
  New

Bug description:
  As per Sean Dague's comment in review
  https://review.openstack.org/#/c/279721, the unnecessary code below is to
  be removed:

  # NOTE(sdague): this whole file is safe to remove in Newton. We just
  # needed a release cycle for it.

  File: nova/api/ec2/__init__.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603736/+subscriptions



[Yahoo-eng-team] [Bug 1603728] [NEW] search with error regex causes a 500 error

2016-07-17 Thread wanghongxu
Public bug reported:

Description
===
When I search for instances with an invalid regex such as '+', nova-api
returns a 500 error.

Steps to reproduce
==
nova list --name +

Expected result
===
After executing the step above, we should get a clear error message about
the invalid regex.
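
One plausible shape for that behaviour — rejecting the pattern before it
reaches the database — is sketched below. This is an assumption about the
fix, not nova's actual code; MySQL's REGEXP dialect is not identical to
Python's `re`, but both reject a bare '+':

```python
import re

def validate_name_filter(pattern):
    """Compile the user-supplied filter up front so an invalid regex can
    be turned into a 4xx client error instead of surfacing as a DB-layer
    500."""
    try:
        re.compile(pattern)
    except re.error as exc:
        raise ValueError("Invalid filter regex %r: %s" % (pattern, exc))
    return pattern
```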


Actual result
=
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: req-f6a78623-4db5-4f5b-94b1-8edb281b5e7a)

From the nova-api.log:
 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, in wrapped
     return f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 294, in detail
     servers = self._get_servers(req, is_detail=True)
   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 409, in _get_servers
     sort_keys=sort_keys, sort_dirs=sort_dirs)
   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 2128, in get_all
     sort_keys=sort_keys, sort_dirs=sort_dirs)
   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 2178, in _get_instances_by_filters
     expected_attrs=fields, sort_keys=sort_keys, sort_dirs=sort_dirs)
   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 181, in wrapper
     result = fn(cls, context, *args, **kwargs)
   File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 1065, in get_by_filters
     use_slave=use_slave, sort_keys=sort_keys, sort_dirs=sort_dirs)
   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 285, in wrapper
     return f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 1049, in _get_by_filters_impl
     sort_keys=sort_keys, sort_dirs=sort_dirs)
   File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 734, in instance_get_all_by_filters_sort
     sort_dirs=sort_dirs)
   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 229, in wrapper
     return f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 330, in wrapped
     return f(context, *args, **kwargs)
   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2308, in instance_get_all_by_filters_sort
     return _instances_fill_metadata(context, query_prefix.all(), manual_joins)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2588, in all
     return list(self)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2736, in __iter__
     return self._execute_and_instances(context)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2751, in _execute_and_instances
     result = conn.execute(querycontext.statement, self._params)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
     return meth(self, multiparams, params)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
     return connection._execute_clauseelement(self, multiparams, params)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
     compiled_sql, distilled_params
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
     context)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
     util.raise_from_cause(newraise, exc_info)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
     reraise(type(exception), exception, tb=exc_tb)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
     context)
   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
     cursor.execute(statement, parameters)
   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 146, in execute
     result = self._query(query)
   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 296, in _query
     conn.query(q)
   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 781, in query
     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 942, in _read_query_result
     result.read()
   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1138, in read
     first_packet = self.connection._read_packet()
   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 906, in _read_packet
     packet.check_error()
   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 367, in check_error
     err.raise_mys