[Yahoo-eng-team] [Bug 1877254] [NEW] neutron agent list API lacks sort and page feature

2020-05-07 Thread yong sheng gong
Public bug reported:

We have hundreds of neutron agents deployed, but the agent list API
currently supports neither sorting nor pagination, which makes large
deployments hard to manage.
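Other neutron collections such as networks and ports already accept sorting and pagination query parameters; the request is to support the same on the agents collection. A minimal sketch of building such a request URL, assuming neutron's standard sort_key/sort_dir/limit/marker parameters (their support on /v2.0/agents is the feature being requested here, not existing behaviour):

```python
from urllib.parse import urlencode

def agent_list_url(base, sort_key="host", sort_dir="asc",
                   limit=50, marker=None):
    """Build a sorted, paginated agent-list URL in the style neutron
    already uses for networks and ports (hypothetical for agents)."""
    params = {"sort_key": sort_key, "sort_dir": sort_dir, "limit": limit}
    if marker:  # id of the last agent on the previous page
        params["marker"] = marker
    return "%s/v2.0/agents?%s" % (base, urlencode(params))

url = agent_list_url("http://controller:9696", marker="abc-123")
```

With this, a client could walk hundreds of agents page by page instead of fetching the whole list in one response.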

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1877254

Title:
  neutron agent list API lacks sort and page feature

Status in neutron:
  New

Bug description:
  We have hundreds of neutron agents deployed, but the agent list API
  currently supports neither sorting nor pagination, which makes large
  deployments hard to manage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1877254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728642] [NEW] corrupted namespace blasted ovs bridge with thousands of dangling ports

2017-10-30 Thread yong sheng gong
Public bug reported:

When the DHCP namespace is somehow corrupted, the OVS bridge ends up
blasted with thousands of dangling ports created by the DHCP agent.

The corrupted namespace causes the following exception:

2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
[req-db1e4f25-2263-49e9-ba5b-308ea9ccfdec - - - - -] Unable to plug DHCP port 
for network 0c59667a-433a-4e97-9568-07ee6210c98b. Releasing port.
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp Traceback (most recent 
call last):
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 1407, in setup
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp self.plug(network, 
port, interface_name)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 1375, in plug
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
mtu=network.get('mtu'))
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/interface.py",
 line 268, in plug
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp bridge, namespace, 
prefix, mtu)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/interface.py",
 line 389, in plug_new
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
namespace_obj.add_device_to_namespace(ns_dev)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 232, in add_device_to_namespace
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
device.link.set_netns(self.namespace)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 516, in set_netns
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp self._as_root([], 
('set', self.name, 'netns', namespace))
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 364, in _as_root
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
use_root_namespace=use_root_namespace)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 100, in _as_root
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
log_fail_as_error=self.log_fail_as_error)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 109, in _execute
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
log_fail_as_error=log_fail_as_error)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py", 
line 156, in execute
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp raise 
ProcessExecutionError(msg, returncode=returncode)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp ProcessExecutionError: 
Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Invalid argument
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp
2017-10-30 14:12:35.479 6 ERROR neutron.agent.linux.utils 
[req-29d446ad-eed5-47a0-bfc7-496dad2d35f2 - - - - -] Exit code: 2; Stdin: ; 
Stdout: ; Stderr: RTNETLINK answers: Invalid argument
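When plugging fails after the interface was already added to br-int, the tap device is left behind as a dangling OVS port, and the agent retries indefinitely. A hedged sketch of how such leftovers could be identified from `ovs-vsctl list Interface` style records (the record format and the dangling criterion are illustrative assumptions, not the agent's actual logic):

```python
def find_dangling(interfaces):
    """Return names of tap interfaces whose ofport is -1, i.e. ports
    OVS could not attach -- the signature of a failed plug left behind.
    `interfaces` is a list of dicts mimicking `ovs-vsctl list Interface`."""
    return [i["name"] for i in interfaces
            if i["name"].startswith("tap") and i.get("ofport") == -1]

records = [
    {"name": "tap1234", "ofport": -1},   # failed plug, dangling
    {"name": "tap5678", "ofport": 7},    # healthy DHCP port
    {"name": "eth0", "ofport": 1},       # not a tap device
]
stale = find_dangling(records)  # candidates a cleanup pass could delete
```

A periodic cleanup along these lines would keep a corrupted namespace from accumulating thousands of such ports.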

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728642

Title:
  corrupted namespace blasted ovs bridge with thousands of dangling ports

Status in neutron:
  In Progress

Bug description:
  When the DHCP namespace is somehow corrupted, the OVS bridge ends up
  blasted with thousands of dangling ports created by the DHCP agent.

  The corrupted namespace causes the following exception:

  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
[req-db1e4f25-2263-49e9-ba5b-308ea9ccfdec - - - - -] Unable to plug DHCP port 
for network 0c59667a-433a-4e97-9568-07ee6210c98b. Releasing port.
  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp Traceback (most 
recent call last):
  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 1407, in setup
  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
self.plug(network, port, interface_na

[Yahoo-eng-team] [Bug 1716746] Re: functional job broken by new os-testr

2017-09-17 Thread yong sheng gong
** Also affects: tacker
   Importance: Undecided
   Status: New

** Changed in: tacker
   Importance: Undecided => Critical

** Changed in: tacker
 Assignee: (unassigned) => yong sheng gong (gongysh)

** Changed in: tacker
Milestone: None => queens-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716746

Title:
  functional job broken by new os-testr

Status in networking-bgpvpn:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in tacker:
  In Progress

Bug description:
  functional job fails with:

  2017-09-12 16:09:20.705975 | 2017-09-12 16:09:20.705 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L67:   
testr_exit_code=0
  2017-09-12 16:09:20.707372 | 2017-09-12 16:09:20.706 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L68:   set 
-e
  2017-09-12 16:09:20.718005 | 2017-09-12 16:09:20.717 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L71:   
generate_testr_results
  2017-09-12 16:09:20.719619 | 2017-09-12 16:09:20.719 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L12:
   sudo -H -u stack chmod o+rw .
  2017-09-12 16:09:20.720974 | 2017-09-12 16:09:20.720 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L13:
   sudo -H -u stack chmod o+rw -R .testrepository
  2017-09-12 16:09:20.722284 | 2017-09-12 16:09:20.721 | chmod: cannot access 
'.testrepository': No such file or directory

  This is because the new os-testr switched to stestr, which uses a
  different name for the results directory (.stestr).
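The fix is to stop hard-coding `.testrepository` and use whichever results directory the installed tool actually created. A minimal sketch of that selection, assuming the two known directory names (`.stestr` for the new os-testr/stestr, `.testrepository` for the old testr):

```python
import os

def results_dir(workdir, candidates=(".stestr", ".testrepository")):
    """Return the test-results directory that actually exists, so a
    post-test hook works with both old testr and new stestr layouts."""
    for name in candidates:
        path = os.path.join(workdir, name)
        if os.path.isdir(path):
            return path
    return None  # neither tool ran; caller should skip result collection

# e.g. results_dir("/opt/stack/new/neutron")
```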

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1716746/+subscriptions



[Yahoo-eng-team] [Bug 1677469] Re: networking-sfc is breaking tacker CI

2017-03-30 Thread yong sheng gong
** Also affects: networking-sfc
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677469

Title:
  networking-sfc is breaking tacker CI

Status in networking-sfc:
  New
Status in neutron:
  New

Bug description:
  http://logs.openstack.org/44/448844/6/check/gate-tacker-dsvm-
  functional-ubuntu-xenial-nv/31f9ef1/logs/screen-q-agt.txt.gz

  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-8948a445-84d5-4cd1-8084-551b7b135dcf - -] Agent main thread died of an 
exception
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
Traceback (most recent call last):
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 40, in agent_main_wrapper
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
ovs_agent.main(bridge_classes)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2168, in main
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 208, in __init__
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.init_extension_manager(self.connection)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 153, in 
wrapper
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return f(*args, **kwargs)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 393, in init_extension_manager
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.agent_api)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/agent/agent_extensions_manager.py", line 55, in 
initialize
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
extension.obj.initialize(connection, driver_type)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/sfc.py",
 line 82, in initialize
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.sfc_driver.initialize()
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/openvswitch/sfc_driver.py",
 line 96, in initialize
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self._clear_sfc_flow_on_int_br()
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/openvswitch/sfc_driver.py",
 line 171, in _clear_sfc_flow_on_int_br
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.br_int.delete_group(group_id='all')
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
AttributeError: 'OVSIntegrationBridge' object has no attribute 'delete_group'
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
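The crash is the networking-sfc driver calling a method the installed neutron bridge class does not provide, which kills the agent's main thread. A hedged sketch of the kind of compatibility guard that would avoid hard-failing on bridge classes without group support (the method name mirrors the traceback; the guard itself is illustrative, not the actual fix):

```python
class OldBridge:
    """Stands in for an OVSIntegrationBridge without delete_group()."""

def clear_sfc_groups(br_int):
    """Delete all OpenFlow groups if the bridge supports it; otherwise
    report that the feature is unavailable instead of raising
    AttributeError and taking down the whole agent."""
    delete_group = getattr(br_int, "delete_group", None)
    if delete_group is None:
        return False  # bridge class predates group support
    delete_group(group_id="all")
    return True

supported = clear_sfc_groups(OldBridge())  # False, and no crash
```

The proper fix is to align the networking-sfc and neutron versions so the bridge class actually has the method; the guard only degrades gracefully in the meantime.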

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-sfc/+bug/1677469/+subscriptions



[Yahoo-eng-team] [Bug 1677469] [NEW] networking-sfc is breaking tacker CI

2017-03-30 Thread yong sheng gong
Public bug reported:

http://logs.openstack.org/44/448844/6/check/gate-tacker-dsvm-functional-
ubuntu-xenial-nv/31f9ef1/logs/screen-q-agt.txt.gz

2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-8948a445-84d5-4cd1-8084-551b7b135dcf - -] Agent main thread died of an 
exception
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
Traceback (most recent call last):
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 40, in agent_main_wrapper
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
ovs_agent.main(bridge_classes)
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2168, in main
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 208, in __init__
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.init_extension_manager(self.connection)
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 153, in 
wrapper
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return f(*args, **kwargs)
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 393, in init_extension_manager
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.agent_api)
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/agent/agent_extensions_manager.py", line 55, in 
initialize
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
extension.obj.initialize(connection, driver_type)
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/sfc.py",
 line 82, in initialize
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.sfc_driver.initialize()
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/openvswitch/sfc_driver.py",
 line 96, in initialize
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self._clear_sfc_flow_on_int_br()
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/openvswitch/sfc_driver.py",
 line 171, in _clear_sfc_flow_on_int_br
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.br_int.delete_group(group_id='all')
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
AttributeError: 'OVSIntegrationBridge' object has no attribute 'delete_group'
2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677469

Title:
  networking-sfc is breaking tacker CI

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/44/448844/6/check/gate-tacker-dsvm-
  functional-ubuntu-xenial-nv/31f9ef1/logs/screen-q-agt.txt.gz

  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-8948a445-84d5-4cd1-8084-551b7b135dcf - -] Agent main thread died of an 
exception
  2017-03-30 04:24:18.839 10667 ERROR 

[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible with Python 3

2017-02-28 Thread yong sheng gong
** Changed in: tacker
Milestone: None => ocata-2

** Changed in: tacker
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280105

Title:
  urllib/urllib2 is incompatible with Python 3

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in Fuel for OpenStack:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Committed
Status in neutron:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in refstack:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  In Progress
Status in Zuul:
  In Progress

Bug description:
  urllib/urllib2 is incompatible with Python 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions



[Yahoo-eng-team] [Bug 1668141] [NEW] provide API for admin user to show neutron configuration

2017-02-26 Thread yong sheng gong
Public bug reported:

Problem:
Physical networks are defined in the neutron configuration file, and the
admin has no API to inspect that configuration. Higher-level applications
need this information: for example, Horizon lets the admin create a network
for a target project, but it cannot populate the physical network select
box with the physical networks neutron supports, nor offer the supported
network types.

Expectation:
Provide an API that returns the neutron server configuration as JSON.
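A sketch of the kind of JSON payload such an API could return, assembled from values that today live only in the ml2 configuration file (the field names and the endpoint shape are illustrative assumptions; no such API exists yet):

```python
import json

def build_config_view(type_drivers, physical_networks):
    """Assemble the admin-visible subset of server configuration that a
    client like Horizon needs to populate its network-creation form."""
    return {
        "network_types": list(type_drivers),
        "physical_networks": list(physical_networks),
    }

payload = json.dumps(
    build_config_view(["vlan", "vxlan", "flat"], ["physnet1", "physnet2"]))
```

An admin-only GET endpoint returning this payload would let Horizon fill its select boxes without reading neutron's config files directly.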

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668141

Title:
  provide API for admin user to show neutron configuration

Status in neutron:
  New

Bug description:
  Problem:
  Physical networks are defined in the neutron configuration file, and the
  admin has no API to inspect that configuration. Higher-level applications
  need this information: for example, Horizon lets the admin create a
  network for a target project, but it cannot populate the physical network
  select box with the physical networks neutron supports, nor offer the
  supported network types.

  Expectation:
  Provide an API that returns the neutron server configuration as JSON.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1668141/+subscriptions



[Yahoo-eng-team] [Bug 1668125] [NEW] Add ability to change default quota

2017-02-26 Thread yong sheng gong
Public bug reported:

Problem:
Currently, the default quota is defined in the neutron server
configuration file, neutron.conf, and the admin is not able to change it
via the API.

Other services such as nova and cinder support this, for example 'nova
quota-class-update --instances -1 --cores -1 default'.

Expectation:
Implement an API to let the admin change the default quota. On the client
side, it might look like:
neutron quota-class-update --ports xx --networks yy --security-groups zz default
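The requested semantics mirror nova's quota classes: an admin-set default overrides the config-file default, and a per-tenant quota still overrides both. A hedged sketch of that resolution order (the names and values are illustrative, not neutron's actual quota engine):

```python
# Config-file defaults, as would be read from neutron.conf.
CONF_DEFAULTS = {"port": 500, "network": 100, "security_group": 10}

def effective_quota(resource, class_overrides, tenant_overrides):
    """Resolve a quota limit: the per-tenant value wins, then the
    admin-updated 'default' quota class, then the neutron.conf default."""
    if resource in tenant_overrides:
        return tenant_overrides[resource]
    if resource in class_overrides:
        return class_overrides[resource]
    return CONF_DEFAULTS[resource]

# As if the admin had run the proposed 'quota-class-update --ports 1000 default'
limit = effective_quota("port", {"port": 1000}, {})
```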

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668125

Title:
  Add ability to change default quota

Status in neutron:
  New

Bug description:
  Problem:
  Currently, the default quota is defined in configure file of neutron server: 
neutron.conf.
  admin isnot able to change it via API.

  other service nova, cinder can do it by, for example 'nova quota-
  class-update --instances -1 --cores -1 default'

  expectation:
  implement an API to let admin change the default quota. By client, it maybe:
  neutron quota-class-update --ports xx --networks yy --security-groups --zz 
default

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1668125/+subscriptions



[Yahoo-eng-team] [Bug 1651712] [NEW] failed to start VM on disabled port_security_enabled network

2016-12-21 Thread yong sheng gong
Public bug reported:

Starting a VM on a network with port_security_enabled disabled fails:
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
[req-ee15cc56-ef1d-4e25-889d-4634804fae57 ff5a300a13f846a08f47c08a3b14f162 
3d0d66439f3640c79007c0ea842f - - -] Instance failed network setup after 1 
attempt(s)
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager Traceback (most recent 
call last):
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
1397, in _allocate_network_async
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
bind_host_id=bind_host_id)
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 861, in allocate_for_instance
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager security_group_ids)
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 801, in _create_ports_for_instance
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager neutron, instance, 
created_port_ids)
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
220, in __exit__
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager self.force_reraise()
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
196, in force_reraise
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 783, in _create_ports_for_instance
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager raise 
exception.SecurityGroupCannotBeApplied()
2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
SecurityGroupCannotBeApplied: Network requires port_security_enabled and subnet 
associated in order to apply security groups.
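The failure comes from nova refusing to apply security groups to a port on a network with port security disabled; the traceback shows the real check in nova/network/neutronv2/api.py. A condensed sketch of that guard (the function and its signature are illustrative, not nova's actual code):

```python
class SecurityGroupCannotBeApplied(Exception):
    """Mirrors nova's exception of the same name."""

def validate_port_request(port_security_enabled, security_groups):
    """Security groups require port security on the network. Booting
    with an explicitly empty security-group list avoids the conflict."""
    if security_groups and not port_security_enabled:
        raise SecurityGroupCannotBeApplied()

validate_port_request(False, [])  # OK: no groups requested
```

In practice a default security group is often attached implicitly, which is why an instance boot on such a network can fail even when the user never asked for security groups.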

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1651712

Title:
  failed to start VM on disabled port_security_enabled network

Status in OpenStack Compute (nova):
  New

Bug description:
  Starting a VM on a network with port_security_enabled disabled fails:
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
[req-ee15cc56-ef1d-4e25-889d-4634804fae57 ff5a300a13f846a08f47c08a3b14f162 
3d0d66439f3640c79007c0ea842f - - -] Instance failed network setup after 1 
attempt(s)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager Traceback (most recent 
call last):
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
1397, in _allocate_network_async
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
bind_host_id=bind_host_id)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 861, in allocate_for_instance
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager security_group_ids)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 801, in _create_ports_for_instance
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager neutron, instance, 
created_port_ids)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
220, in __exit__
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager self.force_reraise()
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 783, in _create_ports_for_instance
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager raise 
exception.SecurityGroupCannotBeApplied()
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
SecurityGroupCannotBeApplied: Network requires port_security_enabled and subnet 
associated in order to apply security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1651712/+subscriptions


[Yahoo-eng-team] [Bug 1612186] [NEW] failed to create flavor router

2016-08-11 Thread yong sheng gong
Public bug reported:
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 510, in _create
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource obj = 
do_create(body)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 492, in do_create
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource request.context, 
reservation.reservation_id)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 485, in do_create
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource return 
obj_creator(request.context, **kwargs)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 472, in 
create_router
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource 
self).create_router(context, router)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/db/l3_db.py", line 1727, in create_router
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource router)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/l3_db.py", line 272, in create_router
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     transaction=False)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/common_db_mixin.py", line 66, in safe_creation
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     obj = create_fn()
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 78, in _create_router_db
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     context, router, tenant_id)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/l3_db.py", line 253, in _create_router_db
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     router_id=router['id'], router_db=router_db)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/callbacks/registry.py", line 44, in notify
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     _get_callback_manager().notify(resource, event, trigger, **kwargs)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/api.py", line 89, in wrapped
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     raise db_exc.RetryRequest(e)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     self.force_reraise()
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/api.py", line 84, in wrapped
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/callbacks/manager.py", line 130, in notify
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     raise exceptions.CallbackFailure(errors=errors)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource CallbackFailure: Callback neutron.services.l3_router.service_providers.driver_controller.DriverController._set_router_provider failed with "can't set attribute"
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource


solution:
https://github.com/openstack/neutron/blob/master/neutron/services/l3_router/service_providers/driver_controller.py#L67
it should be: self._flavor_plugin_ref
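
The "can't set attribute" failure above is the classic symptom of assigning to a read-only property. A minimal standalone illustration (a hypothetical class, not neutron's actual DriverController):

```python
# Hypothetical sketch of the failure mode: _flavor_plugin is a read-only
# property, so code that wants to cache the plugin must write to the
# backing attribute _flavor_plugin_ref instead.
class Controller(object):
    def __init__(self):
        self._flavor_plugin_ref = None

    @property
    def _flavor_plugin(self):
        # Lazily resolve and cache the flavors plugin (stand-in value).
        if self._flavor_plugin_ref is None:
            self._flavor_plugin_ref = "flavors-plugin"
        return self._flavor_plugin_ref


c = Controller()
try:
    c._flavor_plugin = "flavors-plugin"   # buggy: property has no setter
except AttributeError:
    pass                                  # the "can't set attribute" error
c._flavor_plugin_ref = "flavors-plugin"   # fixed: assign the backing attribute
```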

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

[Yahoo-eng-team] [Bug 1612183] [NEW] l3 router's driver controller is using wrong invalid exception

2016-08-11 Thread yong sheng gong
Public bug reported:

many places in driver_controller are using the wrong Invalid exception, which
has been moved to neutron_lib. For example:
https://github.com/openstack/neutron/blob/master/neutron/services/l3_router/service_providers/driver_controller.py#L113

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612183

Title:
  l3 router's driver controller is using wrong invalid exception

Status in neutron:
  In Progress

Bug description:
  many places in driver_controller are using the wrong Invalid exception, which
has been moved to neutron_lib. For example:
  
https://github.com/openstack/neutron/blob/master/neutron/services/l3_router/service_providers/driver_controller.py#L113

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611626] [NEW] neutron port-list consumes much longer for normal tenant user than admin role user

2016-08-10 Thread yong sheng gong
Public bug reported:

I have a neutron deployment with just 300 ports.
For an admin user, neutron port-list takes just 2 seconds, but for a normal
tenant user it takes more than 10 seconds.

I examined the API code and believe it is the authorization check that
makes the difference.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611626

Title:
  neutron port-list consumes much longer for normal tenant user than
  admin role user

Status in neutron:
  New

Bug description:
  I have a neutron deployment with just 300 ports.
  For an admin user, neutron port-list takes just 2 seconds, but for a normal
tenant user it takes more than 10 seconds.

  I examined the API code and believe it is the authorization check that
  makes the difference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591431] [NEW] openstack/common/cache should be removed

2016-06-10 Thread yong sheng gong
Public bug reported:

Since oslo.cache is used, we can remove openstack/common/cache
now

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591431

Title:
  openstack/common/cache should be removed

Status in neutron:
  Invalid

Bug description:
  Since oslo.cache is used, we can remove openstack/common/cache
  now

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586731] [NEW] restart neutron ovs agent will leave the fanout queue behind

2016-05-29 Thread yong sheng gong
Public bug reported:

to reproduce,
sudo rabbitmqctl list_queues
restart neutron-openvswitch-agent
sudo rabbitmqctl list_queues


q-agent-notifier-dvr-update 0
q-agent-notifier-dvr-update.ubuntu64 0
q-agent-notifier-dvr-update_fanout_714f4e99b33a4a41863406fcc26b9162 0
q-agent-notifier-dvr-update_fanout_a2771eb21e914195b9a6cc3f930b5afb 0
q-agent-notifier-l2population-update 0
q-agent-notifier-l2population-update.ubuntu64 0
q-agent-notifier-l2population-update_fanout_6b2637e57995416ab772259a974315e0 3
q-agent-notifier-l2population-update_fanout_fe9c07aaa8894f55bfb49717f955aa55 0
q-agent-notifier-network-update 0
q-agent-notifier-network-update.ubuntu64 0
q-agent-notifier-network-update_fanout_1ae903109fe844a39c925e49d5f06498 0
q-agent-notifier-network-update_fanout_8c15bef355c645e58226a9b98efe3f28 0
q-agent-notifier-port-delete 0
q-agent-notifier-port-delete.ubuntu64 0
q-agent-notifier-port-delete_fanout_cd794c4456cc4bedb7993f5d32f0b1b9 0
q-agent-notifier-port-delete_fanout_f09ffae3b0fa48c882eddd59baae2169 0
q-agent-notifier-port-update 0
q-agent-notifier-port-update.ubuntu64 0
q-agent-notifier-port-update_fanout_776b9b5b1d0244fc8ddc0a1e309d9ab2 0
q-agent-notifier-port-update_fanout_f3345013434545fd9b72b7f54a5c9818 0
q-agent-notifier-security_group-update 0
q-agent-notifier-security_group-update.ubuntu64 0
q-agent-notifier-security_group-update_fanout_b5421c8ae5e94c318502ee8fbc62852d 0
q-agent-notifier-security_group-update_fanout_f4d73a80c9a9444c8a9899cbda3e71ed 0
q-agent-notifier-tunnel-delete 0
q-agent-notifier-tunnel-delete.ubuntu64 0
q-agent-notifier-tunnel-delete_fanout_743b58241f6243c0a776a0dbf58da652 0
q-agent-notifier-tunnel-delete_fanout_ddb8fad952b348a8bf12bc5c741d0a25 0
q-agent-notifier-tunnel-update 0
q-agent-notifier-tunnel-update.ubuntu64 0
q-agent-notifier-tunnel-update_fanout_1e0b0f7ca63f404ba5f41def9d12f00d 0
q-agent-notifier-tunnel-update_fanout_e86e9b073ec74766b9e755439827badc 1

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586731

Title:
  restart neutron ovs agent will leave the fanout queue behind

Status in neutron:
  New

Bug description:
  to reproduce,
  sudo rabbitmqctl list_queues
  restart neutron-openvswitch-agent
  sudo rabbitmqctl list_queues

  
  q-agent-notifier-dvr-update 0
  q-agent-notifier-dvr-update.ubuntu64 0
  q-agent-notifier-dvr-update_fanout_714f4e99b33a4a41863406fcc26b9162 0
  q-agent-notifier-dvr-update_fanout_a2771eb21e914195b9a6cc3f930b5afb 0
  q-agent-notifier-l2population-update 0
  q-agent-notifier-l2population-update.ubuntu64 0
  q-agent-notifier-l2population-update_fanout_6b2637e57995416ab772259a974315e0 3
  q-agent-notifier-l2population-update_fanout_fe9c07aaa8894f55bfb49717f955aa55 0
  q-agent-notifier-network-update 0
  q-agent-notifier-network-update.ubuntu64 0
  q-agent-notifier-network-update_fanout_1ae903109fe844a39c925e49d5f06498 0
  q-agent-notifier-network-update_fanout_8c15bef355c645e58226a9b98efe3f28 0
  q-agent-notifier-port-delete 0
  q-agent-notifier-port-delete.ubuntu64 0
  q-agent-notifier-port-delete_fanout_cd794c4456cc4bedb7993f5d32f0b1b9 0
  q-agent-notifier-port-delete_fanout_f09ffae3b0fa48c882eddd59baae2169 0
  q-agent-notifier-port-update 0
  q-agent-notifier-port-update.ubuntu64 0
  q-agent-notifier-port-update_fanout_776b9b5b1d0244fc8ddc0a1e309d9ab2 0
  q-agent-notifier-port-update_fanout_f3345013434545fd9b72b7f54a5c9818 0
  q-agent-notifier-security_group-update 0
  q-agent-notifier-security_group-update.ubuntu64 0
  q-agent-notifier-security_group-update_fanout_b5421c8ae5e94c318502ee8fbc62852d 0
  q-agent-notifier-security_group-update_fanout_f4d73a80c9a9444c8a9899cbda3e71ed 0
  q-agent-notifier-tunnel-delete 0
  q-agent-notifier-tunnel-delete.ubuntu64 0
  q-agent-notifier-tunnel-delete_fanout_743b58241f6243c0a776a0dbf58da652 0
  q-agent-notifier-tunnel-delete_fanout_ddb8fad952b348a8bf12bc5c741d0a25 0
  q-agent-notifier-tunnel-update 0
  q-agent-notifier-tunnel-update.ubuntu64 0
  q-agent-notifier-tunnel-update_fanout_1e0b0f7ca63f404ba5f41def9d12f00d 0
  q-agent-notifier-tunnel-update_fanout_e86e9b073ec74766b9e755439827badc 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586470] [NEW] ironic driver is raising exception when some of node's instance cannot be found

2016-05-27 Thread yong sheng gong
Public bug reported:

Ironic will sometimes fail to clear the node's instance_uuid
even when the instance is deleted on the nova side.

The ironic virt driver should be more tolerant in this case.
Instead of raising an exception, it can log a warning for the failed
instances.

the problem code is at
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L475
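
A sketch of the suggested behaviour, using hypothetical helper and client names (the real code lives at the driver.py link above):

```python
import logging

LOG = logging.getLogger(__name__)


def clear_instance_uuids(client, node_uuids):
    """Clear instance_uuid on each node, tolerating per-node failures.

    Hypothetical helper (not nova's actual ironic driver code): instead
    of raising on the first failed node and aborting the whole pass, log
    a warning, carry on, and return the nodes that could not be cleared.
    """
    failed = []
    for node_uuid in node_uuids:
        try:
            client.clear_instance_uuid(node_uuid)  # assumed client call
        except Exception:
            LOG.warning("Failed to clear instance_uuid on node %s; "
                        "skipping it", node_uuid)
            failed.append(node_uuid)
    return failed
```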

** Affects: nova
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586470

Title:
  ironic driver is raising exception when some of node's instance cannot
  be found

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Ironic will sometimes fail to clear the node's instance_uuid
  even when the instance is deleted on the nova side.

  The ironic virt driver should be more tolerant in this case.
  Instead of raising an exception, it can log a warning for the failed
  instances.

  the problem code is at
  https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L475

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1586470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585914] [NEW] should not use 'in' to tell if the boot request is in aggregate azs string

2016-05-26 Thread yong sheng gong
Public bug reported:

code at
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/availability_zone_filter.py#L48
and
https://github.com/openstack/nova/blob/master/nova/tests/unit/scheduler/filters/test_availability_zone_filters.py#L40

shows we can support a comma-separated list of AZs in one aggregate.

But the AZs are stored as a single string in the aggregate's metadata, and
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/availability_zone_filter.py#L48
uses the 'in' operator directly on that string.

solution is to:

form a list of AZs from the aggregate's metadata, and then use the 'in'
operator.
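
The false positive is easy to demonstrate; a minimal sketch of the buggy substring check versus the proposed list-membership check (AZ names are illustrative):

```python
# Aggregate metadata stores the AZ list as one comma-separated string.
metadata_azs = "az10,az2"

# Buggy: 'in' on a string is a substring test, so "az1" matches "az10"
# even though az1 is not an AZ of this aggregate.
assert "az1" in metadata_azs          # false positive!

# Proposed fix: split into a list first, then 'in' means membership.
az_list = [az.strip() for az in metadata_azs.split(",")]
assert "az1" not in az_list
assert "az10" in az_list and "az2" in az_list
```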

** Affects: nova
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Summary changed:

-  should not use 'in' to tell if the boot request in aggregate azs string
+ should not use 'in' to tell if the boot request is in aggregate azs string

** Changed in: nova
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585914

Title:
  should not use 'in' to tell if the boot request is in aggregate azs
  string

Status in OpenStack Compute (nova):
  New

Bug description:
  code at
  
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/availability_zone_filter.py#L48
  and
  
https://github.com/openstack/nova/blob/master/nova/tests/unit/scheduler/filters/test_availability_zone_filters.py#L40

  shows we can support a list of AZs which is comma separated in one
  aggregate.

  but the azs is a string in metadata of aggregate, which
  
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/availability_zone_filter.py#L48
  is using 'in' operator.

  solution is to:

  form a list of azs from aggregate's metadata, and then use 'in'
  operator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580588] [NEW] use network's dns_domain to generate dns_assignment

2016-05-11 Thread yong sheng gong
Public bug reported:

Problem:
currently, the port's dns_assignment is generated by combining the dns_name
and conf.dns_domain, even if the dns_domain of the port's network is set.

expectation:

generate the dns_assignment according to the dns_domain of the port's network,
which will scope the dns name by network instead of by the whole neutron
deployment.
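
A sketch of the expected selection logic, with hypothetical names (not neutron's actual DNS integration code):

```python
def build_dns_assignment(dns_name, ip_address, network_dns_domain,
                         conf_dns_domain):
    """Prefer the network's dns_domain over the deployment-wide
    conf.dns_domain when building a port's dns_assignment entry.

    Hypothetical helper for illustration only; field names mimic the
    dns_assignment port attribute.
    """
    domain = network_dns_domain or conf_dns_domain
    return {
        'hostname': dns_name,
        'ip_address': ip_address,
        'fqdn': '%s.%s' % (dns_name, domain.rstrip('.')),
    }
```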

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580588

Title:
  use network's dns_domain to generate dns_assignment

Status in neutron:
  New

Bug description:
  Problem:
  currently, the port's dns_assignment is generated by combining the dns_name
and conf.dns_domain, even if the dns_domain of the port's network is set.

  expectation:

  generate the dns_assignment according to the dns_domain of the port's network,
which will scope the dns name by network instead of by the whole neutron
deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579314] [NEW] there is no haproxy for lbaas v2 doc in devstack README.md

2016-05-07 Thread yong sheng gong
Public bug reported:

octavia is the default plugin driver, but the haproxy is also available.

we should add a hint at https://github.com/openstack/neutron-
lbaas/tree/master/devstack

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579314

Title:
  there is no haproxy for lbaas v2 doc in devstack README.md

Status in neutron:
  In Progress

Bug description:
  octavia is the default plugin driver, but the haproxy is also
  available.

  we should add a hint at https://github.com/openstack/neutron-
  lbaas/tree/master/devstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576275] [NEW] generated sriov agent config file is wrong with config group

2016-04-28 Thread yong sheng gong
Public bug reported:

the group name should be sriov_nic instead of ml2_sriov

sriov_nic is at 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_sriov/agent/common/config.py#L89
but ml2_sriov is at 
https://github.com/openstack/neutron/blob/master/neutron/opts.py#L283
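
To make the mismatch concrete, a sketch in sample-config terms (the option value is illustrative; group names are per the two links above):

```ini
# What the SR-IOV agent actually reads:
[sriov_nic]
physical_device_mappings = physnet2:eth3

# What the generated sample file wrongly emits for the same options:
# [ml2_sriov]
# physical_device_mappings = physnet2:eth3
```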

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

** Description changed:

  the group name should be sriov_nic instead of ml2_sriov
+ 
+ sriov_nic is at 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_sriov/agent/common/config.py#L89
+ but ml2_sriov is at 
https://github.com/openstack/neutron/blob/master/neutron/opts.py#L283

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1576275

Title:
  generated sriov agent config file is wrong with config group

Status in neutron:
  In Progress

Bug description:
  the group name should be sriov_nic instead of ml2_sriov

  sriov_nic is at 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_sriov/agent/common/config.py#L89
  but ml2_sriov is at 
https://github.com/openstack/neutron/blob/master/neutron/opts.py#L283

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1576275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503975] [NEW] no such option: consoleauth_topic

2015-10-08 Thread yong sheng gong
Public bug reported:

[gongysh@fedora22 nova]$ /usr/bin/nova-consoleauth --config-file 
/etc/nova/nova.conf
No handlers could be found for logger "oslo_config.cfg"
2015-10-08 14:25:50.996 31923 CRITICAL nova [-] NoSuchOptError: no such option: 
consoleauth_topic
2015-10-08 14:25:50.996 31923 ERROR nova Traceback (most recent call last):
2015-10-08 14:25:50.996 31923 ERROR nova   File "/usr/bin/nova-consoleauth", 
line 10, in <module>
2015-10-08 14:25:50.996 31923 ERROR nova sys.exit(main())
2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/mnt/data3/opt/stack/nova/nova/cmd/consoleauth.py", line 40, in main
2015-10-08 14:25:50.996 31923 ERROR nova topic=CONF.consoleauth_topic)
2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1902, in __getattr__
2015-10-08 14:25:50.996 31923 ERROR nova raise NoSuchOptError(name)
2015-10-08 14:25:50.996 31923 ERROR nova NoSuchOptError: no such option: 
consoleauth_topic
2015-10-08 14:25:50.996 31923 ERROR nova

** Affects: nova
     Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: nova
     Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1503975

Title:
  no such option: consoleauth_topic

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  [gongysh@fedora22 nova]$ /usr/bin/nova-consoleauth --config-file 
/etc/nova/nova.conf
  No handlers could be found for logger "oslo_config.cfg"
  2015-10-08 14:25:50.996 31923 CRITICAL nova [-] NoSuchOptError: no such 
option: consoleauth_topic
  2015-10-08 14:25:50.996 31923 ERROR nova Traceback (most recent call last):
  2015-10-08 14:25:50.996 31923 ERROR nova   File "/usr/bin/nova-consoleauth", 
line 10, in <module>
  2015-10-08 14:25:50.996 31923 ERROR nova sys.exit(main())
  2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/mnt/data3/opt/stack/nova/nova/cmd/consoleauth.py", line 40, in main
  2015-10-08 14:25:50.996 31923 ERROR nova topic=CONF.consoleauth_topic)
  2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1902, in __getattr__
  2015-10-08 14:25:50.996 31923 ERROR nova raise NoSuchOptError(name)
  2015-10-08 14:25:50.996 31923 ERROR nova NoSuchOptError: no such option: 
consoleauth_topic
  2015-10-08 14:25:50.996 31923 ERROR nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1503975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503974] [NEW] no such option: consoleauth_topic

2015-10-08 Thread yong sheng gong
Public bug reported:

[gongysh@fedora22 nova]$ /usr/bin/nova-consoleauth --config-file 
/etc/nova/nova.conf
No handlers could be found for logger "oslo_config.cfg"
2015-10-08 14:25:50.996 31923 CRITICAL nova [-] NoSuchOptError: no such option: 
consoleauth_topic
2015-10-08 14:25:50.996 31923 ERROR nova Traceback (most recent call last):
2015-10-08 14:25:50.996 31923 ERROR nova   File "/usr/bin/nova-consoleauth", 
line 10, in <module>
2015-10-08 14:25:50.996 31923 ERROR nova sys.exit(main())
2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/mnt/data3/opt/stack/nova/nova/cmd/consoleauth.py", line 40, in main
2015-10-08 14:25:50.996 31923 ERROR nova topic=CONF.consoleauth_topic)
2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1902, in __getattr__
2015-10-08 14:25:50.996 31923 ERROR nova raise NoSuchOptError(name)
2015-10-08 14:25:50.996 31923 ERROR nova NoSuchOptError: no such option: 
consoleauth_topic
2015-10-08 14:25:50.996 31923 ERROR nova

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1503974

Title:
  no such option: consoleauth_topic

Status in OpenStack Compute (nova):
  New

Bug description:
  [gongysh@fedora22 nova]$ /usr/bin/nova-consoleauth --config-file 
/etc/nova/nova.conf
  No handlers could be found for logger "oslo_config.cfg"
  2015-10-08 14:25:50.996 31923 CRITICAL nova [-] NoSuchOptError: no such 
option: consoleauth_topic
  2015-10-08 14:25:50.996 31923 ERROR nova Traceback (most recent call last):
  2015-10-08 14:25:50.996 31923 ERROR nova   File "/usr/bin/nova-consoleauth", 
line 10, in <module>
  2015-10-08 14:25:50.996 31923 ERROR nova sys.exit(main())
  2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/mnt/data3/opt/stack/nova/nova/cmd/consoleauth.py", line 40, in main
  2015-10-08 14:25:50.996 31923 ERROR nova topic=CONF.consoleauth_topic)
  2015-10-08 14:25:50.996 31923 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1902, in __getattr__
  2015-10-08 14:25:50.996 31923 ERROR nova raise NoSuchOptError(name)
  2015-10-08 14:25:50.996 31923 ERROR nova NoSuchOptError: no such option: 
consoleauth_topic
  2015-10-08 14:25:50.996 31923 ERROR nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1503974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504370] [NEW] track_quota_usage conf variable is not in neutron.conf

2015-10-08 Thread yong sheng gong
Public bug reported:

By default, neutron's configurable items should be put in the configuration
files for deployers to adjust.
track_quota_usage is missing there.
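
The missing entry would look like this in neutron.conf (a sketch, not the shipped sample; the group name follows neutron's quota option registration):

```ini
[QUOTAS]
# Keep track of current resource quota usage in the database.
track_quota_usage = true
```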

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504370

Title:
  track_quota_usage conf variable is not in neutron.conf

Status in neutron:
  In Progress

Bug description:
  By default, neutron's configurable items should be put in the configuration
files for deployers to adjust.
  track_quota_usage is missing there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-09-02 Thread yong sheng gong
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Ironic:
  Fix Committed
Status in neutron:
  New

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is know to be caused when the run of py27 preceeds py34 and
  can be solved erasing the .testrepository and running "tox -e py34"
  first of all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490815] [NEW] subnetallocationpool should not be an extension

2015-08-31 Thread yong sheng gong
Public bug reported:

look at 
https://github.com/openstack/neutron/blob/master/neutron/extensions/subnetallocation.py,
which defines an extension Subnetallocation but defines no extension resource
at all. Actually, it is implemented in the core resources.
So I think we should remove this extension.

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490815

Title:
  subnetallocationpool should not be an extension

Status in neutron:
  New

Bug description:
  look at 
https://github.com/openstack/neutron/blob/master/neutron/extensions/subnetallocation.py,
  which defines an extension Subnetallocation but defines no extension resource
at all. Actually, it is implemented in the core resources.
  So I think we should remove this extension.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490389] [NEW] cannot receive extra args for ostestr via tox

2015-08-30 Thread yong sheng gong
Public bug reported:

ostestr http://docs.openstack.org/developer/os-testr/ostestr.html has many more
arguments to run test cases, but our tox.ini limits the usage to just --regex.

For example, --serial runs the cases serially, etc.

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

** Description changed:

  ostestr http://docs.openstack.org/developer/os-testr/ostestr.html has many
more arguments to run test cases, but our tox.ini limits the usage to just
--regex.
+ 
+ For example, --serial runs the cases serially, etc.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490389

Title:
  cannot receive extra args for ostestr via tox

Status in neutron:
  In Progress

Bug description:
  ostestr http://docs.openstack.org/developer/os-testr/ostestr.html has many
more arguments to run test cases, but our tox.ini limits the usage to just
--regex.

  For example, --serial runs the cases serially, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488868] [NEW] failed to run test_qos_plugin.TestQosPlugin independently

2015-08-26 Thread yong sheng gong
 
                                                                     Runtime (s)
--------------------------------------------------------------------------------
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_create_policy_rule  0.396
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_add_policy  0.391
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_update_policy_rule  0.389
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_delete_policy_rule  0.362
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_get_policy_bandwidth_limit_rules_for_nonexistent_policy  0.145
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_create_policy_rule_for_nonexistent_policy  0.133
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_delete_policy  0.133
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_delete_policy_rule_for_nonexistent_policy  0.126
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_get_policy_bandwidth_limit_rules_for_policy_with_filters  0.044
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_get_policy_for_nonexistent_policy  0.031
ERROR: InvocationError: '/mnt/data3/opt/stack/neutron/.tox/py27/bin/ostestr --regex neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin'
______________________________ summary ______________________________
ERROR:   py27: commands failed

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488868

Title:
  failed to run test_qos_plugin.TestQosPlugin independently

Status in neutron:
  In Progress

Bug description:
  
  neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_delete_policy_rule
  -------------------------------------------------------------------------------------

  Captured pythonlogging:
  ~~~~~~~~~~~~~~~~~~~~~~~
  2015-08-26 17:13:10,783  WARNING [oslo_config.cfg] Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
  2015-08-26 17:13:10,799 INFO [neutron.manager] Loading core plugin: neutron.db.db_base_plugin_v2.NeutronDbPluginV2
  2015-08-26 17:13:10,800  WARNING [neutron.notifiers.nova] Authenticating to nova using nova_admin_* options is deprecated. This should be done using an auth plugin, like password
  2015-08-26 17:13:10,802 INFO [neutron.manager] Loading Plugin: qos
  2015-08-26 17:13:10,804 INFO [neutron.services.qos.notification_drivers.manager] Loading message_queue (Message queue updates) notification driver for QoS plugin

  Captured traceback:
  ~~~~~~~~~~~~~~~~~~~
  Traceback (most recent call last):
    File "neutron/tests/unit/services/qos/test_qos_plugin.py", line 111, in test_delete_policy_rule
      self.ctxt, self.rule.id, self.policy.id)
    File "neutron/services/qos/qos_plugin.py", line 128, in delete_policy_bandwidth_limit_rule
      policy.reload_rules()
    File "neutron/objects/qos/policy.py", line 63, in reload_rules
      rules = rule_obj_impl.get_rules(self._context, self.id)
    File "neutron/objects/qos/rule.py", line 37, in get_rules
      rules = rule_cls.get_objects(context, qos_policy_id=qos_policy_id)
    File "neutron/objects/base.py", line 122, in get_objects
      db_objs = db_api.get_objects(context, cls.db_model, **kwargs)
    File "neutron/db/api.py", line 87, in get_objects
      .filter_by(**kwargs)
    File "/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2399, in all
      return list(self)
    File "/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2516, in __iter__
      return self._execute_and_instances(context)
    File "/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2531, in _execute_and_instances
      result = conn.execute(querycontext.statement, self._params)
    File "/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
      return meth(self, multiparams, params)
    File /mnt/data3/opt/stack/neutron

[Yahoo-eng-team] [Bug 1487324] [NEW] failed to list qos rule type due to policy check

2015-08-21 Thread yong sheng gong
 extension EntryPoint.parse('value = 
cliff.formatters.value:ValueFormatter')
DEBUG: neutronclient.neutron.v2_0.qos.rule.ListQoSRuleTypes 
get_data(Namespace(columns=[], fields=[], formatter='table', max_width=0, 
page_size=None, quote_mode='nonnumeric', request_format='json', 
show_details=False, sort_dir=[], sort_key=[]))
DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://172.17.42.1:5000/v2.0/tokens
DEBUG: keystoneclient.session REQ: curl -g -i -X GET http://172.17.42.1:9696/v2.0/qos/rule-types.json -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}cbf58ad3ce9ff5b3eb3b7e8043ca6699841277b3"
DEBUG: keystoneclient.session RESP: [500] Date: Fri, 21 Aug 2015 05:52:42 GMT Connection: keep-alive Content-Type: application/json; charset=UTF-8 Content-Length: 211 X-Openstack-Request-Id: req-ba182095-d12d-4bde-a47e-88507e4c4898
RESP BODY: {"NeutronError": {"message": "Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found", "type": "PolicyCheckError", "detail": ""}}

DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": "Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found", "type": "PolicyCheckError", "detail": ""}}
ERROR: neutronclient.shell Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found
Traceback (most recent call last):
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/shell.py", line 817, in run_subcommand
    return run_command(cmd, cmd_parser, sub_argv)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/shell.py", line 111, in run_command
    return cmd.run(known_args)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/common/command.py", line 29, in run
    return super(OpenStackCommand, self).run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/display.py", line 92, in run
    column_names, data = self.take_action(parsed_args)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/common/command.py", line 35, in take_action
    return self.get_data(parsed_args)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/neutron/v2_0/__init__.py", line 716, in get_data
    data = self.retrieve_list(parsed_args)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/neutron/v2_0/__init__.py", line 679, in retrieve_list
    data = self.call_server(neutron_client, search_opts, parsed_args)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/neutron/v2_0/__init__.py", line 651, in call_server
    data = obj_lister(**search_opts)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 102, in with_params
    ret = self.function(instance, *args, **kwargs)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1706, in list_qos_rule_types
    retrieve_all, **_params)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 307, in list
    for r in self._pagination(collection, path, **params):
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 320, in _pagination
    res = self.get(path, params=params)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 293, in get
    headers=headers, params=params)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 270, in retry_request
    headers=headers, params=params)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 211, in do_request
    self._handle_fault_response(status_code, replybody)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 185, in _handle_fault_response
    exception_handler_v20(status_code, des_error_body)
  File "/mnt/data3/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 70, in exception_handler_v20
    status_code=status_code)
InternalServerError: Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found
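For context, the 500 comes from the policy engine evaluating a tenant-scoped rule against the QoS rule-type resource, which carries no tenant_id to match against. The failing check would correspond to a rule of roughly this shape (a hypothetical policy.json fragment for illustration, not the shipped default):

```json
{
    "get_rule_type": "tenant_id:%(tenant_id)s"
}
```

Rule types are global, tenant-less resources, so any `tenant_id:`-based match on them cannot be satisfied.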

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487324

Title:
  failed to list qos rule type due to policy check

Status in neutron:
  New

Bug description:
  2015-08-21 13:52:36.212 23375 INFO neutron.wsgi [-] (23375) accepted 
('192.168.1.118', 43606)
  2015-08-21 13:52:42.711 ERROR neutron.policy 
[req-ba182095-d12d-4bde-a47e-88507e4c4898 demo demo] Unable to verify 
match:%(tenant_id)s as the parent

[Yahoo-eng-team] [Bug 1487275] [NEW] add an API to show services registered in neutron deployment

2015-08-20 Thread yong sheng gong
Public bug reported:

problem description:
===

There is no API for an operator to know what kind of services are enabled
by neutron via 'service_plugins' in neutron.conf.

After change:
==
provides an API for administrators to look up the services enabled in neutron.
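As a rough sketch of what such an API could return, the server already knows the list from its configuration (the `service_plugins` option name is real; the parsing below is a simplified stand-in, not neutron's oslo.config machinery):

```python
import configparser
import io  # noqa: F401  (io kept for readers who prefer read_file(io.StringIO(...)))

# Simplified stand-in for /etc/neutron/neutron.conf.
SAMPLE_CONF = """\
[DEFAULT]
service_plugins = router,qos,metering
"""

def enabled_services(conf_text):
    """Return the service plugins enabled in the [DEFAULT] section."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    raw = parser.get("DEFAULT", "service_plugins")
    return [name.strip() for name in raw.split(",") if name.strip()]

print(enabled_services(SAMPLE_CONF))  # → ['router', 'qos', 'metering']
```

A GET on a hypothetical /v2.0/services endpoint could then simply serialize this list for the administrator.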

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487275

Title:
  add an API to show services registered in neutron deployment

Status in neutron:
  New

Bug description:
  problem description:
  ===

  There is no API for an operator to know what kind of services are
  enabled by neutron via 'service_plugins' in neutron.conf.

  After change:
  ==
  provides an API for administrators to look up the services enabled in neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486388] [NEW] use timestamp of resources to reduce the agent sync load

2015-08-19 Thread yong sheng gong
Public bug reported:

Problem Description
===

Agents need to resync with the neutron server for various reasons from time
to time. These syncs consume a lot of resources on the neutron server,
database, message queues, etc.


Proposed Change
===

add an update timestamp to neutron resources,
 keep related resources and their last-synced timestamps on the agent side
 when a resync is needed, the agent sends its last-sync timestamp to the
neutron server; the server compares the timestamp with its resources and
then sends only the newer resources to the agent.
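The proposed delta-sync flow can be sketched as follows (a minimal illustration; `get_resources_updated_since` and the in-memory resource store are hypothetical, not neutron RPC API):

```python
import datetime

# Hypothetical server-side store: each resource carries an updated_at timestamp.
RESOURCES = {
    "port-1": {"updated_at": datetime.datetime(2015, 8, 19, 10, 0)},
    "port-2": {"updated_at": datetime.datetime(2015, 8, 19, 12, 0)},
}

def get_resources_updated_since(last_sync):
    """Return only the resources changed after the agent's last-sync timestamp."""
    return {rid: res for rid, res in RESOURCES.items()
            if res["updated_at"] > last_sync}

# Agent resync: instead of pulling everything, send the last-sync timestamp
# and receive only the delta.
delta = get_resources_updated_since(datetime.datetime(2015, 8, 19, 11, 0))
print(sorted(delta))  # → ['port-2']
```

The full-sync fallback is still needed the first time an agent starts, since it has no timestamp to offer.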

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486388

Title:
  use timestamp of resources to reduce the agent sync load

Status in neutron:
  New

Bug description:
  Problem Description
  ===

  Agents need to resync with the neutron server for various reasons from
  time to time. These syncs consume a lot of resources on the neutron
  server, database, message queues, etc.

  
  Proposed Change
  ===

  add an update timestamp to neutron resources,
   keep related resources and their last-synced timestamps on the agent side
   when a resync is needed, the agent sends its last-sync timestamp to the
  neutron server; the server compares the timestamp with its resources and
  then sends only the newer resources to the agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486388/+subscriptions



[Yahoo-eng-team] [Bug 1486354] [NEW] DHCP namespace per VM

2015-08-18 Thread yong sheng gong
Public bug reported:

Problem Description
===

How many namespaces can a Linux host have without a performance penalty?

In a test, we found the Linux box slows down significantly with about
300 namespaces.


Proposed Change
===

Add a configuration item to allow the dhcp agent to create one namespace per VM.
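As a sketch of the proposed knob, the agent could derive a per-port namespace name instead of the per-network one (the option name and helper below are hypothetical; only the `qdhcp-` prefix matches what the dhcp agent uses today):

```python
# Hypothetical config knob; today the dhcp agent uses one
# qdhcp-<network_id> namespace per network.
DHCP_NAMESPACE_PER_VM = True

def dhcp_namespace_name(network_id, port_id=None):
    """Derive the namespace name: per network by default, per VM port
    when the (hypothetical) option is enabled and a port is given."""
    if DHCP_NAMESPACE_PER_VM and port_id:
        return "qdhcp-%s-%s" % (network_id, port_id)
    return "qdhcp-%s" % network_id

print(dhcp_namespace_name("net1", "port1"))  # → qdhcp-net1-port1
```

The trade-off noted above still applies: more namespaces per host, but each one scoped to a single VM's DHCP traffic.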

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486354

Title:
  DHCP namespace per VM

Status in neutron:
  New

Bug description:
  Problem Description
  ===

  How many namespaces can a Linux host have without a performance
  penalty?

  In a test, we found the Linux box slows down significantly with
  about 300 namespaces.


  Proposed Change
  ===

  Add a configuration item to allow the dhcp agent to create one
  namespace per VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486354/+subscriptions



[Yahoo-eng-team] [Bug 1475944] [NEW] devref client_command_extensions.rst refers to broken URL

2015-07-18 Thread yong sheng gong
Public bug reported:

http://docs.openstack.org/developer/python-neutronclient/devref/client_command_extensions.html
 is referred to at
https://github.com/openstack/neutron/blob/master/doc/source/devref/client_command_extensions.rst
but it is not there.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475944

Title:
  devref client_command_extensions.rst refers to broken URL

Status in neutron:
  New

Bug description:
  
http://docs.openstack.org/developer/python-neutronclient/devref/client_command_extensions.html
 is referred to at
  
https://github.com/openstack/neutron/blob/master/doc/source/devref/client_command_extensions.rst
  but it is not there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475944/+subscriptions



[Yahoo-eng-team] [Bug 1473859] [NEW] too much extended port_dict is annoying

2015-07-13 Thread yong sheng gong
2015-07-13 14:36:03.351 INFO neutron.plugins.ml2.managers
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:03.351 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:03.352 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:03.352 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:03.353 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:03.361 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:03.361 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:03.373 INFO neutron.plugins.ml2.managers 
[req-3765a37b-f4b8-4643-9281-b613827dd426 None None] Extended subnet dict for 
driver 'port_security'
2015-07-13 14:36:03.379 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:36:04.206 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:04.219 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
2015-07-13 14:36:04.234 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:04.235 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:04.844 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:05.210 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:05.211 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
2015-07-13 14:36:05.228 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473859

Title:
  too much extended port_dict is annoying

Status in neutron:
  New

Bug description:
  
  2015-07-13 14:35:58.525 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:58.526 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:58.542 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
  2015-07-13 14:35:59.355 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:59.369 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
  2015-07-13 14:35:59.389 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:59.389 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:59.413 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:59.429 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended network dict for 
driver 'port_security'
  2015-07-13 14:35:59.446 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:59.446 INFO neutron.plugins.ml2.managers 
[req-50d1f575-64c8-43cc-a386-3555c03a0f1d None None] Extended port dict for 
driver 'port_security'
  2015-07-13 14:35:59.468 INFO neutron.plugins.ml2.managers 
[req

[Yahoo-eng-team] [Bug 1468175] [NEW] allow to specify subnet to create a floatingip

2015-06-23 Thread yong sheng gong
Public bug reported:

the BP https://blueprints.launchpad.net/neutron/+spec/allow-specific-
floating-ip-address allows us to specify an IP  when creating a
floatingip. In some cases, we have more than one subnets on the external
network, and have different quality and toll policy on them.  To address
these kind of cases, we should allow user to choose subnet to create
floatingip.
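A sketch of the request body the extended API could accept — the `subnet_id` attribute here is the proposed addition, not an existing field of the floatingip resource:

```python
import json

def build_floatingip_request(ext_network_id, subnet_id=None):
    """Build a POST /v2.0/floatingips body; subnet_id is the proposed
    (hypothetical) hint selecting which external subnet to allocate from."""
    fip = {"floating_network_id": ext_network_id}
    if subnet_id is not None:
        fip["subnet_id"] = subnet_id
    return json.dumps({"floatingip": fip})

print(build_floatingip_request("ext-net-uuid", "gold-subnet-uuid"))
```

Without the hint the server keeps today's behavior and picks any external subnet with free addresses.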

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Description changed:

- 
- the BP 
https://blueprints.launchpad.net/neutron/+spec/allow-specific-floating-ip-address
 allows us to specify a IP  when creating a floatingip. In some cases, we have 
more than one subnets on the external network, and have different quality and 
toll policy on them.  To address these kind of cases, we should allow user to 
choose subnet to create floatingip.
+ the BP https://blueprints.launchpad.net/neutron/+spec/allow-specific-
+ floating-ip-address allows us to specify an IP  when creating a
+ floatingip. In some cases, we have more than one subnets on the external
+ network, and have different quality and toll policy on them.  To address
+ these kind of cases, we should allow user to choose subnet to create
+ floatingip.

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468175

Title:
  allow to specify subnet to create a floatingip

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  the BP https://blueprints.launchpad.net/neutron/+spec/allow-specific-
  floating-ip-address allows us to specify an IP  when creating a
  floatingip. In some cases, we have more than one subnets on the
  external network, and have different quality and toll policy on them.
  To address these kind of cases, we should allow user to choose subnet
  to create floatingip.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425844] [NEW] metadata start with db connections

2015-02-26 Thread yong sheng gong
Public bug reported:

The metadata agent should not try to connect to the db since it does not
need db connections at all.

$ neutron-metadata-agent --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/metadata_agent.ini

2015-02-26 17:06:09.768 3045 WARNING oslo_db.sqlalchemy.session [-] SQL 
connection failed. 10 attempts left.
10.16.91.1:5672

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1425844

Title:
  metadata start with db connections

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The metadata agent should not try to connect to the db since it does
  not need db connections at all.

  $ neutron-metadata-agent --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/metadata_agent.ini

  2015-02-26 17:06:09.768 3045 WARNING oslo_db.sqlalchemy.session [-] SQL 
connection failed. 10 attempts left.
  10.16.91.1:5672

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1425844/+subscriptions



[Yahoo-eng-team] [Bug 1425833] [NEW] WorkerService class in metadata agent file is not used

2015-02-25 Thread yong sheng gong
Public bug reported:

https://github.com/openstack/neutron/blob/master/neutron/agent/metadata/agent.py#L276
class WorkerService is not used

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1425833

Title:
  WorkerService class in metadata agent file is not used

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/metadata/agent.py#L276
  class WorkerService is not used

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1425833/+subscriptions



[Yahoo-eng-team] [Bug 1416813] [NEW] default security group table's name is in singular format

2015-02-01 Thread yong sheng gong
Public bug reported:

In general, the tables' names are in plural format, but the default
security group table's name is a singular one.

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416813

Title:
  default security group table's name is in singular format

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In general, the tables' names are in plural format, but the default
  security group table's name is a singular one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416813/+subscriptions



[Yahoo-eng-team] [Bug 1413439] [NEW] dependency of networking_odl is missing in README.odl

2015-01-21 Thread yong sheng gong
Public bug reported:

README.odl does not mention the dependency of external project
networking_odl, which is used by opendaylight mechanism driver.

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413439

Title:
  dependency of networking_odl is missing in README.odl

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  README.odl does not mention the dependency of external project
  networking_odl, which is used by opendaylight mechanism driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413439/+subscriptions



[Yahoo-eng-team] [Bug 1411874] Re: DVR router is seen on multiple controller nodes

2015-01-17 Thread yong sheng gong
Swaminathan Vasudevan (swaminathan-vasudevan)  gave a good analysis  at
https://bugs.launchpad.net/neutron/+bug/1358718. So I think this is not
a bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411874

Title:
  DVR router is seen on multiple controller nodes

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I have Openstack deployed with DVR support.

  I did following to setup my networks

  neutron net-create demo-net
  netdemoid=$(neutron net-list | awk '{if($4=="demo-net"){print $2;}}')
  neutron subnet-create demo-net 10.100.102.0/24 --name demo-subnet
  subnetdemoid=$(neutron subnet-list | awk '{if($4=="demo-subnet"){print $2;}}')
  neutron router-create demo-router
  routerdemoid=$(neutron router-list | awk '{if($4=="demo-router"){print $2;}}')
  exnetid=$(neutron net-list | awk '{if($4=="ext-net"){print $2;}}')
  neutron router-gateway-set $routerdemoid $exnetid
  neutron router-interface-add demo-router $subnetdemoid

  root@Linux:/tmp# neutron l3-agent-list-hosting-router 9182ecba-00ac-42ce-b0d1-801003bb1b14
  +--------------------------------------+-------------------------------------+----------------+-------+
  | id                                   | host                                | admin_state_up | alive |
  +--------------------------------------+-------------------------------------+----------------+-------+
  | 2d6505ef-e7a5-4672-a040-3a50f764a193 | controller-controller1-25ic4624koca | True           | :-)   |
  +--------------------------------------+-------------------------------------+----------------+-------+

  Then I boot an instance:
  nova boot --image cirros --flavor m1.tiny --key-name default --nic net-id=$netdemoid cirrosinstance

  root@Linux:/tmp# neutron l3-agent-list-hosting-router 9182ecba-00ac-42ce-b0d1-801003bb1b14
  +--------------------------------------+----------------------------------------+----------------+-------+
  | id                                   | host                                   | admin_state_up | alive |
  +--------------------------------------+----------------------------------------+----------------+-------+
  | 2d6505ef-e7a5-4672-a040-3a50f764a193 | controller-controller1-25ic4624koca    | True           | :-)   |
  | 339eb96b-9c9d-400a-a45f-e0f2ed6ebf2e | controller-controller0-twwogbnv        | True           | :-)   |
  | 6b1cc42d-f701-47ba-a0fc-661fb3bc22ae | novacompute2-novacompute2-b4jb7jtppnas | True           | :-)   |
  | fdffb433-b5f6-4d10-bdc4-2030a9800b2a | controller-controller2-7piq4lqrgtma    | True           | :-)   |
  +--------------------------------------+----------------------------------------+----------------+-------+

  We have the qrouter namespace bound to three controllers instead of just one.
  We have the qrouter namespace bound to novacompute2 because DVR is enabled and the
  instance is on this compute node.
  Looking at ip netns, the SNAT namespace is only on controller1.
  DHCP servers for demo-subnet are running on controller0 and controller2.

  We only need the qrouter namespace bound to one controller, not three controllers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411874/+subscriptions



[Yahoo-eng-team] [Bug 1341268] Re: mac_address port attribute is read-only

2014-07-13 Thread yong sheng gong
agree with garyk,  if we can create a new port with a specified mac, why
should we make the port updatable?

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341268

Title:
  mac_address port attribute is read-only

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Ironic uses Neutron for IPAM - IP address allocation and DHCP, same as
  nova with e.g. KVM or Xen. Unlike a virtual machine hypervisor,
  Ironic's real machines sometimes develop hardware faults, and parts
  need to be replaced - and when that happens the port MAC changes.
  Having the port's mac address be read-only makes this much more
  intrusive than it would otherwise be - we have to tear down and
  rebuild the whole machine, rather than just replace the card and tell
  Neutron the new MAC.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341268/+subscriptions



[Yahoo-eng-team] [Bug 1341244] Re: ovs_neutron_plugin doesn't implement multiple RPC workers

2014-07-13 Thread yong sheng gong
ovs_neutron_plugin is deprecated and will soon be removed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341244

Title:
  ovs_neutron_plugin doesn't implement multiple RPC workers

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  ovs_neutron_plugin doesn't implement multiple RPC workers, only ML2
  supports multiple RPC workers.

  Version - Icehouse with RHEL7
  All-In-One+ Compute node

  openstack-nova-cert-2014.1-7.el7ost.noarch
  openstack-neutron-openvswitch-2014.1-35.el7ost.noarch
  openstack-nova-compute-2014.1-7.el7ost.noarch
  openstack-neutron-2014.1-35.el7ost.noarch
  openstack-neutron-ml2-2014.1-35.el7ost.noarch

  Steps to reproduce:

  1. Set Multiple RPC/API within neutron.conf file to 8 (the amount of cores of 
the bare metal host)
  2. run: openstack service restart.
  3. In a deployment with the ML2 plugin, all 17 processes are spawned.
  4. In a deployment with the Openvswitch plugin, only 9 API processes are spawned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341244/+subscriptions



[Yahoo-eng-team] [Bug 1337912] [NEW] nova creates default sg even if neutron sg is configured

2014-07-04 Thread yong sheng gong
Public bug reported:

default security group in nova is created even if we are using neutron
sg.

** Affects: nova
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337912

Title:
  nova creates default sg even if neutron sg is configured

Status in OpenStack Compute (Nova):
  New

Bug description:
  default security group in nova is created even if we are using neutron
  sg.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337912/+subscriptions



[Yahoo-eng-team] [Bug 1334926] [NEW] floatingip still working once connected even after it is disassociated

2014-06-26 Thread yong sheng gong
Public bug reported:

After we create an SSH connection to a VM via its floating IP, we can still
access the VM over that connection even after we have removed the floating IP
association. Namely, SSH is not disconnected when the floating IP is no
longer valid.
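The session survives because the established connection's NAT state lives in conntrack. One plausible fix (a sketch of an approach, not what the l3 agent actually does here) is to flush the matching conntrack entries when the floating IP is disassociated, using the conntrack-tools deletion command:

```python
# Sketch: build the conntrack-tools command the l3 agent could run after a
# floating IP is disassociated, so established NAT sessions are dropped.
# `conntrack -D -d <ip>` deletes conntrack entries matching that destination.
def build_conntrack_flush_cmd(floating_ip):
    """Return the command to flush conntrack entries for a floating IP."""
    return ['conntrack', '-D', '-d', floating_ip]
```

The agent would execute this (via its root helper) right after removing the DNAT rules for the address.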

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334926

Title:
  floatingip still working once connected even after it is disassociated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  After we create an SSH connection to a VM via its floating IP, we can
  still access the VM over that connection even after we have removed the
  floating IP association. Namely, SSH is not disconnected when the
  floating IP is no longer valid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1334926/+subscriptions



[Yahoo-eng-team] [Bug 1321871] Re: Improve help strings for python-neutronclient

2014-05-22 Thread yong sheng gong
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1321871

Title:
  Improve help strings for python-neutronclient

Status in Python client library for Neutron:
  New

Bug description:
  Improve the help strings so that they follow the rules for help
  strings (see
  http://docs.openstack.org/developer/oslo.config/styleguide.html ) and
  improve content in a consistent way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1321871/+subscriptions



[Yahoo-eng-team] [Bug 1301262] [NEW] icmp type and code are not used at all in SG feature

2014-04-02 Thread yong sheng gong
Public bug reported:

I created a sg rule which is like:
neutron security-group-rule-create 39f7bff9-4a55-4813-be3d-1d89f8c5a95b 
--protocol icmp --direction ingress --ethertype ipv4 --port-range-min -1 
--port-range-max 4

and when it is converted into iptables rule:
it is just like:
-A runpy.py-ib551e32d-4 -m state --state INVALID -j DROP
-A runpy.py-ib551e32d-4 -m state --state RELATED,ESTABLISHED -j RETURN
-A runpy.py-ib551e32d-4 -p icmp -j RETURN

It is obvious, the type and code of icmp is not used at all.
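A minimal sketch (hypothetical helper, not the actual iptables firewall driver code) of how the driver could carry the SG rule's port-range fields through to an iptables `--icmp-type` match instead of dropping them:

```python
# Sketch: translate a security-group rule's port_range_min/max (which the
# SG API reuses as ICMP type/code for protocol icmp) into iptables args.
def build_icmp_rule_args(direction_chain, rule):
    """Return iptables arguments for an ICMP security-group rule."""
    args = ['-A', direction_chain, '-p', 'icmp']
    icmp_type = rule.get('port_range_min')  # SG API reuses the port fields
    icmp_code = rule.get('port_range_max')  # for ICMP type/code
    if icmp_type is not None and icmp_type != -1:
        match = str(icmp_type)
        if icmp_code is not None:
            match += '/%s' % icmp_code
        # iptables accepts "type" or "type/code" for --icmp-type
        args += ['-m', 'icmp', '--icmp-type', match]
    args += ['-j', 'RETURN']
    return args
```

With type 8, code 0 this yields a rule matching only echo requests, rather than the bare `-p icmp -j RETURN` shown above.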

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  I created a sg rule which is like:
  neutron security-group-rule-create 39f7bff9-4a55-4813-be3d-1d89f8c5a95b 
--protocol icmp --direction ingress --ethertype ipv4 --port-range-min -1 
--port-range-max 4
  
  and when it is converted into iptables rule:
  it is just like:
  -A runpy.py-ib551e32d-4 -m state --state INVALID -j DROP
  -A runpy.py-ib551e32d-4 -m state --state RELATED,ESTABLISHED -j RETURN
  -A runpy.py-ib551e32d-4 -p icmp -j RETURN
  
- It is obviously, the type and code of icmp is not used at all.
+ It is obvious, the type and code of icmp is not used at all.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301262

Title:
  icmp type and code are not used at all  in SG feature

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I created a sg rule which is like:
  neutron security-group-rule-create 39f7bff9-4a55-4813-be3d-1d89f8c5a95b 
--protocol icmp --direction ingress --ethertype ipv4 --port-range-min -1 
--port-range-max 4

  and when it is converted into iptables rule:
  it is just like:
  -A runpy.py-ib551e32d-4 -m state --state INVALID -j DROP
  -A runpy.py-ib551e32d-4 -m state --state RELATED,ESTABLISHED -j RETURN
  -A runpy.py-ib551e32d-4 -p icmp -j RETURN

  It is obvious, the type and code of icmp is not used at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301262/+subscriptions



[Yahoo-eng-team] [Bug 1294500] [NEW] try_to_bind_segment_for_agent should return false or true

2014-03-19 Thread yong sheng gong
Public bug reported:

according to the comment of the function at
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_agent.py#L99
it should return True or False

** Affects: neutron
 Importance: Medium
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294500

Title:
  try_to_bind_segment_for_agent should return false or true

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  according to the comment of the function at
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_agent.py#L99
  it should return True or False

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294500/+subscriptions



[Yahoo-eng-team] [Bug 1294526] [NEW] floatingip's id should be used instead of floatingip itself

2014-03-19 Thread yong sheng gong
Public bug reported:

https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L476

this ID should be used as the hash key instead of the floating IP itself

** Affects: neutron
 Importance: Medium
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294526

Title:
  floatingip's id should be used instead of floatingip itself

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L476

  this ID should be used as the hash key instead of the floating IP itself

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294526/+subscriptions



[Yahoo-eng-team] [Bug 1289195] Re: Duplicate security group name cause fail to start instance

2014-03-11 Thread yong sheng gong
If using the ID covers this case, I think the bug is invalid!

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289195

Title:
  Duplicate security group name cause fail to start instance

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  When creating a security group, duplicate names are allowed.
  When creating an instance, a duplicate security group name causes an exception and 
the instance fails to start. So duplicate security group names should not be allowed.

  In nova.network.neutronv2.API:allocate_for_instance
  for security_group in security_groups:
  name_match = None
  uuid_match = None
  for user_security_group in user_security_groups:
  if user_security_group['name'] == security_group: # if have 
duplicate sg name, the name_match will not be None for the second matching.
  if name_match:
  raise exception.NoUniqueMatch(
  _(Multiple security groups found matching
      '%s'. Use an ID to be more specific.) %
  security_group)

  name_match = user_security_group['id']
  if user_security_group['id'] == security_group:
  uuid_match = user_security_group['id']
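The matching logic quoted above reduces to a small standalone sketch (simplified names; `NoUniqueMatch` stands in for nova's exception class), which shows why a duplicate name only fails when the name, rather than the ID, is passed:

```python
class NoUniqueMatch(Exception):
    """Stand-in for nova's NoUniqueMatch exception (sketch only)."""


def resolve_security_group(requested, groups):
    """Resolve a requested SG (name or ID) to a single ID.

    Mirrors the name/uuid matching in allocate_for_instance: a second
    name match raises, while an ID match is always unambiguous.
    """
    name_match = uuid_match = None
    for group in groups:
        if group['name'] == requested:
            if name_match:  # second group with the same name
                raise NoUniqueMatch(
                    "Multiple security groups found matching '%s'. "
                    "Use an ID to be more specific." % requested)
            name_match = group['id']
        if group['id'] == requested:
            uuid_match = group['id']
    return uuid_match or name_match
```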

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289195/+subscriptions



[Yahoo-eng-team] [Bug 1279739] [NEW] nova.cmd.rpc_zmq_receiver:main is missing

2014-02-13 Thread yong sheng gong
Public bug reported:

I am trying to run devstack with zeromq, but zeromq failed to start.

al/bin/nova-rpc-zmq-receiver  echo $! /opt/stack/status/stack/zeromq.pid; fg 
|| echo zeromq failed to start | tee /opt/stack/status/stack/zeromq.failure
[1] 25102
cd /opt/stack/nova  /usr/local/bin/nova-rpc-zmq-receiver
Traceback (most recent call last):
  File /usr/local/bin/nova-rpc-zmq-receiver, line 6, in module
from nova.cmd.rpc_zmq_receiver import main
ImportError: No module named rpc_zmq_receiver
zeromq failed to start


I found at https://github.com/openstack/nova/blob/master/setup.cfg:
nova-rpc-zmq-receiver = nova.cmd.rpc_zmq_receiver:main

but at https://github.com/openstack/nova/tree/master/nova/cmd:
we have no rpc_zmq_receiver module at all.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279739

Title:
  nova.cmd.rpc_zmq_receiver:main is missing

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am trying to run devstack with zeromq, but zeromq failed to start.

  al/bin/nova-rpc-zmq-receiver  echo $! /opt/stack/status/stack/zeromq.pid; 
fg || echo zeromq failed to start | tee 
/opt/stack/status/stack/zeromq.failure
  [1] 25102
  cd /opt/stack/nova  /usr/local/bin/nova-rpc-zmq-receiver
  Traceback (most recent call last):
File /usr/local/bin/nova-rpc-zmq-receiver, line 6, in module
  from nova.cmd.rpc_zmq_receiver import main
  ImportError: No module named rpc_zmq_receiver
  zeromq failed to start

  
  I found at https://github.com/openstack/nova/blob/master/setup.cfg:
  nova-rpc-zmq-receiver = nova.cmd.rpc_zmq_receiver:main

  but at https://github.com/openstack/nova/tree/master/nova/cmd:
  we have no rpc_zmq_receiver module at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279739/+subscriptions



[Yahoo-eng-team] [Bug 1279644] [NEW] unused imports in neutron/agent/linux/async_process.py

2014-02-12 Thread yong sheng gong
Public bug reported:

import eventlet  - unused
import eventlet.event
import eventlet.queue
import eventlet.timeout - unused

** Affects: neutron
 Importance: Low
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279644

Title:
  unused imports in neutron/agent/linux/async_process.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  import eventlet  - unused
  import eventlet.event
  import eventlet.queue
  import eventlet.timeout - unused

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279644/+subscriptions



[Yahoo-eng-team] [Bug 1267310] [NEW] port-list should not list the dhcp ports for normal user

2014-01-08 Thread yong sheng gong
Public bug reported:

With a non-admin user, I can list the DHCP ports, and if I try to update
the fixed IPs of these DHCP ports, the change is not reflected in the DHCP
agent at all, i.e. the NIC device's IP in the DHCP namespace does not change.

So I think we should not allow a normal user to view the DHCP ports in the
first place.
[root@controller ~]# neutron port-list
+--+--+---+--+
| id   | name | mac_address   | fixed_ips   
 |
+--+--+---+--+
| 1a5a2236-9b66-4b6d-953d-664fad6be3bb |  | fa:16:3e:cf:52:b3 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.3} 
 |
| 381e244e-4012-4a49-83d3-f252fa4e41a1 |  | fa:16:3e:cf:94:bd | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.7} 
 |
| 3bba05d3-10ec-49f1-9335-1103f791584b |  | fa:16:3e:fe:aa:6f | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.6} 
 |
| 939d5696-0780-40c6-a626-a9a9df933553 |  | fa:16:3e:c7:5b:73 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.4} 
 |
| ad89d303-9e8c-43bb-a029-b341340a92bb |  | fa:16:3e:21:6d:98 | 
{subnet_id: c8e59b09-60d3-4996-8692-02334ee0e658, ip_address: 
192.168.230.3} |
| cb350109-39d3-444c-bc33-538c22415171 |  | fa:16:3e:f4:d3:e8 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.5} 
 |
| d1e79c7c-d500-475f-8e21-2c1958f0a136 |  | fa:16:3e:2d:c7:a1 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.1} 
 |
| ddc076f6-16aa-4f12-9745-2ac27dd5a38a |  | fa:16:3e:e0:04:44 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.8} 
 |
| f2a4df5c-e719-46cc-9bdb-bf9771a2c205 |  | fa:16:3e:01:73:5e | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.2} 
 |
+--+--+---+--+
[root@controller ~]# neutron port-show 1a5a2236-9b66-4b6d-953d-664fad6be3bb
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| device_id | 
dhcpd3377d3c-a0d1-5d71-9947-f17125c357bb-20f45603-b76a-4a89-9674-0127e39fc895   
|
| device_owner  | network:dhcp  
  |
| extra_dhcp_opts   |   
  |
| fixed_ips | {subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, 
ip_address: 10.0.1.3} |
| id| 1a5a2236-9b66-4b6d-953d-664fad6be3bb  
  |
| mac_address   | fa:16:3e:cf:52:b3 
  |
| name  |   
  |
| network_id| 20f45603-b76a-4a89-9674-0127e39fc895  
  |
| security_groups   |   
  |
| status| ACTIVE
  |
| tenant_id | c8a625a4c71b401681e25e3ad294b255  
  |
+---+-+
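A minimal sketch of the requested behaviour (hypothetical helper, not actual neutron code): filter infrastructure ports out of listings for non-admin contexts, keyed on the `network:dhcp` device_owner shown in the port-show output above:

```python
def visible_ports(ports, context_is_admin):
    """Hide infrastructure (DHCP) ports from non-admin port listings.

    `ports` is a list of port dicts as returned by the plugin; admins
    still see everything.
    """
    if context_is_admin:
        return ports
    return [p for p in ports if p.get('device_owner') != 'network:dhcp']
```

The plugin (or policy layer) would apply this filter in get_ports before returning results to a tenant context.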

** Affects: neutron
 Importance: High
 Assignee: yong sheng gong (gongysh)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267310

Title:
  port-list should not list the dhcp ports for normal user

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With a non-admin user, I can list the DHCP ports, and if I try to update
  the fixed IPs of these DHCP ports, the change is not reflected in the
  DHCP agent at all, i.e. the NIC device's IP in the DHCP namespace does
  not change.

  So I think we should not allow a normal user to view the DHCP ports in
  the first place.
  [root@controller ~]# neutron port

[Yahoo-eng-team] [Bug 1263551] Re: neutron port-list -f csv outputs poorly formatted JSON strings.

2013-12-22 Thread yong sheng gong
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263551

Title:
  neutron port-list -f csv outputs poorly formatted JSON strings.

Status in Python client library for Neutron:
  Confirmed

Bug description:
  I have two IPs attached to a port. One IPv4 address and one IPv6
  address:

  $ neutron port-list -f table | grep '..relevant lines..'
  | f59ec695-d52d-40b0-9d7c-1d8ebe315305 || 
fa:16:3e:7d:99:3a | {subnet_id: 1bb2eda5-5860-4864-84a3-4e31e3a0f130, 
ip_address: 192.168.2.102}   |
  |  || 
  | {subnet_id: 81aa0d62-4b8d-4494-98c8-50b434b067d8, 
ip_address: 2607:f088:0:2::1338} |

  $ neutron port-list -f csv | grep '..relevant lines..'
  
f59ec695-d52d-40b0-9d7c-1d8ebe315305,,fa:16:3e:7d:99:3a,{subnet_id: 
1bb2eda5-5860-4864-84a3-4e31e3a0f130, ip_address: 192.168.2.102}
  {subnet_id: 81aa0d62-4b8d-4494-98c8-50b434b067d8, ip_address: 
2607:f088:0:2::1338}

  Running this through a CSV filter:

  $VAR1 = '10fd0850-7799-4ac7-ae54-09ecb3ac8b8f';
  $VAR2 = '';
  $VAR3 = 'fa:16:3e:37:ac:cc';
  $VAR4 = '{subnet_id: 1bb2eda5-5860-4864-84a3-4e31e3a0f130, ip_address: 
192.168.2.104}
  {subnet_id: 81aa0d62-4b8d-4494-98c8-50b434b067d8, ip_address: 
2607:f088:0:2::133a}';

  Finally, attempting to parse the JSON string ($VAR4) in perl:

  garbage after JSON object, at character offset 85 (before
  {subnet_id: 81aa0...) at ./neutron_ports line 48,
  $NEUTRON_PORTS line 15.

  Indeed, this isn't perl's fault. Putting the string through
  http://jsonlint.com/ comes up with a similar error.

  The two strings need to be contained in a larger structure: [{...},{...}]
  instead of just concatenated together: {...}{...}

  Or the output specification needs to be changed.
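Until the output is fixed, the concatenated objects can be recovered client-side. A sketch using `json.JSONDecoder.raw_decode` to split the `{...}{...}` cell back into a list of objects:

```python
import json


def parse_concatenated_json(text):
    """Parse back-to-back JSON objects ({...}{...}, possibly separated by
    whitespace/newlines) into a list, working around the malformed CSV cell."""
    decoder = json.JSONDecoder()
    objs, idx, n = [], 0, len(text)
    while idx < n:
        # skip whitespace between objects
        while idx < n and text[idx].isspace():
            idx += 1
        if idx >= n:
            break
        obj, end = decoder.raw_decode(text, idx)
        objs.append(obj)
        idx = end
    return objs
```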

  Package/version information:
  # dpkg -l | awk '/neutron/ {print $3   $2}'
  1:2013.2-0ubuntu1 neutron-common
  1:2013.2-0ubuntu1 neutron-dhcp-agent
  1:2013.2-0ubuntu1 neutron-metadata-agent
  1:2013.2-0ubuntu1 neutron-plugin-linuxbridge
  1:2013.2-0ubuntu1 neutron-plugin-linuxbridge-agent
  1:2013.2-0ubuntu1 neutron-plugin-openvswitch
  1:2013.2-0ubuntu1 neutron-server
  1:2013.2-0ubuntu1 python-neutron
  1:2.3.0-0ubuntu1 python-neutronclient

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1263551/+subscriptions



[Yahoo-eng-team] [Bug 1259847] [NEW] setup_tunnel_br has many add-flow() called, which can be merged into one call to ovs-ofctl

2013-12-11 Thread yong sheng gong
Public bug reported:

each add-flow will lead to one ovs-ofctl call, use
self.tun_br.defer_apply_on() can improve it.
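The batching idea can be illustrated with a toy bridge (hypothetical class; the method names mirror the report's `defer_apply_on`, and each `ofctl_calls` increment stands for one `ovs-ofctl` execution):

```python
class DeferringBridge(object):
    """Toy model of deferred flow application: queue add_flow calls while
    deferred and flush them in a single batched ovs-ofctl invocation."""

    def __init__(self):
        self.deferred = False
        self.pending = []
        self.ofctl_calls = 0  # each count stands for one ovs-ofctl exec

    def defer_apply_on(self):
        self.deferred = True

    def add_flow(self, **flow):
        if self.deferred:
            self.pending.append(flow)   # queue instead of executing
        else:
            self.ofctl_calls += 1       # immediate: one exec per flow

    def defer_apply_off(self):
        if self.pending:
            self.ofctl_calls += 1       # one batched ovs-ofctl call
            self.pending = []
        self.deferred = False
```

With deferral on, N `add_flow` calls in `setup_tunnel_br` collapse into a single external command instead of N.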

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1259847

Title:
  setup_tunnel_br has many add-flow() called, which can be merged into
  one call to ovs-ofctl

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  each add-flow will lead to one ovs-ofctl call, use
  self.tun_br.defer_apply_on() can improve it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1259847/+subscriptions



[Yahoo-eng-team] [Bug 1259937] [NEW] RouterDNatDisabled is not used at all

2013-12-11 Thread yong sheng gong
Public bug reported:

The class RouterDNatDisabled does not seem to be used at all:
https://github.com/openstack/neutron/blob/master/neutron/extensions/l3_ext_gw_mode.py#L27

** Affects: neutron
 Importance: Low
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1259937

Title:
  RouterDNatDisabled is not used at all

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The class RouterDNatDisabled does not seem to be used at all:
  
https://github.com/openstack/neutron/blob/master/neutron/extensions/l3_ext_gw_mode.py#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1259937/+subscriptions



[Yahoo-eng-team] [Bug 1260185] [NEW] synchronize ovs agent's rpc handler and main thread

2013-12-11 Thread yong sheng gong
Public bug reported:

The various RPC message handlers and the main thread of the OVS agent can 
interfere with each other.
So we need a synchronizer to make sure only one of them runs at a time.

** Affects: neutron
 Importance: High
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260185

Title:
  synchronize ovs agent's rpc handler and main thread

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The various RPC message handlers and the main thread of the OVS agent can 
interfere with each other.
  So we need a synchronizer to make sure only one of them runs at a time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260185/+subscriptions



[Yahoo-eng-team] [Bug 1259088] [NEW] setup_rpc should be the last thing in __init__ method

2013-12-09 Thread yong sheng gong
Public bug reported:

if setup_rpc is called too early, the dispatcher may deliver an RPC message to an 
unready agent. Take the ovs plugin agent for instance:
after setup_rpc is called, much of the initialization work still remains to be 
done. If a message arrives during this time, the instance will not be fully 
initialized:

def __init__(self, integ_br, tun_br, local_ip,
 bridge_mappings, root_helper,
 polling_interval, tunnel_types=None,
 veth_mtu=None, l2_population=False,
 minimize_polling=False,
 ovsdb_monitor_respawn_interval=(
 constants.DEFAULT_OVSDBMON_RESPAWN)):
'''Constructor.

:param integ_br: name of the integration bridge.
:param tun_br: name of the tunnel bridge.
:param local_ip: local IP address of this hypervisor.
:param bridge_mappings: mappings from physical network name to bridge.
:param root_helper: utility to use when running shell cmds.
:param polling_interval: interval (secs) to poll DB.
:param tunnel_types: A list of tunnel types to enable support for in
   the agent. If set, will automatically set enable_tunneling to
   True.
:param veth_mtu: MTU size for veth interfaces.
:param minimize_polling: Optional, whether to minimize polling by
   monitoring ovsdb for interface changes.
:param ovsdb_monitor_respawn_interval: Optional, when using polling
   minimization, the number of seconds to wait before respawning
   the ovsdb monitor.
'''
self.veth_mtu = veth_mtu
self.root_helper = root_helper
self.available_local_vlans = set(xrange(q_const.MIN_VLAN_TAG,
q_const.MAX_VLAN_TAG))
self.tunnel_types = tunnel_types or []
self.l2_pop = l2_population
self.agent_state = {
'binary': 'neutron-openvswitch-agent',
'host': cfg.CONF.host,
'topic': q_const.L2_AGENT_TOPIC,
'configurations': {'bridge_mappings': bridge_mappings,
   'tunnel_types': self.tunnel_types,
   'tunneling_ip': local_ip,
   'l2_population': self.l2_pop},
'agent_type': q_const.AGENT_TYPE_OVS,
'start_flag': True}

# Keep track of int_br's device count for use by _report_state()
self.int_br_device_count = 0

self.int_br = ovs_lib.OVSBridge(integ_br, self.root_helper)
self.setup_rpc()
self.setup_integration_br()
self.setup_physical_bridges(bridge_mappings)
self.local_vlan_map = {}
self.tun_br_ofports = {constants.TYPE_GRE: {},
   constants.TYPE_VXLAN: {}}

self.polling_interval = polling_interval
self.minimize_polling = minimize_polling
self.ovsdb_monitor_respawn_interval = ovsdb_monitor_respawn_interval

if tunnel_types:
self.enable_tunneling = True
else:
self.enable_tunneling = False
self.local_ip = local_ip
self.tunnel_count = 0
self.vxlan_udp_port = cfg.CONF.AGENT.vxlan_udp_port
self._check_ovs_version()
if self.enable_tunneling:
self.setup_tunnel_br(tun_br)
# Collect additional bridges to monitor
self.ancillary_brs = self.setup_ancillary_bridges(integ_br, tun_br)

# Security group agent supprot
self.sg_agent = OVSSecurityGroupAgent(self.context,
  self.plugin_rpc,
  root_helper)
# Initialize iteration counter
self.iter_num = 0
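The proposed ordering can be shown with a toy constructor (illustrative only, not the real agent): initialize all state first and call setup_rpc() as the final step, so no message can reach a half-built agent:

```python
class Agent(object):
    """Toy agent demonstrating the safe __init__ ordering."""

    def __init__(self):
        self.events = []          # records initialization order
        self._init_bridges()      # all bridges and state set up first...
        self._init_state()
        self.setup_rpc()          # ...and RPC wiring strictly last

    def _init_bridges(self):
        self.events.append('bridges')

    def _init_state(self):
        self.events.append('state')

    def setup_rpc(self):
        # Once this runs, incoming RPC messages can be dispatched; every
        # attribute they might touch already exists.
        self.events.append('rpc')
```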

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1259088

Title:
  setup_rpc should be the last thing in __init__ method

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  if setup_rpc is called too early, the dispatcher may deliver an RPC message to an 
unready agent. Take the ovs plugin agent for instance:
  after setup_rpc is called, much of the initialization work still remains to 
be done. If a message arrives during this time, the instance will not be fully 
initialized:

  def __init__(self, integ_br, tun_br, local_ip,
   bridge_mappings, root_helper,
   polling_interval, tunnel_types=None,
   veth_mtu=None, l2_population=False,
   minimize_polling=False,
   ovsdb_monitor_respawn_interval=(
   constants.DEFAULT_OVSDBMON_RESPAWN)):
  '''Constructor.

  :param integ_br: name of the integration

[Yahoo-eng-team] [Bug 1259431] [NEW] plugin variable name should be agent

2013-12-09 Thread yong sheng gong
Public bug reported:

plugin = LinuxBridgeNeutronAgentRPC(interface_mappings,      <-- agent =
                                    polling_interval,
                                    root_helper)
LOG.info(_("Agent initialized successfully, now running... "))
plugin.daemon_loop()                                         <-- agent.daemon_loop()

** Affects: neutron
 Importance: Low
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Milestone: None => icehouse-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1259431

Title:
  plugin variable name should be agent

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  plugin = LinuxBridgeNeutronAgentRPC(interface_mappings,      <-- agent =
                                      polling_interval,
                                      root_helper)
  LOG.info(_("Agent initialized successfully, now running... "))
  plugin.daemon_loop()                                         <-- agent.daemon_loop()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1259431/+subscriptions



[Yahoo-eng-team] [Bug 1258379] Re: vpnservice's router must have gateway interface set

2013-12-08 Thread yong sheng gong
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258379

Title:
  vpnservice's router must have gateway interface set

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Tempest:
  New

Bug description:
  at line
  
https://github.com/openstack/neutron/blob/master/neutron/services/vpn/service_drivers/ipsec.py#L172

  it is obvious that the router must have its gateway interface set before
  it can be used as the vpnservice router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258379/+subscriptions



[Yahoo-eng-team] [Bug 1258369] [NEW] refactor showcommand and RetrievePoolStats

2013-12-05 Thread yong sheng gong
Public bug reported:

RetrievePoolStats is copying code because the ID query and the returned data
use different resource names.

def get_data(self, parsed_args):
self.log.debug('run(%s)' % parsed_args)
neutron_client = self.get_client()
neutron_client.format = parsed_args.request_format
pool_id = neutronV20.find_resourceid_by_name_or_id(
self.get_client(), 'pool', parsed_args.id)
params = {}
if parsed_args.fields:
params = {'fields': parsed_args.fields}

data = neutron_client.retrieve_pool_stats(pool_id, **params)
self.format_output_data(data)
stats = data['stats']
if 'stats' in data:
return zip(*sorted(stats.iteritems()))
else:
return None
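A sketch of the refactor (hypothetical helper; the `retrieve_%s_stats` naming is assumed to follow the client's existing convention): parameterize the resource name and report key so per-resource subclasses stop duplicating get_data(). It also avoids reading data['stats'] before checking it exists, as the quoted code does:

```python
def retrieve_stats(client, resource, obj_id, report_key='stats', **params):
    """Generalized get_data() body for stats-style show commands.

    `client` is any object exposing a retrieve_<resource>_stats method;
    `report_key` names the key of the returned report, so resources whose
    lookup name differs from their report key need no copied code.
    """
    method = getattr(client, 'retrieve_%s_stats' % resource)
    data = method(obj_id, **params)
    stats = data.get(report_key)
    if stats is None:
        return None
    # Same shape as the original: (sorted keys, matching values)
    return list(zip(*sorted(stats.items())))
```

A RetrievePoolStats subclass would then reduce to `retrieve_stats(self.get_client(), 'pool', pool_id, **params)`.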

** Affects: python-neutronclient
 Importance: Medium
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
 Assignee: (unassigned) => yong sheng gong (gongysh)

** Changed in: python-neutronclient
   Importance: Undecided => Medium

** Changed in: python-neutronclient
 Milestone: None => 2.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258369

Title:
  refactor showcommand and RetrievePoolStats

Status in Python client library for Neutron:
  In Progress

Bug description:
  RetrievePoolStats is copying code because the ID query and the returned
  data use different resource names.

  def get_data(self, parsed_args):
  self.log.debug('run(%s)' % parsed_args)
  neutron_client = self.get_client()
  neutron_client.format = parsed_args.request_format
  pool_id = neutronV20.find_resourceid_by_name_or_id(
  self.get_client(), 'pool', parsed_args.id)
  params = {}
  if parsed_args.fields:
  params = {'fields': parsed_args.fields}

  data = neutron_client.retrieve_pool_stats(pool_id, **params)
  self.format_output_data(data)
  stats = data['stats']
  if 'stats' in data:
  return zip(*sorted(stats.iteritems()))
  else:
  return None

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1258369/+subscriptions



[Yahoo-eng-team] [Bug 1258375] [NEW] only one subnet_id is allowed behind a router for vpnservice object

2013-12-05 Thread yong sheng gong
Public bug reported:

I think we should allow more than one subnet_id in one vpnservice object,
but the model below limits it to a single subnet_id.
https://github.com/openstack/neutron/blob/master/neutron/extensions/vpnaas.py
RESOURCE_ATTRIBUTE_MAP = {

'vpnservices': {
'id': {'allow_post': False, 'allow_put': False,
   'validate': {'type:uuid': None},
   'is_visible': True,
   'primary_key': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
  'validate': {'type:string': None},
  'required_by_policy': True,
  'is_visible': True},
'name': {'allow_post': True, 'allow_put': True,
 'validate': {'type:string': None},
 'is_visible': True, 'default': ''},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'subnet_id': {'allow_post': True, 'allow_put': False,
  'validate': {'type:uuid': None},
  'is_visible': True},
'router_id': {'allow_post': True, 'allow_put': False,
  'validate': {'type:uuid': None},
  'is_visible': True},
'admin_state_up': {'allow_post': True, 'allow_put': True,
   'default': True,
   'convert_to': attr.convert_to_boolean,
   'is_visible': True},
'status': {'allow_post': False, 'allow_put': False,
   'is_visible': True}
},

With such a limit, I don't think there is a way to allow other subnets
behind the router to be VPN exposed!

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258375

Title:
  only one subnet_id is allowed behind a router for vpnservice object

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I think we should allow more than one subnet_id in one vpnservice object,
  but the model below limits it to a single subnet_id.
  https://github.com/openstack/neutron/blob/master/neutron/extensions/vpnaas.py
  RESOURCE_ATTRIBUTE_MAP = {

  'vpnservices': {
  'id': {'allow_post': False, 'allow_put': False,
 'validate': {'type:uuid': None},
 'is_visible': True,
 'primary_key': True},
  'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
  'name': {'allow_post': True, 'allow_put': True,
   'validate': {'type:string': None},
   'is_visible': True, 'default': ''},
  'description': {'allow_post': True, 'allow_put': True,
  'validate': {'type:string': None},
  'is_visible': True, 'default': ''},
  'subnet_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
  'router_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
  'admin_state_up': {'allow_post': True, 'allow_put': True,
 'default': True,
 'convert_to': attr.convert_to_boolean,
 'is_visible': True},
  'status': {'allow_post': False, 'allow_put': False,
 'is_visible': True}
  },

  With such a limit, I don't think there is a way to allow other subnets
  behind the router to be VPN exposed!
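One way the model could lift this limit, sketched only as an illustration (the 'subnet_ids' attribute and the convert_to_list helper below are hypothetical, not actual Neutron code), is to accept a list of subnet IDs instead of a single one:

```python
# Hypothetical sketch: replace the single 'subnet_id' attribute with a
# list-valued 'subnet_ids' attribute in the vpnservices resource map.
def convert_to_list(value):
    # Accept either a single ID or a list of IDs.
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]


vpnservice_subnets_attr = {
    'subnet_ids': {'allow_post': True, 'allow_put': True,
                   'convert_to': convert_to_list,
                   'validate': {'type:uuid_list': None},
                   'is_visible': True},
}

# A single subnet still works, but several subnets behind the router
# could now be exposed by the same vpnservice.
single = convert_to_list('61897e3b-ea17-4848-a2e0-e0847cef4b2e')
several = convert_to_list(['a', 'b'])
```

The convert_to hook normalizes the API input before validation, so existing single-subnet callers would keep working.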

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258379] [NEW] vpnservice's router must have gateway interface set

2013-12-05 Thread yong sheng gong
Public bug reported:

at line
https://github.com/openstack/neutron/blob/master/neutron/services/vpn/service_drivers/ipsec.py#L172

it is obvious the router must have a gateway interface set before it can be
used as a vpnservice router.

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258379

Title:
  vpnservice's router must have gateway interface set

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  at line
  
https://github.com/openstack/neutron/blob/master/neutron/services/vpn/service_drivers/ipsec.py#L172

  it is obvious the router must have a gateway interface set before it can
  be used as a vpnservice router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254929] Re: ML2 plugin: raise SystemExit instead of sys.exit

2013-11-25 Thread yong sheng gong
sys.exit() already raises SystemExit.

** Changed in: neutron
Milestone: icehouse-1 => None

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254929

Title:
  ML2 plugin: raise SystemExit instead of sys.exit

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  In the following modules:
  ./neutron/plugins/ml2/managers.py:70:sys.exit(1)
  ./neutron/plugins/ml2/drivers/type_vlan.py:93:sys.exit(1)

  
  raise SystemExit should be used instead.
  The current code, if hit by unit tests, will end up killing the tester
thread without producing any output.
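The closing comment ("sys.exit() raises SystemExit") is easy to verify with a minimal, self-contained Python check (the helper name below is illustrative):

```python
import sys


def exit_code_of(fn):
    """Run fn and return the SystemExit code it raises, if any."""
    try:
        fn()
    except SystemExit as exc:
        return exc.code
    return None


def raise_system_exit():
    raise SystemExit(1)


# sys.exit(1) is just sugar for raising SystemExit(1), so code that
# catches SystemExit behaves identically either way.
via_sys_exit = exit_code_of(lambda: sys.exit(1))
via_raise = exit_code_of(raise_system_exit)
```

This is why the report was closed as invalid: a test runner that catches SystemExit covers both spellings.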

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244913] Re: L2 population failed to setup tunnel

2013-11-18 Thread yong sheng gong
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244913

Title:
  L2 population failed to setup tunnel

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I found that the tunnel port is not created when I enabled l2
  population.

  I created a two nodes deployment, and If I use 
   [agent]
  tunnel_types = gre
  root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
  l2_population = False

  on both nodes, the tunnel will be created like before,
  but if I use 
   [agent]
  tunnel_types = gre
  root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
  l2_population = True

  there is no tunnel created at all, so my VM on compute node 2 cannot
  get its IP from either the compute node or the DHCP node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1220505] Re: IP will be allocated automatically even if it is a floating IP

2013-09-04 Thread yong sheng gong
You have to specify the networks on which to boot the VM. Whether the
external network should be used or not is very difficult to decide. In the
H release, I think nova boot will fail if multiple networks are available.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1220505

Title:
  IP will be allocated automatically even if it is a floating IP

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I'm working under Centos 6.4 + Grizzly.

  I have created two networks, one for instances private network, and
  another one for public network (for floating IP ).

  Everything works fine. But, if I create an instance without pointing out the
private network ID, such as:
  nova boot --flavor m1.tiny --image
c4302a6f-196d-4d3e-be64-c9413e8d1f71 test1

  The instance will be started with both networks:
  | d99fd089-5afe-4397-b51b-767485b43383 | test1  | ACTIVE  | 
public=192.168.14.29; private=10.1.0.243  |

  The network works fine, but I don't want the instance to have the public
  IP.

  And, since I already assigned this public network to a router, it is clear
that it is not an auto-assigned IP.
  Also, if it can be auto-assigned to an instance, it should be a floating IP,
but that is not what happens now.

  Any ideas?

  Thanks.
  -chen

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1220505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1050540] Re: neutron-server requires plugin config at the command line

2013-07-24 Thread yong sheng gong
In fact, the packager can merge the plugin's config and neutron.conf
into one file.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1050540

Title:
  neutron-server requires plugin config at the command line

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in “quantum” package in Ubuntu:
  Confirmed

Bug description:
  Currently, quantum-server apparently requires plugin config paths to be
  passed to the quantum-server binary at launch, along with the path to
  the standard quantum.conf.   This creates an issue for packagers who
  wish to keep quantum-server decoupled from specific plugins.  System
  init scripts need to either:

  - use a specific plugin as a default, and set a dependency between 
quantum-server and that plugin.
  - use mechanisms outside of quantum's configuration for specifying which 
plugin config file is to be used. (symlinks, variables from 
/etc/default/quantum)

  It would be useful if the path to the plugin config(s) to be loaded
  were contained in quantum-server.conf, similar to api_paste_config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1050540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188021] Re: run_test.sh does not work after we use pbr

2013-07-07 Thread yong sheng gong
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1188021

Title:
  run_test.sh does not work after we use pbr

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  /git/quantum$ ./run_tests.sh 
  No virtual environment found...create one? (Y/n) n
  Running ` python setup.py testr --slowest --testr-args='--subunit  '`
  Traceback (most recent call last):
    File "setup.py", line 18, in <module>
      from quantum.openstack.common import setup
  ImportError: cannot import name setup

  nova has the same problem.

  I think it is time to remove the run_tests.sh.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1188021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188845] Re: Valid IP addresses cannot be determined if a 'non-root' CIDR is used

2013-06-08 Thread yong sheng gong
I personally don't agree with introducing 10.0.0.254/24 as a valid subnet
CIDR, but we can enhance the subnet CIDR validation so that users cannot
input 10.0.0.254/24 as a subnet CIDR.

** Changed in: quantum
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1188845

Title:
  Valid IP addresses cannot be determined if a 'non-root' CIDR is used

Status in OpenStack Quantum (virtual network service):
  Opinion

Bug description:
  When the CIDR provided is something like 10.0.0.254/24, the value is
  semantically the same as 10.0.0.0/24. They are the same because the
  /24 indicates that the last 8 bits (it is a 32-bit address space, so 32 -
  8 == 24) should be dropped and the first address used. This notation
  is not common, as the first address is typically given. However, there
  are scenarios where this type of notation is common. For example, I have
  an IP address and a netmask, so I just mesh the two together...

  Currently, Quantum doesn't handle this approach to creating a subnet.
  Instead, it fails:

  In the following manner:

  2013-06-03 06:31:16,364 - quantum.api.v2.resource - ERROR - create failed
  Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/quantum/api/v2/resource.py", line
  85, in resource
      result = method(request=request, **args)
    File "/usr/lib/python2.6/site-packages/quantum/api/v2/base.py", line 372,
  in create
      obj = obj_creator(request.context, **kwargs)
    File "/usr/lib/python2.6/site-packages/quantum/db/db_base_plugin_v2.py",
  line 1296, in create_port
      ips = self._allocate_ips_for_port(context, network, port)
    File "/usr/lib/python2.6/site-packages/quantum/db/db_base_plugin_v2.py",
  line 703, in _allocate_ips_for_port
      p['fixed_ips'])
    File "/usr/lib/python2.6/site-packages/quantum/db/db_base_plugin_v2.py",
  line 598, in _test_fixed_ips_for_port
      raise q_exc.InvalidInput(error_message=msg)
  InvalidInput: Invalid input for operation: IP address 10.0.0.10 is not a
valid IP for the defined networks subnets.
  [root@z3-9-5-126-163 quantum]#

  This is because the code in quantum/db/db_base_plugin_v2.py in
  _check_subnet_ip does the following:

  if (ip != net.network and
          ip != net.broadcast and
          net.netmask & ip == net.ip):
      return True
  return False

  It is assuming that net.ip will always be the root IP address. I
  believe that this can be resolved by applying net.netmask to
  net.ip as well as to ip. This approach worked in the various
  combinations I tried.
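Python 3's stdlib ipaddress module (not available when this bug was filed) handles exactly this normalization; a quick sketch of the behavior described above:

```python
import ipaddress

# 10.0.0.254/24 is a "non-root" CIDR: with strict=False the host bits
# are masked off, yielding the same network as 10.0.0.0/24.
net = ipaddress.ip_network('10.0.0.254/24', strict=False)

root_form = str(net)  # the normalized, "root" form of the CIDR
contains_candidate = ipaddress.ip_address('10.0.0.10') in net
```

With strict=True (the default), ip_network() raises ValueError on a CIDR with host bits set, which mirrors the validation-only alternative proposed in the first comment.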

To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1188845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1187748] Re: Quantum plugin extension can not load

2013-06-06 Thread yong sheng gong
I think one workaround is to use an absolute path in the
api_extensions_path option.

** Changed in: quantum
   Status: Incomplete => Confirmed

** Changed in: quantum
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1187748

Title:
  Quantum plugin extension can not load

Status in OpenStack Quantum (virtual network service):
  Opinion

Bug description:
  I want to load the NVP plugin, so I configure api_extensions_path =
  quantum/plugins/nicira/nicira_nvp_plugin/extensions in the file
  quantum.conf. But the module cannot be loaded because of the
  incorrect path.

  I find this code in the file extensions.py:

  def get_extensions_path():
      paths = ':'.join(quantum.extensions.__path__)

      if cfg.CONF.api_extensions_path:
          paths = ':'.join([cfg.CONF.api_extensions_path, paths])

      return paths

  I think a prefix should be added to cfg.CONF.api_extensions_path,
  such as quantum.__path__.
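The suggested prefixing can be sketched as a small helper. This is a hypothetical illustration, not Quantum code: the function name, the base_dir argument, and the sample paths are all made up, with configured_path standing in for cfg.CONF.api_extensions_path and base_dir for quantum.__path__[0]:

```python
import os


def build_extensions_path(configured_path, default_paths, base_dir):
    """Join extension search paths, making a relative configured
    path absolute by prefixing it with base_dir (POSIX separators
    assumed)."""
    paths = list(default_paths)
    if configured_path:
        if not os.path.isabs(configured_path):
            configured_path = os.path.join(base_dir, configured_path)
        paths.insert(0, configured_path)
    return ':'.join(paths)


# Relative configured path gets anchored under the package directory.
result = build_extensions_path(
    'plugins/nicira/nicira_nvp_plugin/extensions',
    ['/opt/quantum/extensions'],
    '/opt/quantum')
```

An absolute configured path would pass through untouched, which is exactly the workaround from the first comment.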

To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1187748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1187296] Re: periodic keystone query for active instances

2013-06-04 Thread yong sheng gong
I think bug 1177579 https://bugs.launchpad.net/quantum/+bug/1177579
should have fixed it.

https://github.com/openstack/nova/commit/dd9c27f999221001bae9faa03571645824d2a681

** Changed in: quantum
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1187296

Title:
   periodic keystone query for active instances

Status in OpenStack Quantum (virtual network service):
  Invalid

Bug description:
  I'm running based on Grizzly with quantum+OVS+VLAN.
  When I have an active instance in the cloud, I noticed a periodic
keystone query from the compute node with the active instance. Each time, the
compute node requests 4 new tokens.
  Does anyone know why this happens?

  I noticed some logs in nova-compute.log, but I'm not sure if they are related:

  2013-06-04 15:35:52.242 DEBUG nova.network.quantumv2.api [req-
  abe1ca6d-f282-4ec4-b696-b24006a6367e 8cbad06dd1764bae93612132ca62671d
  45a521413d9b43ff888abb7de9878171] get_instance_nw_info() for aabb
  _get_instance_nw_info /usr/lib/python2.7/dist-
  packages/nova/network/quantumv2/api.py:366

  Thanks.
  -chen

To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1187296/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1167683] Re: wrong quantum gateway set

2013-04-10 Thread yong sheng gong
192.168.19.129 is the gateway IP for the pub subnet
61897e3b-ea17-4848-a2e0-e0847cef4b2e.
This is the default gateway IP, since you did not give one when creating this
subnet.
192.168.19.130 is the IP of qg-0eda5152-09.
192.168.19.129 should be an IP somewhere on your physical router.
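The arithmetic here is standard subnetting: 192.168.19.129/25 lives in network 192.168.19.128/25, and the default gateway is the first usable host, .129. A quick check with Python 3's ipaddress module (shown only to verify the explanation above):

```python
import ipaddress

# The subnet was created as 192.168.19.129/25; masking the host bits
# gives the real network, and the default gateway defaults to the
# first host address of that network.
net = ipaddress.ip_network('192.168.19.129/25', strict=False)
network = str(net.network_address)
gateway = str(next(net.hosts()))
```

This also explains why the allocation pool in the subnet-list output starts at 192.168.19.130: .129 is reserved for the gateway.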

** Changed in: quantum
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1167683

Title:
  wrong quantum gateway set

Status in OpenStack Quantum (virtual network service):
  Invalid

Bug description:
  I set up Quantum Grizzly using the DHCP and L3 agents with namespaces
  enabled. After I create a router with 2 subnets and 1 public net, I find
  that VMs cannot ping Internet hosts.

  I use the following command to create a public sub-network:

  quantum subnet-create public 192.168.19.129/25

  root@controller:~# quantum router-list
  +--------------------------------------+---------+--------------------------------------------------------+
  | id                                   | name    | external_gateway_info                                  |
  +--------------------------------------+---------+--------------------------------------------------------+
  | 7bde1209-e8ed-4ae6-a627-efaa148c743c | router1 | {"network_id": "5cb82ea6-956c-44bb-adaf-b522f0f0c003"} |
  +--------------------------------------+---------+--------------------------------------------------------+

  root@controller:~# quantum subnet-list
  +--------------------------------------+------+-------------------+------------------------------------------------------+
  | id                                   | name | cidr              | allocation_pools                                     |
  +--------------------------------------+------+-------------------+------------------------------------------------------+
  | 5d8ba8f6-7339-44a0-95ca-891aab144aeb |      | 200.0.0.0/24      | {"start": "200.0.0.2", "end": "200.0.0.254"}         |
  | 61897e3b-ea17-4848-a2e0-e0847cef4b2e |      | 192.168.19.129/25 | {"start": "192.168.19.130", "end": "192.168.19.254"} |
  | c11eaa0d-3aff-41a8-909a-1dfdfdf20f48 |      | 100.0.0.0/24      | {"start": "100.0.0.2", "end": "100.0.0.254"}         |
  +--------------------------------------+------+-------------------+------------------------------------------------------+

  root@controller:~# quantum net-list
  +--------------------------------------+--------+--------------------------------------------------------+
  | id                                   | name   | subnets                                                |
  +--------------------------------------+--------+--------------------------------------------------------+
  | 17d31ea4-4473-4da0-9493-9a04fa5aff33 | net1   | c11eaa0d-3aff-41a8-909a-1dfdfdf20f48 100.0.0.0/24      |
  | 29b70469-db85-44fa-b5fd-d14d63ac2ef9 | net2   | 5d8ba8f6-7339-44a0-95ca-891aab144aeb 200.0.0.0/24      |
  | 5cb82ea6-956c-44bb-adaf-b522f0f0c003 | public | 61897e3b-ea17-4848-a2e0-e0847cef4b2e 192.168.19.129/25 |
  +--------------------------------------+--------+--------------------------------------------------------+

  root@controller:~# ip netns exec qrouter-7bde1209-e8ed-4ae6-a627-efaa148c743c 
ifconfig
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:45 errors:0 dropped:0 overruns:0 frame:0
TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:4803 (4.8 KB)  TX bytes:4803 (4.8 KB)

  qg-0eda5152-09 Link encap:Ethernet  HWaddr fa:16:3e:7a:a1:9f  
inet addr:192.168.19.130  Bcast:192.168.19.255  Mask:255.255.255.128
inet6 addr: fe80::f816:3eff:fe7a:a19f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:8191 errors:0 dropped:0 overruns:0 frame:0
TX packets:6750 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:935028 (935.0 KB)  TX bytes:283716 (283.7 KB)

  qr-8af2e01f-bb Link encap:Ethernet  HWaddr fa:16:3e:f7:3d:5e  
inet addr:100.0.0.1  Bcast:100.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fef7:3d5e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:8541 errors:0 dropped:0 overruns:0 frame:0
TX packets:7563 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:1034980 (1.0 MB)  TX bytes:904654 (904.6 KB)

  qr-9b9a3229-19 Link encap:Ethernet  HWaddr fa:16:3e:66:13:08  
inet addr:200.0.0.1  Bcast:200.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe66:1308/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  

[Yahoo-eng-team] [Bug 1163496] Re: Quantum end point URL has v2.0 missing.

2013-04-02 Thread yong sheng gong
This is by design.
The client can choose the version to use and then append v2.0 to this URL.

** Changed in: quantum
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1163496

Title:
  Quantum end point URL has v2.0 missing.

Status in OpenStack Quantum (virtual network service):
  Invalid

Bug description:
  When I get the URLs for all OpenStack APIs like Quantum, Nova, etc., the
  Quantum URL is in this format:

  http://serveraddress:9696/

  It doesn't have the v2.0 appended to it. I have to append v2.0 to make
  the call work. This is not desirable since the version could change in
  the future. I don't know if this is a bug or a feature to host multiple
  versions of the same service.

  
  Ritesh

To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1163496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp