[Yahoo-eng-team] [Bug 1702048] Re: neutron update network with no-qos-policy, but it does not take effect at ovs-agent

2017-07-05 Thread shihanzhang
This bug has been resolved on the master branch and backported to the
stable/ocata branch, see https://bugs.launchpad.net/neutron/+bug/1649503

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1702048

Title:
  neutron update network with no-qos-policy, but it does not take effect at
  ovs-agent

Status in neutron:
  Invalid

Bug description:
  My test environment is ocata. When I detached a qos-policy from a
  network, I found that the QoS rate-limit rule still existed in OVS. I
  analysed the update_network code in ml2 and found that neutron-server
  only notifies the ovs-agent when need_network_update_notify is true.
  The value of need_network_update_notify depends on the qos_policy_id
  attribute of the updated_network variable, and that attribute is
  derived from the qos_policy_binding of the network model.

  The update_network function calls process_update_network to detach the
  relationship between the qos-policy and the network, and then calls
  get_network to build a new network-info dict. But I found that
  qos_policy_binding still existed on the network model, which means the
  detach operation had not fully taken effect before "updated_network"
  was built.

  I know the bug is difficult to reproduce, but it is real.
  How can I resolve it?

  Here is the code:

  neutron/plugins/ml2/plugin.py
  def update_network(self, context, id, network):
      net_data = network[attributes.NETWORK]
      provider._raise_if_updates_provider_attributes(net_data)
      session = context.session
      with session.begin(subtransactions=True):
          original_network = super(Ml2Plugin, self).get_network(context, id)
          updated_network = super(Ml2Plugin, self).update_network(context, id,
                                                                  network)
          # detach relationship between qos-policy and network
          self.extension_manager.process_update_network(context, net_data,
                                                        updated_network)
          self._process_l3_update(context, updated_network, net_data)
          self.type_manager.extend_network_dict_provider(context,
                                                         updated_network)
          # get_network will check qos_policy_binding in network model
          updated_network = self.get_network(context, id)
          need_network_update_notify = (
              qos_consts.QOS_POLICY_ID in net_data and
              original_network[qos_consts.QOS_POLICY_ID] !=
              updated_network[qos_consts.QOS_POLICY_ID])
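
  Below is a minimal, self-contained sketch (plain SQLAlchemy with an in-memory
  SQLite database, not neutron code) of the staleness described above: after the
  binding row is deleted in the same session, the parent's relationship attribute
  can still return the old value until the object is expired or refreshed. The
  Network/QosBinding models here are only illustrative stand-ins.

  from sqlalchemy import Column, ForeignKey, String, create_engine
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import relationship, sessionmaker

  Base = declarative_base()

  class Network(Base):
      __tablename__ = 'networks'
      id = Column(String(36), primary_key=True)
      qos_policy_binding = relationship('QosBinding', uselist=False)

  class QosBinding(Base):
      __tablename__ = 'qos_bindings'
      network_id = Column(String(36), ForeignKey('networks.id'),
                          primary_key=True)
      policy_id = Column(String(36))

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = sessionmaker(bind=engine)()

  session.add(Network(id='net-1'))
  session.add(QosBinding(network_id='net-1', policy_id='qos-1'))
  session.flush()

  net = session.query(Network).get('net-1')
  print(net.qos_policy_binding.policy_id)      # 'qos-1', relationship now loaded

  # Detach the binding with a bulk delete that does not touch loaded objects.
  session.query(QosBinding).filter_by(
      network_id='net-1').delete(synchronize_session=False)

  print(net.qos_policy_binding is None)        # False: still the stale binding
  session.expire(net, ['qos_policy_binding'])  # force a reload on next access
  print(net.qos_policy_binding is None)        # True: the binding is really gone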

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1702048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699938] [NEW] DB deadlock when bulk delete subnet

2017-06-22 Thread shihanzhang
Public bug reported:

My environment is stable/newton. I used rally to run stability tests; the
details are below:
1. configure a network with 2 dhcp-agents
2. configure rally to run 40 concurrent bulk operations, where each process
does: create network -> create subnet -> delete subnet -> delete network

I ran this 100 times in total and the success rate was 1.6%. The neutron-server
log contains many DB deadlock errors; deadlocks occurred on 5 tables:
ipamallocations, ipamsubnets, standardattributes, ports, provisioningblocks.

the neutron-server log is http://paste.openstack.org/show/613359/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699938

Title:
  DB deadlock when bulk delete subnet

Status in neutron:
  New

Bug description:
  My environment is stable/newton. I used rally to run stability tests; the
  details are below:
  1. configure a network with 2 dhcp-agents
  2. configure rally to run 40 concurrent bulk operations, where each process
  does: create network -> create subnet -> delete subnet -> delete network

  I ran this 100 times in total and the success rate was 1.6%. The
  neutron-server log contains many DB deadlock errors; deadlocks occurred on 5
  tables: ipamallocations, ipamsubnets, standardattributes, ports,
  provisioningblocks.

  the neutron-server log is http://paste.openstack.org/show/613359/
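
  As a hedged illustration of the usual mitigation for transient MySQL/Galera
  deadlocks, oslo.db ships a retry decorator that re-runs a DB-touching function
  when it raises DBDeadlock; the sketch below shows its shape. The function body
  is only a placeholder, not neutron's actual subnet-delete path, and the
  argument values are examples.

  from oslo_db import api as oslo_db_api

  @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True,
                             inc_retry_interval=True)
  def delete_subnet_rows(context, subnet_id):
      # Placeholder for the real work: remove the ipamallocations/ipamsubnets/
      # ports/... rows inside one transaction.  If MySQL reports "Deadlock
      # found when trying to get lock", oslo.db raises DBDeadlock and the
      # decorator retries the whole function with an increasing back-off.
      pass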

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1699938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681055] [NEW] delete network failed with ERROR ‘ DBError: Can't reconnect until invalid transaction is rolled back’

2017-04-08 Thread shihanzhang
Public bug reported:

My env is the Newton branch. I got an error when deleting a network; the
error log is below:

delete failed: Exception auto-deleting port 4213e9ef-2ea1-43f4-b0d4-ff54fce9031d
 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 79, 
in resource
 result = method(request=request, **args)
   File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 555, in 
delete
 return self._delete(request, id, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in 
wrapped
 setattr(e, '_RETRY_EXCEEDED', True)
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in 
wrapped
 return f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
 ectxt.value = e.inner_exc
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
 return f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in 
wrapped
 traceback.format_exc())
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 119, in 
wrapped
 return f(*dup_args, **dup_kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 577, in 
_delete
 obj_deleter(request.context, id, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 756, 
in inner
 return f(self, context, *args, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 159, in 
wrapped
 return method(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in 
wrapped
 setattr(e, '_RETRY_EXCEEDED', True)
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in 
wrapped
 return f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
 ectxt.value = e.inner_exc
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
 return f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in 
wrapped
 traceback.format_exc())
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 119, in 
wrapped
 return f(*dup_args, **dup_kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 
972, in delete_network
 self._delete_ports(context, port_ids)
   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 
878, in _delete_ports
 _LE("Exception auto-deleting port %s"), port_id)
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 
869, in _delete_ports
 self.delete_port(context, port_id)
   File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 756, 
in inner
 return f(self, context, *args, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 159, in 
wrapped
 return method(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in 
wrapped
 setattr(e, '_RETRY_EXCEEDED', True)
   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py

[Yahoo-eng-team] [Bug 1653633] [NEW] fwaas v1 with DVR: l3 agent can't restore the NAT rules for floatingIP

2017-01-03 Thread shihanzhang
Public bug reported:

With the neutron and FWaaS master branches and in DVR mode, I created a VM with
a floating IP and it worked fine. But after I restarted the related l3-agent, I
could not reach the VM via the floating IP; checking the NAT rules in the
router namespace, I found they had not been restored.

Reproduce steps:
1. use DVR mode
2. create a VM with a floating IP
3. restart the related l3 agent on the compute node
We can then see that the NAT rule for the floating IP has not been restored.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dvr

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653633

Title:
  fwaas v1 with DVR: l3 agent can't restore the NAT rules for floatingIP

Status in neutron:
  New

Bug description:
  With the neutron and FWaaS master branches and in DVR mode, I created a VM
  with a floating IP and it worked fine. But after I restarted the related
  l3-agent, I could not reach the VM via the floating IP; checking the NAT
  rules in the router namespace, I found they had not been restored.

  Reproduce steps:
  1. use DVR mode
  2. create a VM with a floating IP
  3. restart the related l3 agent on the compute node
  We can then see that the NAT rule for the floating IP has not been restored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1653633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525824] Re: [RFE] Add a 'promiscuous mode' extension for ports

2016-09-28 Thread shihanzhang
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525824

Title:
  [RFE] Add a 'promiscuous mode' extension  for ports

Status in neutron:
  New

Bug description:
  Currently, the VM's vNIC backed by a neutron port is in promiscuous mode,
  which can hurt application performance when there is too much traffic. Some
  hypervisors, such as Huawei FusionSphere or VMware, can set the vNIC
  promiscuous mode, so this proposal adds a new extension for a port
  promiscuous-mode attribute, similar to the port_security extension.
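
  As a hedged sketch only, such an extension's attribute map could mirror the
  shape of the port_security extension, roughly as below. The attribute name
  'promiscuous_mode', its default, and the plain bool converter are illustrative
  assumptions; no such extension exists in neutron today.

  PROMISCUOUS_MODE = 'promiscuous_mode'

  EXTENDED_ATTRIBUTES_2_0 = {
      'ports': {
          PROMISCUOUS_MODE: {
              'allow_post': True,   # may be set when the port is created
              'allow_put': True,    # may be toggled on an existing port
              'default': True,      # keep today's behaviour unless disabled
              'convert_to': bool,   # real code would use neutron's boolean converter
              'is_visible': True,
          },
      },
  }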

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1627976] [NEW] vlan-trunk: add_subports raises KeyError

2016-09-27 Thread shihanzhang
Public bug reported:

reproduce steps:
1. create a trunk
2. add a subport to this trunk with a malformed body such as:
{
"sub_port":{
"name":"my-trunk",
"port_id":"4cd8f65c-f1b1-4186-a627-6a6fdefd916e"
}
}

the error log is below:

Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
result = method(request=request, **args)
  File "/opt/stack/neutron/neutron/db/api.py", line 88, in wrapped
setattr(e, '_RETRY_EXCEEDED', True)
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/neutron/neutron/db/api.py", line 84, in wrapped
return f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in 
wrapper
ectxt.value = e.inner_exc
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in 
wrapper
return f(*args, **kwargs)
  File "/opt/stack/neutron/neutron/db/api.py", line 124, in wrapped
traceback.format_exc())
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/neutron/neutron/db/api.py", line 119, in wrapped
return f(*dup_args, **dup_kwargs)
  File "/opt/stack/neutron/neutron/api/v2/base.py", line 250, in _handle_action
ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
  File "/opt/stack/neutron/neutron/db/db_base_plugin_common.py", line 40, in 
inner
result = f(*args, **kwargs)
  File "/opt/stack/neutron/neutron/services/trunk/plugin.py", line 281, in 
add_subports
subports = subports['sub_ports']
KeyError: 'sub_ports'
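
As a hedged sketch of the kind of input validation that would turn this
KeyError into a clean 400 response rather than a 500, the helper below checks
the request body before it is used. The function and exception type are
illustrative; the real fix belongs in neutron's trunk plugin or API validation
layer.

def get_sub_ports_or_fail(body):
    # Return the 'sub_ports' list from an add_subports request body,
    # rejecting anything that does not carry that key as a list.
    try:
        sub_ports = body['sub_ports']
    except (KeyError, TypeError):
        raise ValueError("Invalid request body: expected a 'sub_ports' list, "
                         "got %r" % (body,))
    if not isinstance(sub_ports, list):
        raise ValueError("'sub_ports' must be a list")
    return sub_ports

# The body from this report would now fail with a clear message:
# get_sub_ports_or_fail({"sub_port": {"name": "my-trunk", "port_id": "..."}})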

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627976

Title:
  vlan-trunk: add_subports raises KeyError

Status in neutron:
  New

Bug description:
  reproduce steps:
  1. create a trunk
  2. add a subport to this trunk with a malformed body such as:
  {
  "sub_port":{
  "name":"my-trunk",
  "port_id":"4cd8f65c-f1b1-4186-a627-6a6fdefd916e"
  }
  }

  the error log is below:

  Traceback (most recent call last):
File "/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
  result = method(request=request, **args)
File "/opt/stack/neutron/neutron/db/api.py", line 88, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
  self.force_reraise()
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/neutron/neutron/db/api.py", line 84, in wrapped
  return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in 
wrapper
  ectxt.value = e.inner_exc
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
  self.force_reraise()
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in 
wrapper
  return f(*args, **kwargs)
File "/opt/stack/neutron/neutron/db/api.py", line 124, in wrapped
  traceback.format_exc())
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
  self.force_reraise()
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/neutron/neutron/db/api.py", line 119, in w

[Yahoo-eng-team] [Bug 1608347] [NEW] Neutron-server can't clean obsolete tunnel info

2016-07-31 Thread shihanzhang
Public bug reported:

Currently, for tunnel networks like VXLAN or GRE in neutron, if we change a
compute node's tunnel IP, the obsolete tunnel info is still kept by
neutron-server, and other compute nodes keep establishing tunnels to the
obsolete IP. I think we should provide a way to clean up the obsolete
tunnel info, similar to ovs_cleanup.py.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Description changed:

- Now for tunnel network like vxlan or gre in neutron, if we change a compute 
node tunnel IP, the obsolete tunnel info was 
- still saved in neutron-server, and other compute nodes still established 
tunnel with the obsolete IP, I think we should provide a approach to clean the 
obsolete tunnel info, like ovs_cleanup.py.
+ Now for tunnel network like vxlan or gre in neutron, if we change a
+ compute node tunnel IP, the obsolete tunnel info was still saved in
+ neutron-server, and other compute nodes still established tunnel with
+ the obsolete IP, I think we should provide a approach to clean the
+ obsolete tunnel info, like ovs_cleanup.py.

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608347

Title:
  Neutron-server can't clean obsolete tunnel info

Status in neutron:
  New

Bug description:
  Currently, for tunnel networks like VXLAN or GRE in neutron, if we change a
  compute node's tunnel IP, the obsolete tunnel info is still kept by
  neutron-server, and other compute nodes keep establishing tunnels to the
  obsolete IP. I think we should provide a way to clean up the obsolete
  tunnel info, similar to ovs_cleanup.py.
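
  A hedged sketch of the kind of admin cleanup utility suggested above: delete
  tunnel endpoint rows whose IP no longer belongs to any live agent. The table
  and column names (ml2_vxlan_endpoints with an ip_address column) are what the
  ml2 VXLAN type driver uses, but treat them and the whole script as assumptions
  to be verified against your schema before running it anywhere real.

  import argparse

  import sqlalchemy as sa

  def purge_stale_vxlan_endpoints(db_url, live_ips):
      engine = sa.create_engine(db_url)
      endpoints = sa.Table('ml2_vxlan_endpoints', sa.MetaData(),
                           autoload_with=engine)
      with engine.begin() as conn:
          # remove every endpoint whose IP is not in the list of live tunnel IPs
          result = conn.execute(
              endpoints.delete().where(~endpoints.c.ip_address.in_(live_ips)))
          return result.rowcount

  if __name__ == '__main__':
      parser = argparse.ArgumentParser()
      parser.add_argument('db_url')               # e.g. mysql+pymysql://user:pw@host/neutron
      parser.add_argument('live_ips', nargs='+')  # tunnel IPs that are still valid
      args = parser.parse_args()
      print('%d stale endpoint(s) removed' %
            purge_stale_vxlan_endpoints(args.db_url, args.live_ips))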

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603292] [NEW] Neutron network tags should not be empty string

2016-07-14 Thread shihanzhang
Public bug reported:

Currently, neutron network tags can be an empty string, but I think there is no
use case for an empty-string tag, so we should add a validation check for tags.

root@server201:~# neutron tag-add --resource-type network --resource test --tag 'test_tag'
root@server201:~# neutron tag-add --resource-type network --resource test --tag '   '
root@server201:~# neutron net-show test
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| admin_state_up             | True                                 |
| availability_zone_hints    |                                      |
| availability_zones         |                                      |
| created_at                 | 2016-07-15T01:45:51                  |
| description                |                                      |
| id                         | f1060382-c7fa-43d5-a214-e8525184e7f0 |
| ipv4_address_scope         |                                      |
| ipv6_address_scope         |                                      |
| mtu                        | 1450                                 |
| name                       | test                                 |
| port_security_enabled      | True                                 |
| provider:network_type      | vxlan                                |
| provider:physical_network  |                                      |
| provider:segmentation_id   | 26                                   |
| router:external            | False                                |
| shared                     | False                                |
| status                     | ACTIVE                               |
| subnets                    |                                      |
| tags                       |                                      |
|                            | test_tag                             |
| tenant_id                  | 9e211e5ad3c0407aaf6c5803dc307c27     |
| updated_at                 | 2016-07-15T01:45:51                  |
+----------------------------+--------------------------------------+
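
A hedged sketch of the validation this report asks for: reject tags that are
empty or whitespace-only before they reach the database. The exception type is
illustrative; neutron would raise a proper InvalidInput/BadRequest instead.

def validate_tag(tag):
    # A usable tag must be a non-empty string containing something other
    # than whitespace.
    if not isinstance(tag, str) or not tag.strip():
        raise ValueError("Tag must be a non-empty, non-whitespace string: %r"
                         % (tag,))
    return tag

validate_tag('test_tag')   # passes
# validate_tag('   ')      # would raise ValueError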

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603292

Title:
  Neutron network tags should not be empty string

Status in neutron:
  New

Bug description:
  Currently, neutron network tags can be an empty string, but I think there is
  no use case for an empty-string tag, so we should add a validation check for
  tags.

  root@server201:~# neutron tag-add --resource-type network --resource test --tag 'test_tag'
  root@server201:~# neutron tag-add --resource-type network --resource test --tag '   '
  root@server201:~# neutron net-show test
  +----------------------------+--------------------------------------+
  | Field                      | Value                                |
  +----------------------------+--------------------------------------+
  | admin_state_up             | True                                 |
  | availability_zone_hints    |                                      |
  | availability_zones         |                                      |
  | created_at                 | 2016-07-15T01:45:51                  |
  | description                |                                      |
  | id                         | f1060382-c7fa-43d5-a214-e8525184e7f0 |
  | ipv4_address_scope         |                                      |
  | ipv6_address_scope         |                                      |
  | mtu                        | 1450                                 |
  | name                       | test                                 |
  | port_security_enabled      | True                                 |
  | provider:network_type      | vxlan                                |
  | provider:physical_network  |                                      |
  | provider:segmentation_id   | 26                                   |
  | router:external            | False                                |
  | shared                     | False                                |
  | status                     | ACTIVE                               |
  | subnets                    |                                      |
  | tags                       |                                      |
  |                            | test_tag                             |
  | tenant_id                  | 9e211e5ad3c0407aaf6c5803dc307c27     |
  | updated_at                 | 2016-07-15T01:45:51                  |
  +----------------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603292/+subscriptio

[Yahoo-eng-team] [Bug 1602158] [NEW] SRIOV: MAC conflict between two VFs in one PF

2016-07-12 Thread shihanzhang
Public bug reported:

For a neutron port, the MAC address is currently unique only within a network;
that is to say, two ports in different networks may have the same MAC. But for
SR-IOV ports, if two VFs on the same PF belonging to different neutron networks
have the same MAC, libvirt raises an error when we use these two ports to
create VMs.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602158

Title:
  SRIOV: MAC conflict between two VFs in one PF

Status in neutron:
  New

Bug description:
  For a neutron port, the MAC address is currently unique only within a
  network; that is to say, two ports in different networks may have the same
  MAC. But for SR-IOV ports, if two VFs on the same PF belonging to different
  neutron networks have the same MAC, libvirt raises an error when we use
  these two ports to create VMs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1602158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540748] [NEW] ml2: port_update and port_delete should not use fanout notify

2016-02-01 Thread shihanzhang
Public bug reported:

Currently, for the ml2 plugin, neutron-server uses fanout RPC messages for
port_update and port_delete; the code is below:

def port_update(self, context, port, network_type, segmentation_id,
                physical_network):
    cctxt = self.client.prepare(topic=self.topic_port_update, fanout=True)
    cctxt.cast(context, 'port_update', port=port,
               network_type=network_type,
               segmentation_id=segmentation_id,
               physical_network=physical_network)

def port_delete(self, context, port_id):
    cctxt = self.client.prepare(topic=self.topic_port_delete, fanout=True)
    cctxt.cast(context, 'port_delete', port_id=port_id)

I think neutron-server should send the RPC messages directly to the port's
binding host; this would offload work from AMQP.

** Affects: neutron
 Importance: Undecided
     Assignee: shihanzhang (shihanzhang)
 Status: New

** Description changed:

  Now for ml2 plugin,  neutron-server use faout RPC  message for port_update 
and port_delete, the codes as below:
- def port_update(self, context, port, network_type, segmentation_id,
- physical_network):
- cctxt = self.client.prepare(topic=self.topic_port_update,
- fanout=True)
- cctxt.cast(context, 'port_update', port=port,
-network_type=network_type, segmentation_id=segmentation_id,
-physical_network=physical_network)
+ def port_update(self, context, port, network_type, segmentation_id, 
physical_network):
+ cctxt = self.client.prepare(topic=self.topic_port_update,fanout=True)
+ cctxt.cast(context, 'port_update', port=port,
+    network_type=network_type,
+   segmentation_id=segmentation_id,
+    physical_network=physical_network)
  
- def port_delete(self, context, port_id):
- cctxt = self.client.prepare(topic=self.topic_port_delete,
- fanout=True)
- cctxt.cast(context, 'port_delete', port_id=port_id)
+ def port_delete(self, context, port_id):
+ cctxt = self.client.prepare(topic=self.topic_port_delete, fanout=True)
+ cctxt.cast(context, 'port_delete', port_id=port_id)
  
  I think neutron-server should directly sends the RPC message to port's
  binding_host, this can offload work for AMQP

** Description changed:

  Now for ml2 plugin,  neutron-server use faout RPC  message for port_update 
and port_delete, the codes as below:
- def port_update(self, context, port, network_type, segmentation_id, 
physical_network):
+ def port_update(self, context, port, network_type, segmentation_id,   
   physical_network):
  cctxt = self.client.prepare(topic=self.topic_port_update,fanout=True)
  cctxt.cast(context, 'port_update', port=port,
     network_type=network_type,
-   segmentation_id=segmentation_id,
+    segmentation_id=segmentation_id,
     physical_network=physical_network)
  
  def port_delete(self, context, port_id):
  cctxt = self.client.prepare(topic=self.topic_port_delete, fanout=True)
  cctxt.cast(context, 'port_delete', port_id=port_id)
  
  I think neutron-server should directly sends the RPC message to port's
  binding_host, this can offload work for AMQP

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540748

Title:
  ml2: port_update and port_delete should not use fanout notify

Status in neutron:
  New

Bug description:
  Currently, for the ml2 plugin, neutron-server uses fanout RPC messages for
  port_update and port_delete; the code is below:

  def port_update(self, context, port, network_type, segmentation_id,
                  physical_network):
      cctxt = self.client.prepare(topic=self.topic_port_update, fanout=True)
      cctxt.cast(context, 'port_update', port=port,
                 network_type=network_type,
                 segmentation_id=segmentation_id,
                 physical_network=physical_network)

  def port_delete(self, context, port_id):
      cctxt = self.client.prepare(topic=self.topic_port_delete, fanout=True)
      cctxt.cast(context, 'port_delete', port_id=port_id)

  I think neutron-server should send the RPC messages directly to the port's
  binding host; this would offload work from AMQP.
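
  A hedged sketch of the proposed change: instead of a fanout cast that every
  agent receives and most discard, direct the cast at the host the port is
  bound to via oslo.messaging's server= argument. This assumes the agent side
  consumes a host-scoped queue for this topic; the surrounding class, topics
  and the 'binding:host_id' lookup mirror the snippet above and are not the
  actual merged fix.

  def port_update(self, context, port, network_type, segmentation_id,
                  physical_network):
      # port['binding:host_id'] is the compute node the port is bound to
      cctxt = self.client.prepare(topic=self.topic_port_update,
                                  server=port['binding:host_id'])
      cctxt.cast(context, 'port_update', port=port,
                 network_type=network_type,
                 segmentation_id=segmentation_id,
                 physical_network=physical_network)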

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539850] [NEW] ml2: network's attribute multiprovidernet should not be updated

2016-01-29 Thread shihanzhang
Public bug reported:

For the ml2 plugin, updating provider attributes is not supported, but the
same check is not applied to the multiprovidernet attribute "segments"; I
think we should add a check for multiprovidernet as well.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

** Description changed:

  For ml2 plugin,  it  does not support updating provider attributes, but
  it does check multiprovidernet  attribute "segments",  I think we should
- add a check for multiprovidernet !
+ add a check for multiprovidernet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1539850

Title:
  ml2:  network's attribute multiprovidernet should not be updated

Status in neutron:
  New

Bug description:
  For the ml2 plugin, updating provider attributes is not supported, but the
  same check is not applied to the multiprovidernet attribute "segments"; I
  think we should add a check for multiprovidernet as well.
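
  A hedged sketch of the check being asked for, analogous to the existing
  provider._raise_if_updates_provider_attributes() call in update_network:
  refuse any update request that carries the multi-provider 'segments'
  attribute. The exception type and the plain membership test are illustrative.

  MULTIPROVIDER_SEGMENTS = 'segments'

  def raise_if_updates_segments(net_data):
      # net_data is the body of the network update request
      if MULTIPROVIDER_SEGMENTS in net_data:
          raise ValueError("Plugin does not support updating the "
                           "multi-provider 'segments' attribute")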

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1539850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525824] [NEW] RFE: enable or disable port's promiscuous mode

2015-12-14 Thread shihanzhang
Public bug reported:

Currently, the VM's vNIC backed by a neutron port is in promiscuous mode,
which can hurt application performance when there is too much traffic. Some
hypervisors, such as Huawei FusionSphere or VMware, can set the vNIC
promiscuous mode, so this proposal adds a QoS rule to control a port's
promiscuous mode.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

** Tags added: rfe

** Summary changed:

- enable or disable port's promiscuous mode
+ RFE: enable or disable port's promiscuous mode

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525824

Title:
  RFE: enable or disable port's promiscuous mode

Status in neutron:
  New

Bug description:
  Currently, the VM's vNIC backed by a neutron port is in promiscuous mode,
  which can hurt application performance when there is too much traffic. Some
  hypervisors, such as Huawei FusionSphere or VMware, can set the vNIC
  promiscuous mode, so this proposal adds a QoS rule to control a port's
  promiscuous mode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521909] [NEW] subnet_allocation extension is not needed

2015-12-02 Thread shihanzhang
Public bug reported:

subnetpool is a core resource, so why is there an extension
"subnet_allocation"? For the ml2 plugin, in its
_supported_extension_aliases:

_supported_extension_aliases = ["provider", "external-net", "binding",
                                "quotas", "security-group", "agent",
                                "dhcp_agent_scheduler",
                                "multi-provider", "allowed-address-pairs",
                                "extra_dhcp_opt", "subnet_allocation",
                                "net-mtu", "vlan-transparent",
                                "address-scope", "dns-integration",
                                "availability_zone",
                                "network_availability_zone"]

If we delete subnet_allocation from _supported_extension_aliases, we can still
create a subnetpool and create a subnet with a subnetpool-id.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Description changed:

  subnetpool is a core resource, why is there a extension
  "subnet_allocation", for ml2 plugin, in it's
  _supported_extension_aliases
  
- _supported_extension_aliases = ["provider", "external-net", "binding",
- "quotas", "security-group", "agent",
- "dhcp_agent_scheduler",
- "multi-provider", "allowed-address-pairs",
- "extra_dhcp_opt", "subnet_allocation",
- "net-mtu", "vlan-transparent",
- "address-scope", "dns-integration",
- "availability_zone",
- "network_availability_zone"]
+ _supported_extension_aliases = ["provider", "external-net", "binding",
+    "quotas", "security-group", "agent",
+    "dhcp_agent_schedler",
+    "multi-provider", "allowed-address-pairs",
+        "extra_dhcp_opt", "subnet_allocation",
+    "net-mtu", "vlan-transparent",
+    "address-scope", "dns-integration",
+    "availability_zone",
+    "network_availability_zone"]
  if we delete subnet_allocation from _supported_extension_aliases, we also can 
create subnetpool and create subnet with subnetpool-id

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521909

Title:
  subnet_allocation extension is not needed

Status in neutron:
  New

Bug description:
  subnetpool is a core resource, so why is there an extension
  "subnet_allocation"? For the ml2 plugin, in its
  _supported_extension_aliases:

  _supported_extension_aliases = ["provider", "external-net", "binding",
                                  "quotas", "security-group", "agent",
                                  "dhcp_agent_scheduler",
                                  "multi-provider", "allowed-address-pairs",
                                  "extra_dhcp_opt", "subnet_allocation",
                                  "net-mtu", "vlan-transparent",
                                  "address-scope", "dns-integration",
                                  "availability_zone",
                                  "network_availability_zone"]

  If we delete subnet_allocation from _supported_extension_aliases, we can
  still create a subnetpool and create a subnet with a subnetpool-id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517331] [NEW] RBAC: create RBAC policy without target-tenant will raise internal error

2015-11-17 Thread shihanzhang
Public bug reported:

Creating an RBAC policy without target-tenant raises an internal error:
neutron rbac-create --type network --action access_as_shared test_net

The error log is below:
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1010, in _execute_clauseelement
  compiled_sql, distilled_params
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1146, in _execute_context
  context)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1337, in _handle_dbapi_exception
  util.raise_from_cause(newraise, exc_info)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 
199, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1139, in _execute_context
  context)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", 
line 450, in do_execute
  cursor.execute(statement, parameters)
File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 146, in 
execute
  result = self._query(query)
File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 296, in 
_query
  conn.query(q)
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 781, 
in query
  self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 942, 
in _read_query_result
  result.read()
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 
1138, in read
  first_packet = self.connection._read_packet()
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 906, 
in _read_packet
  packet.check_error()
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 367, 
in check_error
  err.raise_mysql_exception(self._data)
File "/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 120, in 
raise_mysql_exception
  _check_mysql_exception(errinfo)
File "/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 112, in 
_check_mysql_exception
  raise errorclass(errno, errorvalue)
DBError: (pymysql.err.IntegrityError) (1048, u"Column 'target_tenant' cannot be 
null") [SQL: u'INSERT INTO networkrbacs (tenant_id, id, target_tenant, action, 
object_id) VALUES (%s, %s, %s, %s, %s)'] [parameters: 
(u'22f8728a81dc40f4af03b6bda8fb384f', '162f3c45-cf2a-4e98-9d8f-4e1fb418ccc0', 
None, u'access_as_shared', u'2eea4cc4-a7a7-4a3e-bde5-f3cb8dd1aad4')]
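
A hedged sketch of the validation that would turn this 500 into a 400: require
a non-empty target_tenant in the rbac-create body before the INSERT hits the
NOT NULL column. The exception type is illustrative; the '*' wildcard is the
usual way to share with all tenants.

def validate_rbac_policy(body):
    # Reject a missing or empty target_tenant up front.
    if not body.get('target_tenant'):
        raise ValueError("target_tenant is required; use '*' to share the "
                         "resource with all tenants")
    return body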

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517331

Title:
  RBAC: create RBAC policy without target-tenant will raise internal
  error

Status in neutron:
  New

Bug description:
  Creating an RBAC policy without target-tenant raises an internal error:
  neutron rbac-create --type network --action access_as_shared test_net

  The error log is below:
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1010, in _execute_clauseelement
compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1146, in _execute_context
context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1337, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 
199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1139, in _execute_context
context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", 
line 450, in do_execute
cursor.execute(statement, parameters)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 146, 
in execute
result = self._query(query)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 296, 
in _query
conn.query(q)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 
781, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 
942, in _read_query_result
result.read()
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 
1138, in read

[Yahoo-eng-team] [Bug 1513765] [NEW] bulk delete ports cost ovs-agent much time

2015-11-06 Thread shihanzhang
Public bug reported:

This problem was found on the master branch, but I think it also affects liberty.
Reproduce steps:
1. create 100 VMs in the default security group
2. bulk delete these VMs
I found the ipsets could not be cleared promptly, because there were many
ip_conntrack entries to clean and the ovs-agent was busy doing that work.
For this problem, what I can think of is letting the ovs-agent use an
eventlet.GreenPool to delete the ip_conntrack entries; does anyone have a
better idea?
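
A hedged sketch of that idea: fan the per-address conntrack clean-up out over
an eventlet GreenPool instead of doing it serially in the agent loop. The
_delete_conntrack_for() body is a placeholder for whatever actually invokes
conntrack -D for one address, and the pool size is only an example.

import eventlet

def _delete_conntrack_for(device_ip):
    # placeholder: run the equivalent of "conntrack -D -d <device_ip>" here
    pass

def bulk_delete_conntrack(device_ips, pool_size=16):
    pool = eventlet.GreenPool(pool_size)
    for ip in device_ips:
        pool.spawn_n(_delete_conntrack_for, ip)
    pool.waitall()   # block until every clean-up has finished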

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513765

Title:
  bulk delete ports cost ovs-agent much time

Status in neutron:
  New

Bug description:
  This problem was found on the master branch, but I think it also affects
  liberty.
  Reproduce steps:
  1. create 100 VMs in the default security group
  2. bulk delete these VMs
  I found the ipsets could not be cleared promptly, because there were many
  ip_conntrack entries to clean and the ovs-agent was busy doing that work.
  For this problem, what I can think of is letting the ovs-agent use an
  eventlet.GreenPool to delete the ip_conntrack entries; does anyone have a
  better idea?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511925] [NEW] Update subnet allocation-pool failed

2015-10-31 Thread shihanzhang
Public bug reported:

This bug happens on the master branch. Reproduce steps:
1. create a subnet with no gateway, e.g. neutron subnet-create --no-gateway
   test 30.30.30.0/24
2. update this subnet's allocation pool, e.g. neutron subnet-update
   --allocation-pool start=30.30.30.5,end=30.30.30.6
   bb956363-cf67-4689-a3a7-08d179a9ea3e

I get an error as below:
failed to detect a valid IP address from None

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511925

Title:
  Update subnet allocation-pool failed

Status in neutron:
  New

Bug description:
  This bug happens on the master branch. Reproduce steps:
  1. create a subnet with no gateway, e.g. neutron subnet-create --no-gateway
     test 30.30.30.0/24
  2. update this subnet's allocation pool, e.g. neutron subnet-update
     --allocation-pool start=30.30.30.5,end=30.30.30.6
     bb956363-cf67-4689-a3a7-08d179a9ea3e

  I get an error as below:
  failed to detect a valid IP address from None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497522] [NEW] DHCP agent fails if a class D subnet is created

2015-09-19 Thread shihanzhang
Public bug reported:

When we create a subnet, neutron-server only does basic validation and permits
creating a class D (multicast) subnet, for example:
neutron subnet-create dhcp-test 224.0.0.0/8. The dhcp-agent then fails; the
error log is below:
[-] Unable to enable dhcp for c07785a5-aa25-4939-b74f-481c1158ebcd.
Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in 
call_driver
getattr(driver, action)(**action_kwargs)
  File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 203, in enable
interface_name = self.device_manager.setup(self.network)
  File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1212, in setup
self._set_default_route(network, interface_name)
  File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1015, in 
_set_default_route
device.route.add_gateway(subnet.gateway_ip)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 584, in 
add_gateway
self._as_root([ip_version], tuple(args))
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 280, in _as_root
use_root_namespace=use_root_namespace)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 80, in _as_root
log_fail_as_error=self.log_fail_as_error)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 89, in _execute
log_fail_as_error=log_fail_as_error)
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 160, in execute
raise RuntimeError(m)
RuntimeError: 
Command: ['ip', 'netns', 'exec', u'qdhcp-c07785a5-aa25-4939-b74f-481c1158ebcd', 
'ip', '-4', 'route', 'replace', 'default', 'via', u'224.0.0.1',

Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Network is unreachable
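
A hedged sketch of the server-side validation that would stop this before it
ever reaches the DHCP agent: reject subnet CIDRs in multicast (class D) space.
It uses netaddr, which neutron already depends on; the exception type is
illustrative.

import netaddr

def validate_unicast_cidr(cidr):
    net = netaddr.IPNetwork(cidr)
    if net.version == 4 and net.ip.is_multicast():
        raise ValueError("%s is a class D (multicast) range and cannot be "
                         "used as a subnet" % cidr)
    return net

validate_unicast_cidr('30.30.30.0/24')   # passes
# validate_unicast_cidr('224.0.0.0/8')   # would raise ValueError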

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497522

Title:
  DHCP agent fails if a class D subnet is created

Status in neutron:
  New

Bug description:
  When we create a subnet, neutron-server only does basic validation and
  permits creating a class D (multicast) subnet, for example:
  neutron subnet-create dhcp-test 224.0.0.0/8. The dhcp-agent then fails; the
  error log is below:
  [-] Unable to enable dhcp for c07785a5-aa25-4939-b74f-481c1158ebcd.
  Traceback (most recent call last):
File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in 
call_driver
  getattr(driver, action)(**action_kwargs)
File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 203, in enable
  interface_name = self.device_manager.setup(self.network)
File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1212, in setup
  self._set_default_route(network, interface_name)
File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1015, in 
_set_default_route
  device.route.add_gateway(subnet.gateway_ip)
File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 584, in 
add_gateway
  self._as_root([ip_version], tuple(args))
File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 280, in 
_as_root
  use_root_namespace=use_root_namespace)
File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 80, in 
_as_root
  log_fail_as_error=self.log_fail_as_error)
File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 89, in 
_execute
  log_fail_as_error=log_fail_as_error)
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 160, in execute
  raise RuntimeError(m)
  RuntimeError: 
  Command: ['ip', 'netns', 'exec', 
u'qdhcp-c07785a5-aa25-4939-b74f-481c1158ebcd', 'ip', '-4', 'route', 'replace', 
'default', 'via', u'224.0.0.1',

  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Network is unreachable

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497074] [NEW] Ignore the ERROR when deleting an ipset member

2015-09-17 Thread shihanzhang
Public bug reported:

Currently, when the ovs/lb agent creates an ipset set it already uses the
'-exist' option; I think deleting an ipset member also needs this option.
The '-exist' option is described in http://ipset.netfilter.org/ipset.man.html
as below:

-!, -exist
Ignore errors when exactly the same set is to be created or already added entry 
is added or missing entry is deleted.
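
A hedged sketch of the proposed change, in the style of the agent's ipset
manager: also pass '-exist' on 'ipset del', so removing an already-missing
member is not treated as an error. The execute() helper here is a stand-in for
neutron's own command runner.

import subprocess

def execute(cmd):
    subprocess.check_call(cmd)

def del_ipset_member(set_name, member_ip):
    # '-exist': ignore the error if the entry is already gone
    execute(['ipset', 'del', '-exist', set_name, member_ip])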

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497074

Title:
  Ignore the ERROR when deleting an ipset member

Status in neutron:
  New

Bug description:
  Currently, when the ovs/lb agent creates an ipset set it already uses the
  '-exist' option; I think deleting an ipset member also needs this option.
  The '-exist' option is described in http://ipset.netfilter.org/ipset.man.html
  as below:

  -!, -exist
  Ignore errors when exactly the same set is to be created or already added 
entry is added or missing entry is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496204] [NEW] DVR: no need to reschedule_router if router gateway update

2015-09-15 Thread shihanzhang
Public bug reported:

With a non-DVR router, if the router gateway changes, the router should be
rescheduled to the proper l3 agent; the reason is below:

"  When external_network_bridge is set, each L3 agent can be associated
with at most one external network. If router's new external gateway
is on other network then the router needs to be rescheduled to the
proper l3 agent."

But with a DVR router, I think there is no need to reschedule_router (there are
no other l3 agents to move to), and a serious problem is that during
reschedule_router, traffic related to this router is broken.
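
A hedged sketch of the short-circuit being suggested: when the router is
distributed, skip the gateway-change rescheduling path entirely, since a DVR
router is not tied to a single external-bridge l3 agent. The function and
attribute names are illustrative, not the actual scheduler code.

def maybe_reschedule_on_gw_change(plugin, context, router):
    if router.get('distributed'):
        # DVR router: nothing to reschedule, and skipping avoids the
        # dataplane interruption described above.
        return
    plugin.reschedule_router(context, router['id'])   # legacy-router behaviour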

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496204

Title:
  DVR: no need to reschedule_router if router gateway update

Status in neutron:
  New

Bug description:
  With a non-DVR router, if the router gateway changes, the router should be
  rescheduled to the proper l3 agent; the reason is below:

  "  When external_network_bridge is set, each L3 agent can be associated
  with at most one external network. If router's new external gateway
  is on other network then the router needs to be rescheduled to the
  proper l3 agent."

  But with a DVR router, I think there is no need to reschedule_router (there
  are no other l3 agents to move to), and a serious problem is that during
  reschedule_router, traffic related to this router is broken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496201] [NEW] DVR: router namespace can't be deleted if bulk delete VMs

2015-09-15 Thread shihanzhang
Public bug reported:

With a DVR router, if we bulk delete VMs from a compute node, the router
namespace remains (not always, but most of the time).
Reproduce steps:
1. create a DVR router and add a subnet to this router
2. create two VMs on one compute node; note that these are the only two VMs on
   this compute node
3. bulk delete these two VMs through the Nova API

The router namespace remains most of the time.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496201

Title:
  DVR: router namespace can't be deleted if bulk delete VMs

Status in neutron:
  New

Bug description:
  With a DVR router, if we bulk delete VMs from a compute node, the router
  namespace remains (not always, but most of the time).
  Reproduce steps:
  1. create a DVR router and add a subnet to this router
  2. create two VMs on one compute node; note that these are the only two VMs
     on this compute node
  3. bulk delete these two VMs through the Nova API

  The router namespace remains most of the time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496201/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493653] [NEW] DVR: port with None binding:host_id can't be deleted

2015-09-08 Thread shihanzhang
Public bug reported:

On the Neutron master branch, a port can't be deleted in the following use case:
1. create a DVR router
2. create a network and a subnet with DHCP disabled
3. create a port with device_owner=compute:None

When we delete this port, we get an error:
root@compute:/var/log/neutron# neutron port-delete 830d6db6-cd00-46ff-8f17-f32f363de1fd
Agent with agent_type=L3 agent and host= could not be found
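
A hedged sketch of the guard that would avoid the failure above: skip the
per-host L3-agent lookup during port delete when the port has no
binding:host_id. The helper name is a placeholder, not the actual neutron
code path.

def dvr_cleanup_on_port_delete(plugin, context, port):
    host = port.get('binding:host_id')
    if not host:
        # Unbound port (e.g. a hand-created port with
        # device_owner=compute:None): there is no agent to look up.
        return
    plugin.notify_l3_agent_port_removed(context, port, host)   # placeholder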

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493653

Title:
  DVR: port with None binding:host_id can't be deleted

Status in neutron:
  New

Bug description:
  On the Neutron master branch, a port can't be deleted in the following use
  case:
  1. create a DVR router
  2. create a network and a subnet with DHCP disabled
  3. create a port with device_owner=compute:None

  When we delete this port, we get an error:
  root@compute:/var/log/neutron# neutron port-delete 830d6db6-cd00-46ff-8f17-f32f363de1fd
  Agent with agent_type=L3 agent and host= could not be found

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493341] [NEW] l2 pop failed if live-migrate a VM with multiple neutron-server workers

2015-09-08 Thread shihanzhang
Public bug reported:

Now, if we run neutron-server with 2 or more workers, or two neutron-server
nodes behind a load balancer, live-migrating a VM can cause l2 pop to fail (not
always). The reason is:
1. when nova finishes live-migrating a VM, it updates the port's host id to the
   destination host
2. one neutron-server worker receives this request and does l2 pop; it sees the
   port's host id has changed while the status is still ACTIVE, so it records
   this port in its own in-process memory (migrated_ports)
3. when the l2 agent scans this port and updates its status from
   ACTIVE->BUILD->ACTIVE, another neutron-server worker may receive that RPC
   request, and l2 pop then fails for this port because that worker has no
   record of the migration


def update_port_postcommit(self, context):
    ...
    if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
        if context.status == const.PORT_STATUS_ACTIVE:
            self._update_port_up(context)
        if context.status == const.PORT_STATUS_DOWN:
            agent_host = context.host
            fdb_entries = self._get_agent_fdb(
                context, port, agent_host)
            self.L2populationAgentNotify.remove_fdb_entries(
                self.rpc_ctx, fdb_entries)
    elif (context.host != context.original_host
          and context.status == const.PORT_STATUS_ACTIVE
          and not self.migrated_ports.get(orig['id'])):
        # The port has been migrated. We have to store the original
        # binding to send appropriate fdb once the port will be set
        # on the destination host
        self.migrated_ports[orig['id']] = (
            (orig, context.original_host))

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493341

Title:
  l2 pop failed if live-migrate a VM with multiple neutron-server
  workers

Status in neutron:
  New

Bug description:
  Now, if we run neutron-server with 2 or more workers, or two neutron-server
  nodes behind a load balancer, live-migrating a VM can cause l2 pop to fail
  (not always). The reason is:
  1. when nova finishes live-migrating a VM, it updates the port's host id to
     the destination host
  2. one neutron-server worker receives this request and does l2 pop; it sees
     the port's host id has changed while the status is still ACTIVE, so it
     records this port in its own in-process memory (migrated_ports)
  3. when the l2 agent scans this port and updates its status from
     ACTIVE->BUILD->ACTIVE, another neutron-server worker may receive that RPC
     request, and l2 pop then fails for this port because that worker has no
     record of the migration

  
  def update_port_postcommit(self, context):
      ...
      if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
          if context.status == const.PORT_STATUS_ACTIVE:
              self._update_port_up(context)
          if context.status == const.PORT_STATUS_DOWN:
              agent_host = context.host
              fdb_entries = self._get_agent_fdb(
                  context, port, agent_host)
              self.L2populationAgentNotify.remove_fdb_entries(
                  self.rpc_ctx, fdb_entries)
      elif (context.host != context.original_host
            and context.status == const.PORT_STATUS_ACTIVE
            and not self.migrated_ports.get(orig['id'])):
          # The port has been migrated. We have to store the original
          # binding to send appropriate fdb once the port will be set
          # on the destination host
          self.migrated_ports[orig['id']] = (
              (orig, context.original_host))

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487978] [NEW] Performance of L2 population

2015-08-23 Thread shihanzhang
Public bug reported:

when a compute node restarts, all ports on this host will trigger l2 pop
again, even if these ports were not changed; if there are many compute
nodes restarting at the same time, the l2 pop load becomes very
heavy. I think that if the l2 agent restarts, it should not trigger
l2 pop for ports that already existed.
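
For illustration only, the kind of split I mean on the agent side (the names below are
invented): ports that were already wired before the restart would be reported separately,
so only genuinely new ports trigger l2 pop on the server.

    def split_ports_after_restart(current_ports, ports_known_before_restart):
        # both arguments are sets of port ids
        unchanged_ports = current_ports & ports_known_before_restart
        new_ports = current_ports - ports_known_before_restart
        # only new_ports should be reported in a way that triggers l2 pop
        return unchanged_ports, new_ports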

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487978

Title:
  Performance of L2 population

Status in neutron:
  New

Bug description:
  when a compute node restarts, all ports on this host will trigger l2
  pop again, even if these ports were not changed; if there are many
  compute nodes restarting at the same time, the l2 pop load becomes very
  heavy. I think that if the l2 agent restarts, it should not trigger
  l2 pop for ports that already existed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483958] [NEW] the router initial status should not be ACTIVE

2015-08-11 Thread shihanzhang
Public bug reported:

Now when we create a router, its initial status is ACTIVE, but I think its
initial status should not be 'ACTIVE' before this router is bound to an l3
agent. I think it is better to change its initial status to
'PENDING_CREATE', as FWaaS and VPNaaS do.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483958

Title:
  the router initial status should not be ACTIVE

Status in neutron:
  New

Bug description:
  Now when we create a router, its initial status is ACTIVE, but I think its
  initial status should not be 'ACTIVE' before this router is bound to an
  l3 agent. I think it is better to change its initial status to
  'PENDING_CREATE', as FWaaS and VPNaaS do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483601] [NEW] l2 population failed when bulk live migrate VMs

2015-08-11 Thread shihanzhang
Public bug reported:

when we bulk live migrate VMs, l2 population may possibly (not always) 
fail on the destination compute nodes, because when nova migrates a VM to the 
destination compute node it just updates the port's binding:host while the port's 
status is still active; from the neutron perspective, the port status progression 
is: active -> build -> active.
In the case below, l2 population will fail:
1. nova successfully live migrates VM A and VM B from compute A to compute B.
2. port A and port B status are active, binding:host is compute B.
3. the l2 agent scans these two ports, then handles them one by one.
4. neutron-server first handles port A, so its status becomes build (remember port 
B status is still active), and then the check below
in the l2 population code fails

def _update_port_up(self, context):
..
  if agent_active_ports == 1 or (self.get_agent_uptime(agent) < 
cfg.CONF.l2pop.agent_boot_time):
  # First port activated on current agent in this network,
  # we have to provide it with the whole list of fdb entries

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  when we bulk live migrate VMs, the l2 population may possiblly(not always) 
failed at destination compute nodes,
  because when nova migrate VM at destination compute node, it just update 
port's binding:host,  the port's status
- is still active, from neutron perspective, the progress of port status is : 
active -> build -> active,  
+ is still active, from neutron perspective, the progress of port status is : 
active -> build -> active,
  in bellow case, l2 population  will fail:
  1. nova successfully live migrate vm A and VM B from compute A to compute B.
  2. port A and port B status are active,  binding:host are compute B .
  3. l2 agent scans these two port, then handle them one by one.
- 4. neutron-server firstly handle port A, its status will be build(remember 
port B status is still active), and do bellow check 
+ 4. neutron-server firstly handle port A, its status will be build(remember 
port B status is still active), and do bellow check
  in l2 population check,  this check will be fail
  
- def _update_port_up(self, context):
- ..
- if agent_active_ports == 1 or (
- self.get_agent_uptime(agent) < 
cfg.CONF.l2pop.agent_boot_time):
-# First port activated on current agent in this network,
-# we have to provide it with the whole list of fdb entries
+ def _update_port_up(self, context):
+ ..
+   if agent_active_ports == 1 or (self.get_agent_uptime(agent) < 
cfg.CONF.l2pop.agent_boot_time):
+   # First port activated on current agent in this network,
+   # we have to provide it with the whole list of fdb entries

** Description changed:

- when we bulk live migrate VMs, the l2 population may possiblly(not always) 
failed at destination compute nodes,
- because when nova migrate VM at destination compute node, it just update 
port's binding:host,  the port's status
- is still active, from neutron perspective, the progress of port status is : 
active -> build -> active,
+ when we bulk live migrate VMs, the l2 population may possiblly(not always) 
failed at destination compute nodes, because when nova migrate VM at 
destination compute node, it just update port's binding:host,  the port's 
status is still active, from neutron perspective, the progress of port status 
is : active -> build -> active,
  in bellow case, l2 population  will fail:
  1. nova successfully live migrate vm A and VM B from compute A to compute B.
  2. port A and port B status are active,  binding:host are compute B .
  3. l2 agent scans these two port, then handle them one by one.
  4. neutron-server firstly handle port A, its status will be build(remember 
port B status is still active), and do bellow check
  in l2 population check,  this check will be fail
  
  def _update_port_up(self, context):
  ..
    if agent_active_ports == 1 or (self.get_agent_uptime(agent) < 
cfg.CONF.l2pop.agent_boot_time):
    # First port activated on current agent in this network,
    # we have to provide it with the whole list of fdb entries

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483601

Title:
  l2 population failed when bulk live migrate VMs

Status in neutron:
  New

Bug description:
  when we bulk live migrate VMs, l2 population may possibly (not always) 
fail on the destination compute nodes, because when nova migrates a VM to the 
destination compute node it just updates the port's binding:host while the port's 
status is still active; from the neutron perspective, the port status progression 
is: active -> build -> active.
  In the case below, l2 population will fail:
  1. nova successfully live migrates VM A and VM B from compute A to compute B.
  2. port A and port B status are active, binding:host is compute B.
  

[Yahoo-eng-team] [Bug 1481231] [NEW] ML2 plugin does not support query with marker 'network_type, physical_network, segmentation_id'

2015-08-04 Thread shihanzhang
Public bug reported:

Now the Ml2 plugin does not support query with marker 'network_type,
physical_network, segmentation_id', but sometimes users need to query
networks by these attributes.
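
For example, the kind of query a user would like to issue (shown here with
python-neutronclient; the credentials and filter values are only examples):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='...',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    nets = neutron.list_networks(
        **{'provider:network_type': 'vlan',
           'provider:segmentation_id': 100})['networks']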

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Description changed:

  Now the Ml2 plugin does not support query with marker 'network_type,
  physical_network, segmentation_id', but sometimes user need query
  networks with these attributes.
  
- def get_networks(self, context, filters=None, fields=None,
-  sorts=None, limit=None, marker=None, page_reverse=False):
- session = context.session
- with session.begin(subtransactions=True):
- nets = super(Ml2Plugin,
-  self).get_networks(context, filters, None, sorts,
- limit, marker, page_reverse)
- for net in nets:
- self.type_manager.extend_network_dict_provider(context, net)
+ def get_networks(self, context, filters=None, fields=None,
+  sorts=None, limit=None, marker=None, page_reverse=False):
+ session = context.session
+ with session.begin(subtransactions=True):
+ nets = super(Ml2Plugin,
+  self).get_networks(context, filters, None, sorts,
+ limit, marker, page_reverse)
+ for net in nets:
+ self.type_manager.extend_network_dict_provider(context, net)
  
- nets = self._filter_nets_provider(context, nets, filters)
+ nets = self._filter_nets_provider(context, nets, filters)
  
- return [self._fields(net, fields) for net in nets]
+ return [self._fields(net, fields) for net in nets]

** Description changed:

  Now the Ml2 plugin does not support query with marker 'network_type,
  physical_network, segmentation_id', but sometimes user need query
  networks with these attributes.
- 
- def get_networks(self, context, filters=None, fields=None,
-  sorts=None, limit=None, marker=None, page_reverse=False):
- session = context.session
- with session.begin(subtransactions=True):
- nets = super(Ml2Plugin,
-  self).get_networks(context, filters, None, sorts,
- limit, marker, page_reverse)
- for net in nets:
- self.type_manager.extend_network_dict_provider(context, net)
- 
- nets = self._filter_nets_provider(context, nets, filters)
- 
- return [self._fields(net, fields) for net in nets]

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481231

Title:
  ML2 plugin does not support query with marker 'network_type,
  physical_network, segmentation_id'

Status in neutron:
  New

Bug description:
  Now the Ml2 plugin does not support query with marker 'network_type,
  physical_network, segmentation_id', but sometimes users need to query
  networks by these attributes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476469] [NEW] with DVR, a VM can't use floatingIP and VPN at the same time

2015-07-20 Thread shihanzhang
Public bug reported:

Now VPN Service is available for Distributed Routers by patch 
#https://review.openstack.org/#/c/143203/, 
but there is another problem,  with DVR, a VM can't use floatingIP and VPN at 
the same time.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476469

Title:
  with DVR, a VM can't use floatingIP and VPN at the same time

Status in neutron:
  New

Bug description:
  Now VPN Service is available for Distributed Routers by patch 
#https://review.openstack.org/#/c/143203/, 
  but there is another problem,  with DVR, a VM can't use floatingIP and VPN at 
the same time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476145] [NEW] the port for floating IP should not include IPv6 address

2015-07-20 Thread shihanzhang
Public bug reported:

Now if we create a floating IP, neutron will create an internal port for this 
floating IP which is used purely for internal system and admin use when 
managing floating IPs, but if an external network has both an IPv4 subnet and an IPv6 
subnet, then the port for the floating IP will
have two IPs, one IPv4 and one IPv6.
reproduce steps:
1. create an external network
2. create an IPv4 subnet and an IPv6 subnet for this network
3. create a floating IP without the parameter '--floating-ip-address'

you will find that the port for this floating IP has two IPs

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476145

Title:
  the port for floating IP should not include IPv6 address

Status in neutron:
  New

Bug description:
  Now if we create a floating IP, neutron will create an internal port for this 
floating IP which is used purely for internal system and admin use when 
managing floating IPs, but if an external network has both an IPv4 subnet and an IPv6 
subnet, then the port for the floating IP will
  have two IPs, one IPv4 and one IPv6.
  reproduce steps:
  1. create an external network
  2. create an IPv4 subnet and an IPv6 subnet for this network
  3. create a floating IP without the parameter '--floating-ip-address'

  you will find that the port for this floating IP has two IPs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473965] Re: the port of security group rule for TCP or UDP should not be 0

2015-07-13 Thread shihanzhang
** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473965

Title:
  the port of security group rule for TCP or UDP should not be 0

Status in neutron:
  New

Bug description:
  For the TCP or UDP protocol, 0 is a reserved port, but for a neutron
  security group rule with protocol TCP, if its port-range-min is 0, the
  port-range-max becomes meaningless, because a port-range-min of 0 means
  that all packets are allowed to pass. So I think it should not be
  possible to create a rule with port-range-min 0; if a user wants to
  allow all TCP/UDP packets to pass, he can create a security group rule
  with port-range-min and port-range-max being None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473965] [NEW] the port of security group rule for TCP or UDP should not be 0

2015-07-13 Thread shihanzhang
Public bug reported:

For the TCP or UDP protocol, 0 is a reserved port, but for a neutron security
group rule with protocol TCP, if its port-range-min is 0,
the port-range-max becomes meaningless, because a port-range-min of 0
means that all packets are allowed to pass. So I think it should not be possible
to create a rule with port-range-min 0; if a user wants to allow all TCP/UDP
packets to pass, he can create a security group rule with port-range-min
and port-range-max being None.
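
A rough sketch of the validation I am proposing (not the existing neutron code):

    def validate_sg_rule_port_range(protocol, port_range_min, port_range_max):
        if protocol in ('tcp', 'udp'):
            if port_range_min == 0 or port_range_max == 0:
                raise ValueError(
                    "port 0 is reserved for TCP/UDP; leave port_range_min "
                    "and port_range_max unset (None) to allow all ports")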

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473965

Title:
  the port of security group rule for TCP or UDP should not be 0

Status in neutron:
  New

Bug description:
  For the TCP or UDP protocol, 0 is a reserved port, but for a neutron
  security group rule with protocol TCP, if its port-range-min is 0, the
  port-range-max becomes meaningless, because a port-range-min of 0 means
  that all packets are allowed to pass. So I think it should not be
  possible to create a rule with port-range-min 0; if a user wants to
  allow all TCP/UDP packets to pass, he can create a security group rule
  with port-range-min and port-range-max being None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472452] [NEW] arp spoofing protection flow install failed

2015-07-07 Thread shihanzhang
Public bug reported:

Now the ovs-agent fails to install the arp spoofing protection flow for new VMs: 
it first installs the arp spoofing protection flow in the function 
'treat_devices_added_or_updated':
def treat_devices_added_or_updated(self, devices, ovs_restarted):
.
.

if self.prevent_arp_spoofing:
   self.setup_arp_spoofing_protection(self.int_br, port, details)

but then, in the function '_bind_devices', it clears all flows for this
new port, so the arp spoofing protection flow is also cleared

def _bind_devices(self, need_binding_ports):
.

if cur_tag != lvm.vlan:
self.int_br.set_db_attribute(
"Port", port.port_name, "tag", lvm.vlan)
if port.ofport != -1:
# NOTE(yamamoto): Remove possible drop_port flow
# installed by port_dead.
self.int_br.delete_flows(in_port=port.ofport)
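
One possible ordering fix, sketched only (the surrounding loop and the availability of the
port details dict are assumed here): re-install the protection flows right after
_bind_devices wipes the port's flows.

    # inside _bind_devices, after the flows of the port have been deleted;
    # port_detail is assumed to be the port details dict for this port
    if port.ofport != -1:
        self.int_br.delete_flows(in_port=port.ofport)
        if self.prevent_arp_spoofing:
            # put the anti-spoofing flows back after the wipe
            self.setup_arp_spoofing_protection(self.int_br, port, port_detail)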

** Affects: neutron
     Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
     Assignee: (unassigned) => shihanzhang (shihanzhang)

** Description changed:

  Now ovs-agent failed to install arp spoofing protection flow for new VMs, 
because it will firstly install arp spoofing protection flow in funstion 
'treat_devices_added_or_updated':
- def treat_devices_added_or_updated(self, devices, ovs_restarted):
- .
- .
- if 'port_id' in details:
- LOG.info(_LI("Port %(device)s updated. Details: %(details)s"),
-  {'device': device, 'details': details})
- need_binding = self.treat_vif_port(port, details['port_id'],
-details['network_id'],
-details['network_type'],
-
details['physical_network'],
-details['segmentation_id'],
-details['admin_state_up'],
-details['fixed_ips'],
-details['device_owner'],
-ovs_restarted)
- if self.prevent_arp_spoofing:
- self.setup_arp_spoofing_protection(self.int_br,
-port, details)
+ def treat_devices_added_or_updated(self, devices, ovs_restarted):
+ .
+ .
+ if 'port_id' in details:
+ if self.prevent_arp_spoofing:
+ self.setup_arp_spoofing_protection(self.int_br,
+
port, details)
  
  but then in function '_bind_devices', it will clear all flows for this
  new port, so the arp spoofing protection flow is also be clean
  
- def _bind_devices(self, need_binding_ports):
- .
- 
- if cur_tag != lvm.vlan:
- self.int_br.set_db_attribute(
- "Port", port.port_name, "tag", lvm.vlan)
- if port.ofport != -1:
- # NOTE(yamamoto): Remove possible drop_port flow
- # installed by port_dead.
- self.int_br.delete_flows(in_port=port.ofport)
+ def _bind_devices(self, need_binding_ports):
+ .
+ 
+ if cur_tag != lvm.vlan:
+ self.int_br.set_db_attribute(
+ "Port", port.port_name, "tag", lvm.vlan)
+ if port.ofport != -1:
+ # NOTE(yamamoto): Remove possible drop_port flow
+ # installed by port_dead.
+ self.int_br.delete_flows(in_port=port.ofport)

** Description changed:

  Now ovs-agent failed to install arp spoofing protection flow for new VMs, 
because it will firstly install arp spoofing protection flow in funstion 
'treat_devices_added_or_updated':
  def treat_devices_added_or_updated(self, devices, ovs_restarted):
  .
  .
- if 'port_id' in details:
- if self.prevent_arp_spoofing:
- self.setup_arp_spoofing_protection(self.int_br,
-
port, details)
+ 
+ if self.prevent_arp_spoofing:
+    se

[Yahoo-eng-team] [Bug 1470441] Re: a port has no security group but its 'port_security_enabled' is True

2015-07-01 Thread shihanzhang
hi kevin, I forgot that it has spoofing rules, so this is not a bug, and
I will change it to invalid!

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470441

Title:
  a port has no security group but its 'port_security_enabled' is True

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  with ml2 extension_drivers port_security, if a port's attribute
  port_security_enabled is True, this port must belong to a
  security_group, but in the case below, a port's 'port_security_enabled' is
  True while its 'security_groups' is empty,

  reproduce steps:
  1. create a port with 'port_security_enabled' False
  2. update this port's 'port_security_enabled' to True

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470441] [NEW] a port has no security group but its 'port_security_enabled' is True

2015-07-01 Thread shihanzhang
Public bug reported:

with ml2 extension_drivers port_security, if a port's attribute
port_security_enabled is True, this port must belong to a
security_group, but in the case below, a port's 'port_security_enabled' is
True while its 'security_groups' is empty,

reproduce steps:
1. create a port with 'port_security_enabled' False
2. update this port's 'port_security_enabled' to True

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470441

Title:
  a port has no security group but its 'port_security_enabled' is True

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  with ml2 extension_drivers port_security, if a port's attribute
  port_security_enabled is True, this port must belong to a
  security_group, but in the case below, a port's 'port_security_enabled' is
  True while its 'security_groups' is empty,

  reproduce steps:
  1. create a port with 'port_security_enabled' False
  2. update this port's 'port_security_enabled' to True

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469615] [NEW] dhcp service is unavailable if we delete dhcp port

2015-06-29 Thread shihanzhang
Public bug reported:

if we delete the dhcp port, the dhcp service for the corresponding network
becomes unavailable, because the dhcp port is deleted from neutron-server, but
the TAP device on the network node is not deleted, the tag for this TAP
is the dead vlan 4095, and the dhcp service can't be recovered.

reproduce steps:
1. create a network and a subnet
2. delete the dhcp port in this network

I found the TAP device on the network node was not deleted, but its tag is
4095

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469615

Title:
  dhcp service is unavailable if we delete dhcp port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  if we delete the dhcp port, the dhcp service for the corresponding
  network becomes unavailable, because the dhcp port is deleted from neutron-
  server, but the TAP device on the network node is not deleted, the tag
  for this TAP is the dead vlan 4095, and the dhcp service can't be
  recovered.

  reproduce steps:
  1. create a network and a subnet
  2. delete the dhcp port in this network

  I found the TAP device on the network node was not deleted, but its tag is
  4095

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468236] [NEW] enable neutron support distributed DHCP agents

2015-06-24 Thread shihanzhang
Public bug reported:

Now in large-scale scenarios the neutron dhcp-agent can't work well; it would be 
better to enable neutron to support distributed DHCP agents across compute nodes, 
which will offer better scalability and limit the failure domain of the IPAM 
service.
There is already a registered BP 
https://blueprints.launchpad.net/neutron/+spec/distributed-dhcp; the author 
Mike Kolesnik has agreed that I take it over, so I would like to do it through the 
RFE process in the Liberty cycle.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468236

Title:
  enable neutron support distributed DHCP agents

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now in large-scale scenarios the neutron dhcp-agent can't work well; it would be 
better to enable neutron to support distributed DHCP agents across compute nodes, 
which will offer better scalability and limit the failure domain of the IPAM 
service.
  There is already a registered BP 
https://blueprints.launchpad.net/neutron/+spec/distributed-dhcp; the author 
Mike Kolesnik has agreed that I take it over, so I would like to do it through the 
RFE process in the Liberty cycle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467728] [NEW] Do not check neutron port quota in API layer

2015-06-22 Thread shihanzhang
Public bug reported:

Now the neutron API does not provide a reservation mechanism, so if a tenant has a 
large number of ports, the 'list_ports' call in the function validate_networks 
will be very expensive; port creation also depends in some cases on 
mac addresses only available on the compute manager, so I think it is better to 
remove this check from the function validate_networks:

def validate_networks(self, context, requested_networks, num_instances):
...
neutron = get_client(context)
ports_needed_per_instance = self._ports_needed_per_instance(
context, neutron, requested_networks)
if ports_needed_per_instance:
ports = neutron.list_ports(tenant_id=context.project_id)['ports']
quotas = neutron.show_quota(tenant_id=context.project_id)['quota']
if quotas.get('port', -1) == -1:
# Unlimited Port Quota
return num_instances
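
If the check is kept at all, a smaller mitigation (sketch only, relying on the standard
'fields' filtering of the neutron API) would be to fetch only port ids instead of full
port dicts when counting against the quota:

    # ask only for the 'id' field to keep the response small
    ports = neutron.list_ports(tenant_id=context.project_id,
                               fields=['id'])['ports']
    free_ports = quotas.get('port') - len(ports)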

** Affects: nova
     Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
     Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467728

Title:
  Do not check neutron port quota in API layer

Status in OpenStack Compute (Nova):
  New

Bug description:
  Now the neutron API does not provide a reservation mechanism, so if a tenant has a 
large number of ports, the 'list_ports' call in the function validate_networks 
  will be very expensive; port creation also depends in some cases on mac addresses 
only available on the compute manager, so I think it is better 
to remove this check from the function validate_networks:

  def validate_networks(self, context, requested_networks, num_instances):
  ...
  neutron = get_client(context)
  ports_needed_per_instance = self._ports_needed_per_instance(
  context, neutron, requested_networks)
  if ports_needed_per_instance:
  ports = neutron.list_ports(tenant_id=context.project_id)['ports']
  quotas = neutron.show_quota(tenant_id=context.project_id)['quota']
  if quotas.get('port', -1) == -1:
  # Unlimited Port Quota
  return num_instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464527] [NEW] VM can't communicate with others in DVR

2015-06-12 Thread shihanzhang
Public bug reported:

In an openstack deployment with multiple neutron-servers behind a haproxy, in the use 
case below, a VM can't communicate with others through DVR.
reproduce steps:
1. create a subnet and add this subnet to a DVR router
2. bulk create two VMs in this subnet on a specific compute node (this node does not 
already host any VMs in this subnet)
3. while one VM's port status is ACTIVE but the other VM's port status is still BUILD, 
delete the VM whose port status is ACTIVE
Then I can't find the namespace of this DVR router. The reason is that during 
'delete_port' it checks the status of all the ports on this host and 
subnet using 'check_ports_active_on_host_and_subnet', but it only counts ports in ACTIVE 
status, and sometimes a VM's port status is still BUILD

 def check_ports_active_on_host_and_subnet(self, context, host,port_id, 
subnet_id):
 """Check if there is any dvr serviceable port on the subnet_id."""
 filter_sub = {'fixed_ips': {'subnet_id': [subnet_id]}}
 ports = self._core_plugin.get_ports(context, filters=filter_sub)
 for port in ports:
 if (n_utils.is_dvr_serviced(port['device_owner'])
 and port['status'] == 'ACTIVE'
 and port['binding:host_id'] == host
 and port['id'] != port_id):
 LOG.debug('DVR: Active port exists for subnet %(subnet_id)s '
   'on host %(host)s', {'subnet_id': subnet_id,
        'host': host})
 return True
 return False
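
What I expect is that ports which are still being wired (status BUILD) should also keep
the router namespace alive; roughly like this (sketch only, not the actual upstream fix):

    # count ports that are either fully active or still being wired up
    if (n_utils.is_dvr_serviced(port['device_owner'])
            and port['status'] in ('ACTIVE', 'BUILD')
            and port['binding:host_id'] == host
            and port['id'] != port_id):
        return True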

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Description changed:

  In one openstack with multiple neutron-servers behind a haproxy, in bellow 
use case, VM can't communicate with others in DVR,
  reproduce steps:
  1. create a subnet and add this subnet to a DVR
  2. bulk create two VMs with a specail compute node(this node does not include 
any VMs in this subnet) in this subnet
  3. when a VM port status is ACTIVE, but another VM port status is BUILD, 
delete the VM which port status is ACTIVE
  then I can't find the namespace of this DVR router, the reason is that when 
it 'delete_port', it will check the the all the ports status in this host and 
subnet using 'check_ports_active_on_host_and_subnet', but it check port ACTIVE 
status, sometimes a VM's port status will be BUILD
  
- def check_ports_active_on_host_and_subnet(self, context, host,
-  port_id, subnet_id):
- """Check if there is any dvr serviceable port on the subnet_id."""
- filter_sub = {'fixed_ips': {'subnet_id': [subnet_id]}}
- ports = self._core_plugin.get_ports(context, filters=filter_sub)
- for port in ports:
- if (n_utils.is_dvr_serviced(port['device_owner'])
- and port['status'] == 'ACTIVE'
- and port['binding:host_id'] == host
- and port['id'] != port_id):
- LOG.debug('DVR: Active port exists for subnet %(subnet_id)s '
-   'on host %(host)s', {'subnet_id': subnet_id,
-'host': host})
- return True
- return False
+ def check_ports_active_on_host_and_subnet(self, context, host,port_id, 
subnet_id):
+ """Check if there is any dvr serviceable port on the subnet_id."""
+ filter_sub = {'fixed_ips': {'subnet_id': [subnet_id]}}
+ ports = self._core_plugin.get_ports(context, filters=filter_sub)
+ for port in ports:
+ if (n_utils.is_dvr_serviced(port['device_owner'])
+ and port['status'] == 'ACTIVE'
+ and port['binding:host_id'] == host
+ and port['id'] != port_id):
+ LOG.debug('DVR: Active port exists for subnet %(subnet_id)s '
+   'on host %(host)s', {'subnet_id': subnet_id,
+    'host': host})
+ return True
+ return False

** Description changed:

  In one openstack with multiple neutron-servers behind a haproxy, in bellow 
use case, VM can't communicate with others in DVR,
  reproduce steps:
  1. create a subnet and add this subnet to a DVR
  2. bulk create two VMs with a specail compute node(this node does not include 
any VMs in this subnet) in this subnet
  3. when a VM port status is ACTIVE, but another VM port status is BUILD, 
delete the VM which port status is ACTIVE
  then I can't find 

[Yahoo-eng-team] [Bug 1464116] [NEW] 'network_id' and 'cidr' should be unique in table 'Subnet'

2015-06-10 Thread shihanzhang
Public bug reported:

'network_id' and 'cidr' should be unique in the table 'Subnet', so a unique
constraint should be added!
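
Roughly, as an alembic migration it could look like this (the constraint name is made up;
'subnets' is the table behind the Subnet model):

    from alembic import op

    def upgrade():
        op.create_unique_constraint(
            'uniq_subnets0network_id0cidr',
            'subnets',
            ['network_id', 'cidr'])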

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464116

Title:
  'network_id' and 'cidr' should be unique in table 'Subnet'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  'network_id' and 'cidr' should be unique in the table 'Subnet', so a unique
  constraint should be added!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463375] [NEW] Use fanout RPC message to notify the security group's change

2015-06-09 Thread shihanzhang
Public bug reported:

when a security group's members or rules change, if neutron just notifies the l2 agents 
with 'security_groups_member_updated' or 'security_groups_rule_updated', all the 
related l2 agents need to get the security group details through RPC from
neutron-server; when the number of l2 agents is large, the load on 
neutron-server is heavy.
We can instead use a fanout RPC message carrying the changed sg details to notify the l2 
agents; then the l2 agents which have the related devices update the sg information 
in their memory and do not need to get the sg details through RPC.
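
A rough sketch of what I mean, using oslo.messaging directly (the topic and method name
here are illustrative, not the existing neutron RPC interface):

    import oslo_messaging

    def notify_sg_details_updated(transport, context, sg_details):
        target = oslo_messaging.Target(
            topic='q-agent-notifier-security_group-update')
        client = oslo_messaging.RPCClient(transport, target)
        # the fanout cast carries the changed details, so agents do not have
        # to call back into neutron-server to fetch them
        client.prepare(fanout=True).cast(
            context, 'security_groups_details_updated',
            security_groups=sg_details)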

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463375

Title:
  Use fanout RPC message to notify the security group's change

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  when a security group's members or rules change, if neutron just notifies the l2 
agents with 'security_groups_member_updated' or 'security_groups_rule_updated', all the 
related l2 agents need to get the security group details through RPC from
  neutron-server; when the number of l2 agents is large, the load on 
neutron-server is heavy.
  We can instead use a fanout RPC message carrying the changed sg details to notify the l2 
agents; then the l2 agents which have the related devices update the sg information 
in their memory and do not need to get the sg details through RPC.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463331] [NEW] ipset set can't be destroyed if related security group member is empty

2015-06-09 Thread shihanzhang
Public bug reported:

if a security group A has a rule that allows security group B access, and the
member list of security group B is empty, then when I delete the rule which
allows security group B access, I find that the ipset set on the compute node
is not destroyed.

reproduce steps:
1. create security groups A and B
2. create a rule for A that allows security group B access
3. create a VM in security group A
4. delete the rule which allows security group B access

I find the ipset set on the compute node is not destroyed.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463331

Title:
  ipset set can't be destroyed if  related security group member is
  empty

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  if a security group A has a rule that allows security group B access,
  and the member list of security group B is empty, then when I delete the rule
  which allows security group B access, I find that the ipset set on the
  compute node is not destroyed.

  reproduce steps:
  1. create security groups A and B
  2. create a rule for A that allows security group B access
  3. create a VM in security group A
  4. delete the rule which allows security group B access

  I find the ipset set on the compute node is not destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460562] [NEW] ipset can't be destroyed when last sg rule is deleted

2015-06-01 Thread shihanzhang
Public bug reported:

reproduce steps:
1. VM A is in the default security group
2. the default security group has rules: 1. allow all traffic out; 2. allow itself 
as remote_group in
3. first delete rule 1, then delete rule 2

I found that the iptables on the compute node where VM A resides were not reloaded,
and the relevant ipset was not destroyed.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New


** Tags: ipset

** Tags added: ipset

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460562

Title:
  ipset can't be destroyed when last sg rule is deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  reproduce steps:
  1. VM A is in the default security group
  2. the default security group has rules: 1. allow all traffic out; 2. allow 
itself as remote_group in
  3. first delete rule 1, then delete rule 2

  I found that the iptables on the compute node where VM A resides were not
  reloaded, and the relevant ipset was not destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458786] [NEW] Update port security group, relevant ipset member can't be updated

2015-05-26 Thread shihanzhang
Public bug reported:

reproduce step:

1.  VM1 in security group A
2.  VM2 in security group B
3.  security group B can access security group A
4.  update VM1 to security group C

I found that VM1's ip address was still in the ipset members which belong to
security group A, even though VM1 was already in security group C

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458786

Title:
  Update port security group, relevant ipset member can't be updated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  reproduce step:

  1.  VM1 in security group A
  2.  VM2 in security group B
  3.  security group B can access security group A
  4.  update VM1 to security group C

  I found that VM1's ip address was still in the ipset members which belong
  to security group A, even though VM1 was already in security group C

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458709] [NEW] ovs agent can't startup in some abnormal condition

2015-05-25 Thread shihanzhang
Public bug reported:

when the ovs agent restarts, it restores the local vlan map in the function
'_restore_local_vlan_map':

def _restore_local_vlan_map(self):
cur_ports = self.int_br.get_vif_ports()
for port in cur_ports:
local_vlan_map = self.int_br.db_get_val("Port", port.port_name,
"other_config")
local_vlan = self.int_br.db_get_val("Port", port.port_name, "tag")
net_uuid = local_vlan_map.get('net_uuid')
if (net_uuid and net_uuid not in self.local_vlan_map
and local_vlan != DEAD_VLAN_TAG):
self.provision_local_vlan(local_vlan_map['net_uuid'],
 local_vlan_map['network_type'],
   local_vlan_map['physical_network'],
   local_vlan_map['segmentation_id'],
   local_vlan)

in some abnormal conditions, if a device's tag is not set in ovsdb,
'self.int_br.db_get_val("Port", port.port_name, "tag")' will return
an empty list, so the function 'provision_local_vlan' will raise an
exception:

def provision_local_vlan(self, net_uuid, network_type, physical_network,
 segmentation_id,local_vlan=None:
lvm = self.local_vlan_map.get(net_uuid)
if lvm:
lvid = lvm.vlan
else:
if local_vlan in self.available_local_vlans:
lvid = local_vlan
self.available_local_vlans.remove(local_vlan)

the line 'if local_vlan in self.available_local_vlans' will raise an
exception when local_vlan is an empty list
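
A small defensive guard in _restore_local_vlan_map would avoid the crash (sketch only):
skip ports whose tag has not been assigned yet.

    # inside the loop over cur_ports shown above
    local_vlan = self.int_br.db_get_val("Port", port.port_name, "tag")
    if not isinstance(local_vlan, int):
        # ovsdb returned an empty value; the port has no tag yet, so leave it
        # for normal provisioning instead of restoring a bogus local vlan
        continue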

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Description changed:

  when ovs agent restart, it will restore the local vlan map in function
  '_restore_local_vlan_map',
  
- def _restore_local_vlan_map(self):
- cur_ports = self.int_br.get_vif_ports()
- for port in cur_ports:
- local_vlan_map = self.int_br.db_get_val("Port", port.port_name,
- "other_config")
- local_vlan = self.int_br.db_get_val("Port", port.port_name, "tag")
- net_uuid = local_vlan_map.get('net_uuid')
- if (net_uuid and net_uuid not in self.local_vlan_map
- and local_vlan != DEAD_VLAN_TAG):
- self.provision_local_vlan(local_vlan_map['net_uuid'],
-   local_vlan_map['network_type'],
-   local_vlan_map['physical_network'],
-   local_vlan_map['segmentation_id'],
-   local_vlan)
- in some abnormal condition, if a device does not be set tag in ovsdb, the 
'self.int_br.db_get_val("Port", port.port_name, "tag")' will return a empty 
list, so in function 'provision_local_vlan' will raise exception:
+ def _restore_local_vlan_map(self):
+ cur_ports = self.int_br.get_vif_ports()
+ for port in cur_ports:
+ local_vlan_map = self.int_br.db_get_val("Port", port.port_name,
+ "other_config")
+ local_vlan = self.int_br.db_get_val("Port", port.port_name, "tag")
+ net_uuid = local_vlan_map.get('net_uuid')
+ if (net_uuid and net_uuid not in self.local_vlan_map
+ and local_vlan != DEAD_VLAN_TAG):
+ self.provision_local_vlan(local_vlan_map['net_uuid'],
+  local_vlan_map['network_type'],
+    local_vlan_map['physical_network'],
+    local_vlan_map['segmentation_id'],
+    local_vlan)
  
- def provision_local_vlan(self, net_uuid, network_type, physical_network,
-  segmentation_id, local_vlan=None):
- lvm = self.local_vlan_map.get(net_uuid)
- if lvm:
- lvid = lvm.vlan
- else:
- if local_vlan in self.available_local_vlans:
- lvid = local_vlan
- self.available_local_vlans.remove(local_vlan)
- this line will raise exception 'if local_vlan in self.available_local_vlans'
+ in some abnormal condition, if a device does not be set tag in ovsdb,
+ the 'self.int_br.db_get_val("Port", port.port_name, "tag")' will return
+ a empty list, so in function 'provision_local_vlan' will raise
+ exception:
+ 
+ def provision_local

[Yahoo-eng-team] [Bug 1453715] [NEW] ml2 plugin can't update port 'binding:host_id' be None

2015-05-11 Thread shihanzhang
Public bug reported:

Now with the neutron ml2 plugin, if we want to update a port's 'binding:host_id'
to None, we must set 'binding:host_id' to the empty
string (binding:host_id=''); this causes a problem when nova deletes a
VM: https://bugs.launchpad.net/nova/+bug/1441419

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453715

Title:
  ml2 plugin can't update port 'binding:host_id' be None

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now with the neutron ml2 plugin, if we want to update a port's
  'binding:host_id' to None, we must set 'binding:host_id' to the empty
  string (binding:host_id=''); this causes a problem when nova deletes a
  VM: https://bugs.launchpad.net/nova/+bug/1441419

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452718] [NEW] Create sg rule or delete sg rule, iptables can't be reloaded

2015-05-07 Thread shihanzhang
Public bug reported:

when we create a new sg rule or delete an sg rule, the iptables can't be reloaded 
on the compute node; this bug was introduced by patch: 
https://review.openstack.org/118274
I have found the reason; I will fix it tomorrow morning!

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452718

Title:
  Create sg rule or delete sg rule, iptables can't be reloaded

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  when we create a new sg rule or delete an sg rule, the iptables can't be 
reloaded on the compute node; this bug was introduced by patch: 
https://review.openstack.org/118274
  I have found the reason; I will fix it tomorrow morning!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452718/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448022] [NEW] update port IP, ipset member can't be updated in another host

2015-04-24 Thread shihanzhang
Public bug reported:

reproduce:
1. vm A in compute A with ip:192.168.83.2, vm B in compute B with 
ip:192.168.83.3
2. update vm A port ip to 192.168.83.4
3. the ipset in compute B can't be updated

the reason is that this patch: https://review.openstack.org/58415 removed
the 'notify_security_groups_member_updated' notification from the method
'is_security_group_member_updated'

def is_security_group_member_updated(self, context,
 original_port, updated_port):
"""Check security group member updated or not.

This method returns a flag which indicates request notification
is required and does not perform notification itself.
It is because another changes for the port may require notification.
"""
need_notify = False
if (original_port['fixed_ips'] != updated_port['fixed_ips'] or
original_port['mac_address'] != updated_port['mac_address'] or
not utils.compare_elements(
original_port.get(ext_sg.SECURITYGROUPS),
updated_port.get(ext_sg.SECURITYGROUPS))):
need_notify = True
    return need_notify
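
The flag returned above only helps if the caller acts on it; something along these lines
would be needed again on the update_port path (sketch only, the notifier attribute and
method wiring are assumed here):

    if self.is_security_group_member_updated(context,
                                             original_port, updated_port):
        # push the member-updated notification so agents refresh their ipsets
        self.notifier.security_groups_member_updated(
            context, updated_port.get(ext_sg.SECURITYGROUPS))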

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448022

Title:
  update port IP, ipset member can't be updated in another host

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  reproduce:
  1. vm A in compute A with ip:192.168.83.2, vm B in compute B with 
ip:192.168.83.3
  2. update vm A port ip to 192.168.83.4
  3. the ipset in compute B can't be updated

  the reason is that this patch: https://review.openstack.org/58415 removed
  the 'notify_security_groups_member_updated' notification from the method
  'is_security_group_member_updated'

  def is_security_group_member_updated(self, context,
   original_port, updated_port):
  """Check security group member updated or not.

  This method returns a flag which indicates request notification
  is required and does not perform notification itself.
  It is because another changes for the port may require notification.
  """
  need_notify = False
  if (original_port['fixed_ips'] != updated_port['fixed_ips'] or
  original_port['mac_address'] != updated_port['mac_address'] or
  not utils.compare_elements(
  original_port.get(ext_sg.SECURITYGROUPS),
  updated_port.get(ext_sg.SECURITYGROUPS))):
  need_notify = True
  return need_notify

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1448022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441419] [NEW] port 'binding:host_id' can't be removed when VM is deleted

2015-04-07 Thread shihanzhang
Public bug reported:

reproduce this problem:
1. create a neutron port
2. use this port to boot a VM
3. delete this VM
4. we can see the port still exists, but its 'binding:host_id' has not been removed

the reason is that in _unbind_ports, when it updates the port, it sets
'port_req_body['port']['binding:host_id'] = None', but when neutron updates
a port, an attribute passed as None is left unchanged

def _unbind_ports(self, context, ports,
  neutron, port_client=None):

port_binding = self._has_port_binding_extension(context,
refresh_cache=True, neutron=neutron)
if port_client is None:
# Requires admin creds to set port bindings
port_client = (neutron if not port_binding else
   get_client(context, admin=True))
for port_id in ports:
port_req_body = {'port': {'device_id': '', 'device_owner': ''}}
if port_binding:
port_req_body['port']['binding:host_id'] = None
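
Given that neutron ignores attributes passed as None (see also neutron bug 1453715), a
sketch of the fix on the nova side is simply to send an empty string:

    if port_binding:
        # '' (not None) is what actually clears the binding on the neutron side
        port_req_body['port']['binding:host_id'] = ''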

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441419

Title:
  port 'binding:host_id' can't be removed when VM is deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  reproduce this problem:
  1. create a neutron port
  2. use this port to boot a VM
  3. delete this VM
  4. we can see the port still exists, but its 'binding:host_id' has not been removed

  the reason is that in _unbind_ports, when it updates the port, it sets
  'port_req_body['port']['binding:host_id'] = None', but when neutron updates
  a port, an attribute passed as None is left unchanged

  def _unbind_ports(self, context, ports,
neutron, port_client=None):

  port_binding = self._has_port_binding_extension(context,
  refresh_cache=True, neutron=neutron)
  if port_client is None:
  # Requires admin creds to set port bindings
  port_client = (neutron if not port_binding else
 get_client(context, admin=True))
  for port_id in ports:
  port_req_body = {'port': {'device_id': '', 'device_owner': ''}}
  if port_binding:
  port_req_body['port']['binding:host_id'] = None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438040] [NEW] fdb entries can't be removed when a VM is migrated

2015-03-30 Thread shihanzhang
Public bug reported:

this problem can be reproduced as below:
1. vm A on computeA, vm B on computeB, l2 pop enabled;
2. vmB continuously pings vmA
3. live migrate vmA to computeB
4. when the live-migration finishes, vmB fails to ping vmA

the reason is below: in the l2pop driver, when vmA migrates to computeB, the port 
status changes from BUILD to ACTIVE,
but the port is added to self.migrated_ports only when its status is ACTIVE, while 
'remove_fdb_entries' is done only in the BUILD branch:
def update_port_postcommit(self, context):
...
...
elif (context.host != context.original_host
and context.status == const.PORT_STATUS_ACTIVE
and not self.migrated_ports.get(orig['id'])):
# The port has been migrated. We have to store the original
# binding to send appropriate fdb once the port will be set
# on the destination host
self.migrated_ports[orig['id']] = (
(orig, context.original_host))
elif context.status != context.original_status:
if context.status == const.PORT_STATUS_ACTIVE:
self._update_port_up(context)
elif context.status == const.PORT_STATUS_DOWN:
fdb_entries = self._update_port_down(
context, port, context.host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)
elif context.status == const.PORT_STATUS_BUILD:
orig = self.migrated_ports.pop(port['id'], None)
if orig:
original_port = orig[0]
original_host = orig[1]
# this port has been migrated: remove its entries from fdb
fdb_entries = self._update_port_down(
context, original_port, original_host)
self.L2populationAgentNotify.remove_fdb_entries(
self.rpc_ctx, fdb_entries)

** Affects: neutron
     Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
     Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438040

Title:
  fdb entries can't be removed when a VM is migrated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  this problem can be reproduced as below:
  1. vm A on computeA, vm B on computeB, l2 pop enabled;
  2. vmB continuously pings vmA
  3. live migrate vmA to computeB
  4. when the live-migration finishes, vmB fails to ping vmA

  the reason is below: in the l2pop driver, when vmA migrates to computeB, the port 
status changes from BUILD to ACTIVE,
  but the port is added to self.migrated_ports only when its status is ACTIVE, while 
'remove_fdb_entries' is done only in the BUILD branch:
  def update_port_postcommit(self, context):
  ...
  ...
  elif (context.host != context.original_host
  and context.status == const.PORT_STATUS_ACTIVE
  and not self.migrated_ports.get(orig['id'])):
  # The port has been migrated. We have to store the original
  # binding to send appropriate fdb once the port will be set
  # on the destination host
  self.migrated_ports[orig['id']] = (
  (orig, context.original_host))
  elif context.status != context.original_status:
  if context.status == const.PORT_STATUS_ACTIVE:
  self._update_port_up(context)
  elif context.status == const.PORT_STATUS_DOWN:
  fdb_entries = self._update_port_down(
  context, port, context.host)
  self.L2populationAgentNotify.remove_fdb_entries(
  self.rpc_ctx, fdb_entries)
  elif context.status == const.PORT_STATUS_BUILD:
  orig = self.migrated_ports.pop(port['id'], None)
  if orig:
  original_port = orig[0]
  original_host = orig[1]
  # this port has been migrated: remove its entries from fdb
  fdb_entries = self._update_port_down(
  context, original_port, original_host)
  self.L2populationAgentNotify.remove_fdb_entries(
  self.rpc_ctx, fdb_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437140] [NEW] dhcp agent should reduce 'reload_allocations' times

2015-03-26 Thread shihanzhang
Public bug reported:

Now when the dhcp agent receives a 'port_update_end', 'port_create_end'
or 'port_delete_end' message, it calls the driver's 'reload_allocations'
method every time. It does not need to 'reload_allocations' every time; for
example, when two ports are bulk-created in one network, a single reload is
enough, as the coalescing sketch below illustrates.
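
A minimal coalescing sketch (illustration only, not the actual dhcp-agent
code; the class name and the reload_allocations(network_id) interface are
assumptions): notifications that arrive for the same network within a short
window are merged into a single reload.

    import threading


    class ReloadCoalescer(object):
        """Merge bursts of per-network port notifications into one
        reload_allocations() call per network."""

        def __init__(self, driver, delay=1.0):
            self.driver = driver    # assumed to expose reload_allocations(network_id)
            self.delay = delay      # coalescing window in seconds
            self._pending = set()   # network ids waiting for a reload
            self._lock = threading.Lock()
            self._timer = None

        def notify(self, network_id):
            """Called for every port_create/update/delete_end message."""
            with self._lock:
                self._pending.add(network_id)
                if self._timer is None:
                    self._timer = threading.Timer(self.delay, self._flush)
                    self._timer.start()

        def _flush(self):
            with self._lock:
                pending, self._pending = self._pending, set()
                self._timer = None
            for network_id in pending:
                # one reload per network, however many ports changed
                self.driver.reload_allocations(network_id)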

** Affects: neutron
     Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437140

Title:
  dhcp agent should reduce 'reload_allocations' times

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Now when the dhcp agent receives a 'port_update_end', 'port_create_end'
  or 'port_delete_end' message, it calls the driver's 'reload_allocations'
  method every time. It does not need to 'reload_allocations' every time;
  for example, when two ports are bulk-created in one network, a single
  reload is enough.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435655] [NEW] Can't manually assign a distributed router to a l3 agent

2015-03-23 Thread shihanzhang
Public bug reported:

Now neutron does not allow manually assigning a distributed router to an l3
agent which is in 'dvr' mode, but in the use cases below this does not work
correctly:
case 1:
(1) there are two compute nodes A and B whose l3 agents are in legacy mode, and
whose l2 agents have 'enable_distributed_routing = False'
(2) create a 'dvr' router, then add subnetA to this 'dvr' router
(3) create VMs on subnetA on computeA or B
(4) on computeA, change to 'agent_mode=dvr' while keeping
'enable_distributed_routing = False'
(5) the VMs on computeA can't communicate with their gateway

case 2:
(1) there is a computeA with 'agent_mode=dvr' and 'enable_distributed_routing = True'
(2) create a 'dvr' router, then add subnetA to this 'dvr' router
(3) create VMs on subnetA on computeA
(4) use 'l3-agent-router-remove' to remove the l3 agent on computeA from the
'dvr' router
(5) the VMs on computeA can't communicate with their gateway, and the l3 agent
can't be manually assigned back to the dvr router

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435655

Title:
  Can't manually assign a distributed router to a l3 agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now neutron does not allow manually assigning a distributed router to an l3
  agent which is in 'dvr' mode, but in the use cases below this does not work
  correctly:
  case 1:
  (1) there are two compute nodes A and B whose l3 agents are in legacy mode,
  and whose l2 agents have 'enable_distributed_routing = False'
  (2) create a 'dvr' router, then add subnetA to this 'dvr' router
  (3) create VMs on subnetA on computeA or B
  (4) on computeA, change to 'agent_mode=dvr' while keeping
  'enable_distributed_routing = False'
  (5) the VMs on computeA can't communicate with their gateway

  case 2:
  (1) there is a computeA with 'agent_mode=dvr' and
  'enable_distributed_routing = True'
  (2) create a 'dvr' router, then add subnetA to this 'dvr' router
  (3) create VMs on subnetA on computeA
  (4) use 'l3-agent-router-remove' to remove the l3 agent on computeA from the
  'dvr' router
  (5) the VMs on computeA can't communicate with their gateway, and the l3
  agent can't be manually assigned back to the dvr router

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431746] [NEW] AggregateCoreFilter return incorrect value

2015-03-13 Thread shihanzhang
Public bug reported:

I find that AggregateCoreFilter can return an incorrect value; the analysis is
below:

class AggregateCoreFilter(BaseCoreFilter):
    def _get_cpu_allocation_ratio(self, host_state, filter_properties):
        # TODO(uni): DB query in filter is a performance hit, especially for
        # system with lots of hosts. Will need a general solution here to fix
        # all filters with aggregate DB call things.
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'cpu_allocation_ratio')
        try:
            ratio = utils.validate_num_values(
                aggregate_vals, CONF.cpu_allocation_ratio, cast_to=float)
        except ValueError as e:
            LOG.warning(_LW("Could not decode cpu_allocation_ratio: '%s'"), e)
            ratio = CONF.cpu_allocation_ratio

in the function validate_num_values, min() is used to get the minimum ratio,
but for an aggregate the 'cpu_allocation_ratio' metadata values are strings,
so the comparison is lexicographic. For example, with vals={'10', '9'},
'validate_num_values' returns 10, but the correct answer is 9.

def validate_num_values(vals, default=None, cast_to=int, based_on=min):
    num_values = len(vals)
    if num_values == 0:
        return default

    if num_values > 1:
        LOG.info(_LI("%(num_values)d values found, "
                     "of which the minimum value will be used."),
                 {'num_values': num_values})

    return cast_to(based_on(vals))
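
A short, self-contained illustration (plain Python, not the nova code) of why
min() over the string set picks the wrong value: the comparison is
lexicographic, so the cast to float has to happen before the minimum is taken.

    vals = {'10', '9'}

    # Lexicographic comparison on strings: '1' < '9', so min() picks '10'.
    print(min(vals))                     # -> '10'
    print(float(min(vals)))              # -> 10.0 (what the filter ends up using)

    # Comparing numerically gives the intended answer.
    print(min(float(v) for v in vals))   # -> 9.0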

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431746

Title:
  AggregateCoreFilter return incorrect value

Status in OpenStack Compute (Nova):
  New

Bug description:
  I find that AggregateCoreFilter can return an incorrect value; the analysis
  is below:

  class AggregateCoreFilter(BaseCoreFilter):
  def _get_cpu_allocation_ratio(self, host_state, filter_properties):
  # TODO(uni): DB query in filter is a performance hit, especially for
  # system with lots of hosts. Will need a general solution here to fix
  # all filters with aggregate DB call things.
  aggregate_vals = utils.aggregate_values_from_key(
  host_state,
  'cpu_allocation_ratio')
  try:
  ratio = utils.validate_num_values(
  aggregate_vals, CONF.cpu_allocation_ratio, cast_to=float)
  except ValueError as e:
  LOG.warning(_LW("Could not decode cpu_allocation_ratio: '%s'"), e)
  ratio = CONF.cpu_allocation_ratio

  in the function validate_num_values, min() is used to get the minimum ratio,
  but for an aggregate the 'cpu_allocation_ratio' metadata values are strings,
  so the comparison is lexicographic. For example, with vals={'10', '9'},
  'validate_num_values' returns 10, but the correct answer is 9.

  def validate_num_values(vals, default=None, cast_to=int, based_on=min):
  num_values = len(vals)
  if num_values == 0:
  return default

  if num_values > 1:
  LOG.info(_LI("%(num_values)d values found, "
   "of which the minimum value will be used."),
   {'num_values': num_values})

  return cast_to(based_on(vals))

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428429] [NEW] 'nova show vm-id' return wrong mac

2015-03-04 Thread shihanzhang
Public bug reported:

when a VM has two interfaces, the output of 'nova show' includes the wrong mac address
 
root@devstack:~# nova list
+--------------------------------------+-------+--------+------------+-------------+----------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                   |
+--------------------------------------+-------+--------+------------+-------------+----------------------------+
| cf054891-b487-4721-bfb3-43b0274852b5 | ls_vm | ACTIVE | -          | Running     | private=10.0.0.4, 10.0.0.5 |
+--------------------------------------+-------+--------+------------+-------------+----------------------------+

nova --debug show ls_vm, the returned message is below; we can see the two
nics with the same mac address
DEBUG (session:223) RESP: [200] date: Sun, 01 Mar 2015 00:57:25 GMT 
content-length: 1811 content-type: application/json x-compute-request-id: 
req-b68c0e86-ba27-440c-9db9-a95eba3d1441 
RESP BODY: {"server": {"status": "ACTIVE", "updated": "2015-03-01T00:49:59Z", 
"hostId": "dfd39f6797dde4a9ae5da5e63375c0fe8e2779174fec58f651c8e6d2", 
"OS-EXT-SRV-ATTR:host": "devstack", "addresses": {"private": 
[{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:78:ba:94", "version": 4, "addr": 
"10.0.0.4", "OS-EXT-IPS:type": "fixed"}, {"OS-EXT-IPS-MAC:mac_addr": 
"fa:16:3e:78:ba:94", "version": 4, "addr": "10.0.0.5", "OS-EXT-IPS:type": 
"fixed"}]}, "links": [{"href": 
"http://10.250.10.246:8774/v2/e188670ecafd46a18fff18c388a03417/servers/cf054891-b487-4721-bfb3-43b0274852b5";,
 "rel": "self"}, {"href": 
"http://10.250.10.246:8774/e188670ecafd46a18fff18c388a03417/servers/cf054891-b487-4721-bfb3-43b0274852b5";,
 "rel": "bookmark"}], "key_name": null, "image": {"id": 
"ae73b834-6958-4b45-a5e6-f992e823bda9", "links": [{"href": 
"http://10.250.10.246:8774/e188670ecafd46a18fff18c388a03417/images/ae73b834-6958-4b45-a5e6-f992e823bda9";,
 "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": 
"acti
 ve", "OS-EXT-SRV-ATTR:instance_name": "instance-0004", 
"OS-SRV-USG:launched_at": "2015-03-01T00:49:59.00", 
"OS-EXT-SRV-ATTR:hypervisor_hostname": "devstack", "flavor": {"id": "11", 
"links": [{"href": 
"http://10.250.10.246:8774/e188670ecafd46a18fff18c388a03417/flavors/11";, "rel": 
"bookmark"}]}, "id": "cf054891-b487-4721-bfb3-43b0274852b5", "security_groups": 
[{"name": "default"}, {"name": "default"}], "OS-SRV-USG:terminated_at": null, 
"OS-EXT-AZ:availability_zone": "nova", "user_id": 
"4a02104e5fbf4c7281569381835e8d4a", "name": "ls_vm", "created": 
"2015-03-01T00:49:51Z", "tenant_id": "e188670ecafd46a18fff18c388a03417", 
"OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [], 
"accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, 
"config_drive": "", "metadata": {}}}

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428429

Title:
  'nova show vm-id' return wrong mac

Status in OpenStack Compute (Nova):
  New

Bug description:
  when a VM has two interfaces, the output of 'nova show' includes the wrong
  mac address
   
  root@devstack:~# nova list
  
+--+---+++-++
  | ID   | Name  | Status | Task State | Power 
State | Networks   |
  
+--+---+++-++
  | cf054891-b487-4721-bfb3-43b0274852b5 | ls_vm | ACTIVE | -  | 
Running | private=10.0.0.4, 10.0.0.5 |
  
+--+---+++-++

  nova --debug show ls_vm, the return mes

[Yahoo-eng-team] [Bug 1425887] [NEW] Setting 'enable_snat' be false does not work in DVR

2015-02-26 Thread shihanzhang
Public bug reported:

I create a DVR router with 'enable_snat' set to false, but the snat namespace
is still created on the 'dvr_snat' node:

root@shz-vpn02:/var/log/neutron# neutron router-list
+--+--+--+-+---+
| id   | name | external_gateway_info   

 | 
distributed | ha|
+--+--+--+-+---+
| 2a3b6825-0bff-46d9-aea9-535176e78387 | dvr  | {"network_id": 
"dbed9af5-528b-4aec-b22f-d0ad8c346e02", "enable_snat": false, 
"external_fixed_ips": [{"subnet_id": "63705be9-d3db-4159-9e49-fd7e35b9c893", 
"ip_address": "172.24.4.99"}]} | True| False |

on the 'dvr_snat' node, the snat-xxx namespace is created, but no snat rules
are added to it, so I think the snat namespace should not be created at all:

root@shz-vpn01:/var/log/neutron# ip netns list
snat-2a3b6825-0bff-46d9-aea9-535176e78387
qrouter-2a3b6825-0bff-46d9-aea9-535176e78387

root@shz-vpn01:/var/log/neutron# ip netns exec 
qrouter-2a3b6825-0bff-46d9-aea9-535176e78387 iptables-save -t nat
# Generated by iptables-save v1.4.21 on Thu Feb 26 10:30:32 2015
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-POSTROUTING - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
:neutron-l3-agent-float-snat - [0:0]
:neutron-l3-agent-snat - [0:0]
:neutron-postrouting-bottom - [0:0]
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 
-j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on 
outgoing traffic." -j neutron-l3-agent-snat
COMMIT
# Completed on Thu Feb 26 10:30:32 2015

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1425887

Title:
  Setting 'enable_snat' be false does not work in DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I create a DVR router with 'enable_snat' set to false, but the snat
  namespace is still created on the 'dvr_snat' node:

  root@shz-vpn02:/var/log/neutron# neutron router-list
  
+--+--+--+-+---+
  | id   | name | external_gateway_info 

   | 
distributed | ha|
  
+--+--+--+-+---+
  | 2a3b6825-0bff-46d9-aea9-535176e78387 | dvr  | {"network_id": 
"dbed9af5-528b-4aec-b22f-d0ad8c346e02", "enable_snat": false, 
"external_fixed_ips": [{"subnet_id": "63705be9-d3db-4159-9e49-fd7e35b9c893", 
"ip_address": "172.24.4.99"}]} | True| False |

  on the 'dvr_snat' node, the snat-xxx namespace is created, but no snat
  rules are added to it, so I think the snat namespace should not be created
  at all:

  root@shz-vpn01:/var/log/neutron# ip netns list
  snat-2a3b6825-0bff-46d9-aea9-535176e78387
  qrouter-2a3b6825-0bff-46d9-aea9-535176e78387

  root@shz-vpn01:/var/log/neutron# ip netns exec 
qrouter-2a3b6825-0bff-46d9-aea9-535176e78387 iptables-save -t nat
  # Generated by iptables-save v1.4.21 on Thu Feb 26 10:30:32 2015
  *nat
  :PREROUTING ACCEPT [0:0]
  :INPUT ACCEPT [0:0]
  :OUTPUT ACCEPT [0:0]
  :POSTROUTING ACCEPT [0:0]
  :neutron-l3-ag

[Yahoo-eng-team] [Bug 1384379] Re: versions resource uses host_url which may be incorrect

2015-02-25 Thread shihanzhang
** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384379

Title:
  versions resource uses host_url which may be incorrect

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  New
Status in OpenStack Compute (Nova):
  New
Status in Openstack Database (Trove):
  In Progress

Bug description:
  The versions resource constructs the links by using host_url, but the
  glance api endpoint may be behind a proxy or ssl terminator. This
  means that host_url may be incorrect. It should have a config option
  to override host_url like the other services do when constructing
  versions links.
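
  As an illustrative sketch only (the option name and helper are assumptions,
  not the actual patch), an oslo.config override could be consulted before
  falling back to the request's host_url when building version links:

      from oslo_config import cfg

      # Hypothetical option name; each service chooses its own.
      opts = [
          cfg.StrOpt('public_endpoint',
                     help='Public URL to use in version links instead of the '
                          'host_url derived from the incoming request.'),
      ]
      CONF = cfg.CONF
      CONF.register_opts(opts)


      def version_link_base(req):
          """Prefer the configured public endpoint over req.host_url."""
          return CONF.public_endpoint or req.host_url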

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1384379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384379] Re: versions resource uses host_url which may be incorrect

2015-02-25 Thread shihanzhang
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384379

Title:
  versions resource uses host_url which may be incorrect

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  New
Status in OpenStack Compute (Nova):
  New
Status in Openstack Database (Trove):
  In Progress

Bug description:
  The versions resource constructs the links by using host_url, but the
  glance api endpoint may be behind a proxy or ssl terminator. This
  means that host_url may be incorrect. It should have a config option
  to override host_url like the other services do when constructing
  versions links.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1384379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421055] Re: bulk create gre/vxlan network failed

2015-02-13 Thread shihanzhang
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421055

Title:
  bulk create gre/vxlan network failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I bulk create 1000 vxlan networks and it always fails. My analysis of the
  reason is below: there is no locking when the segment id is allocated.

  def allocate_fully_specified_segment(self, session, **raw_segment):
  network_type = self.get_type()
  try:
  with session.begin(subtransactions=True):
  alloc = (session.query(self.model).filter_by(**raw_segment).
   first())

  def allocate_partially_specified_segment(self, session,
  **filters):

  network_type = self.get_type()
  with session.begin(subtransactions=True):
  select = (session.query(self.model).
filter_by(allocated=False, **filters))

  I think a lock should be taken when the segment id is allocated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421049] Re: Remove dvr router interface consume much time

2015-02-12 Thread shihanzhang
hi Ed Bak, thanks for the reminder!

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421049

Title:
  Remove dvr router interface consume much time

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  In my environment, I create a DVR router with only one subnet (cidr
  10.0.0.0/8) attached to it, then I create a large number of ports in this
  subnet. When I use 'router-interface-delete' to remove this subnet from the
  router, it takes a long time to return. My analysis of the reason is below:
  1. on 'remove_router_interface', the l3 agents are notified via
  'routers_updated',
  2. in this _notification, the router is rescheduled via schedule_routers
  def _notification(self, context, method, router_ids, operation,
shuffle_agents):
  """Notify all the agents that are hosting the routers."""
  plugin = manager.NeutronManager.get_service_plugins().get(
  service_constants.L3_ROUTER_NAT)
  if not plugin:
  LOG.error(_LE('No plugin for L3 routing registered. Cannot notify 
'
'agents with the message %s'), method)
  return
  if utils.is_extension_supported(
  plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
  adminContext = (context.is_admin and
  context or context.elevated())
  plugin.schedule_routers(adminContext, router_ids)
  self._agent_notification(
  context, method, router_ids, operation, shuffle_agents)
  3. in _schedule_router the candidate l3 agents are selected, and
  'get_l3_agent_candidates' calls 'check_ports_exist_on_l3agent'

  if agent_mode in ('legacy', 'dvr_snat') and (
  not is_router_distributed):
  candidates.append(l3_agent)
  elif is_router_distributed and agent_mode.startswith('dvr') and (
  self.check_ports_exist_on_l3agent(
  context, l3_agent, sync_router['id'])):
  candidates.append(l3_agent)

  4. but for 'remove_router_interface' the router interface has already been
  deleted before the scheduling runs, so 'get_subnet_ids_on_router' returns an
  empty list; this list is then used as the filter to get ports, and if the
  number of ports is very large this consumes much time

  def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
  """
  This function checks for existence of dvr serviceable
  ports on the host, running the input l3agent.
  """
  subnet_ids = self.get_subnet_ids_on_router(context, router_id)

  core_plugin = manager.NeutronManager.get_plugin()
  filter = {'fixed_ips': {'subnet_id': subnet_ids}}
  ports = core_plugin.get_ports(context, filters=filter)

  so I think 'remove_router_interface' should not reschedule the router

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421105] [NEW] L2 population sometimes failed with multiple neutron-server

2015-02-12 Thread shihanzhang
Public bug reported:

In my environment with two neutron-servers, 'mechanism_drivers' is openvswitch
and l2 population is enabled.
When I delete a VM which is the last VM of network-A on compute node-A, I find
a KeyError in the compute node-B openvswitch-agent log; it is thrown by
'del_fdb_flow':

def del_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
    if port_info == q_const.FLOODING_ENTRY:
        lvm.tun_ofports.remove(ofport)
        if len(lvm.tun_ofports) > 0:
            ofports = _ofport_set_to_str(lvm.tun_ofports)
            br.mod_flow(table=constants.FLOOD_TO_TUN,
                        dl_vlan=lvm.vlan,
                        actions="strip_vlan,set_tunnel:%s,output:%s" %
                        (lvm.segmentation_id, ofports))

the reason is that the openvswitch-agent receives the 'fdb_remove' RPC request
twice. Why does it receive it twice? I think the reason is:

there are two neutron-servers (neutron-serverA and neutron-serverB) and one
compute node-A
1. nova deletes a VM on compute node-A; it first deletes the TAP device, then
the ovs agent notices that the port is gone and sends an 'update_device_down'
RPC request to neutron-serverA. When neutron-serverA handles this request, l2
population sends the first 'fdb_remove'
2. after nova has deleted the TAP device, it sends a 'delete_port' REST API
request to neutron-serverB, and l2 population sends a second 'fdb_remove' RPC
request
when the ovs agent receives the second 'fdb_remove' and runs del_fdb_flow, the
'lvm.tun_ofports.remove(ofport)' call throws KeyError because the ofport was
already removed by the first request
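
A minimal, runnable illustration of how the agent-side removal could be made
idempotent (this is only a sketch, not necessarily the upstream fix):
set.discard() tolerates a duplicate removal of the same ofport, while
set.remove() raises the KeyError seen in the log.

    # A plain set standing in for lvm.tun_ofports.
    tun_ofports = {'1', '2'}

    ofport = '2'
    tun_ofports.discard(ofport)   # first fdb_remove: ofport is dropped
    tun_ofports.discard(ofport)   # duplicate fdb_remove: no error, set unchanged

    try:
        tun_ofports.remove(ofport)
    except KeyError:
        print("remove() raises on a duplicate delete; discard() does not")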

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421105

Title:
  L2 population sometimes failed with multiple neutron-server

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In my environment with two neutron-servers, 'mechanism_drivers' is
  openvswitch and l2 population is enabled.
  When I delete a VM which is the last VM of network-A on compute node-A, I
  find a KeyError in the compute node-B openvswitch-agent log; it is thrown by
  'del_fdb_flow':

  def del_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
  if port_info == q_const.FLOODING_ENTRY:
  lvm.tun_ofports.remove(ofport)
  if len(lvm.tun_ofports) > 0:
  ofports = _ofport_set_to_str(lvm.tun_ofports)
  br.mod_flow(table=constants.FLOOD_TO_TUN,
  dl_vlan=lvm.vlan,
  actions="strip_vlan,set_tunnel:%s,output:%s" %
  (lvm.segmentation_id, ofports))

  the reason is that openvswitch-agent  receives two RPC request
  'fdb_remove', why it receives twice, I think the reason is that:

  there are two neutron-server: neutron-serverA, neutron-serverB, one compute 
node-A
  1. nova delete VM which is in compute node-A, it will firstly delete the TAP 
device, then the ovs scans the port is deleted, it send RPC request 
'update_device_down' to  neutron-serverA, when neutron-serverA receive this 
request, l2 population will firstly send 'fdb_remove'
  2. after nova delete the TAP device, it send REST API request 'delete_port' 
to neutron-serveB, the l2 population send second 'fdb_remove' RPC request
  when ovs agent receive the second  'fdb_remove', it del_fdb_flow, the 
'lvm.tun_ofports.remove(ofport)' throw KeyError, because 
  the ofport is deleted in first request

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421089] [NEW] no any index for port DB sqlalchemy

2015-02-11 Thread shihanzhang
Public bug reported:

Now there is no index on the port DB sqlalchemy model, but when nova
bulk-creates VMs it frequently selects ports by 'tenant_id', 'network_id' and
'device_id'. Without an index this consumes much time in the db operation, so
I think it is better to add indexes on the port 'tenant_id', 'network_id' and
'device_id' columns.
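
A hedged sketch of what such indexes could look like as an alembic migration
(the table and column names follow the neutron ports table; the revision
identifiers and index names are placeholders):

    """Illustrative alembic migration adding indexes to the ports table."""

    from alembic import op

    # Placeholder revision identifiers.
    revision = 'add_ports_indexes'
    down_revision = None


    def upgrade():
        op.create_index('ix_ports_tenant_id', 'ports', ['tenant_id'])
        op.create_index('ix_ports_network_id', 'ports', ['network_id'])
        op.create_index('ix_ports_device_id', 'ports', ['device_id'])


    def downgrade():
        op.drop_index('ix_ports_device_id', 'ports')
        op.drop_index('ix_ports_network_id', 'ports')
        op.drop_index('ix_ports_tenant_id', 'ports')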

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421089

Title:
  no any index for port DB sqlalchemy

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now there is no index on the port DB sqlalchemy model, but when nova
  bulk-creates VMs it frequently selects ports by 'tenant_id', 'network_id'
  and 'device_id'. Without an index this consumes much time in the db
  operation, so I think it is better to add indexes on the port 'tenant_id',
  'network_id' and 'device_id' columns.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421055] [NEW] bulk create gre/vxlan network failed

2015-02-11 Thread shihanzhang
Public bug reported:

I bulk create 1000 vxlan networks and it always fails. My analysis of the
reason is below: there is no locking when the segment id is allocated.

def allocate_fully_specified_segment(self, session, **raw_segment):
    network_type = self.get_type()
    try:
        with session.begin(subtransactions=True):
            alloc = (session.query(self.model).filter_by(**raw_segment).
                     first())

def allocate_partially_specified_segment(self, session, **filters):
    network_type = self.get_type()
    with session.begin(subtransactions=True):
        select = (session.query(self.model).
                  filter_by(allocated=False, **filters))

I think a lock should be taken when the segment id is allocated.
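
A hedged sketch of one way to serialize the allocation (illustration only,
mirroring the allocate_partially_specified_segment snippet above, not the
actual fix that landed upstream): take a row lock with SQLAlchemy's
with_for_update() so two servers cannot both pick the same free segment.

    def allocate_partially_specified_segment_locked(self, session, **filters):
        with session.begin(subtransactions=True):
            # Lock candidate rows so concurrent requests do not both see the
            # same segment as unallocated.
            select = (session.query(self.model).
                      filter_by(allocated=False, **filters).
                      with_for_update())
            alloc = select.first()
            if alloc:
                alloc.allocated = True
            return alloc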

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421055

Title:
  bulk create gre/vxlan network failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I bulk create 1000 vxlan networks and it always fails. My analysis of the
  reason is below: there is no locking when the segment id is allocated.

  def allocate_fully_specified_segment(self, session, **raw_segment):
  network_type = self.get_type()
  try:
  with session.begin(subtransactions=True):
  alloc = (session.query(self.model).filter_by(**raw_segment).
   first())

  def allocate_partially_specified_segment(self, session,
  **filters):

  network_type = self.get_type()
  with session.begin(subtransactions=True):
  select = (session.query(self.model).
filter_by(allocated=False, **filters))

  I think a lock should be taken when the segment id is allocated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421049] [NEW] Remove dvr router interface consume much time

2015-02-11 Thread shihanzhang
Public bug reported:

In my environment, I create a DVR router with only one subnet (cidr
10.0.0.0/8) attached to it, then I create a large number of ports in this
subnet. When I use 'router-interface-delete' to remove this subnet from the
router, it takes a long time to return. My analysis of the reason is below:
1. on 'remove_router_interface', the l3 agents are notified via 'routers_updated',
2. in this _notification, the router is rescheduled via schedule_routers
def _notification(self, context, method, router_ids, operation,
                  shuffle_agents):
    """Notify all the agents that are hosting the routers."""
    plugin = manager.NeutronManager.get_service_plugins().get(
        service_constants.L3_ROUTER_NAT)
    if not plugin:
        LOG.error(_LE('No plugin for L3 routing registered. Cannot notify '
                      'agents with the message %s'), method)
        return
    if utils.is_extension_supported(
            plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
        adminContext = (context.is_admin and
                        context or context.elevated())
        plugin.schedule_routers(adminContext, router_ids)
    self._agent_notification(
        context, method, router_ids, operation, shuffle_agents)
3. in _schedule_router the candidate l3 agents are selected, and
'get_l3_agent_candidates' calls 'check_ports_exist_on_l3agent'

if agent_mode in ('legacy', 'dvr_snat') and (
        not is_router_distributed):
    candidates.append(l3_agent)
elif is_router_distributed and agent_mode.startswith('dvr') and (
        self.check_ports_exist_on_l3agent(
            context, l3_agent, sync_router['id'])):
    candidates.append(l3_agent)

4. but for 'remove_router_interface' the router interface has already been
deleted before the scheduling runs, so 'get_subnet_ids_on_router' returns an
empty list; this list is then used as the filter to get ports, and if the
number of ports is very large this consumes much time

def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
    """
    This function checks for existence of dvr serviceable
    ports on the host, running the input l3agent.
    """
    subnet_ids = self.get_subnet_ids_on_router(context, router_id)

    core_plugin = manager.NeutronManager.get_plugin()
    filter = {'fixed_ips': {'subnet_id': subnet_ids}}
    ports = core_plugin.get_ports(context, filters=filter)

so I think 'remove_router_interface' should not reschedule the router
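
A hedged sketch of one possible mitigation (an illustration reusing the
helpers from the snippet above, not necessarily the fix that was merged
upstream): short-circuit the port query when the router no longer has any
subnets, so the empty subnet filter never reaches get_ports.

    def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
        subnet_ids = self.get_subnet_ids_on_router(context, router_id)
        if not subnet_ids:
            # The interface was already removed; querying ports with an
            # empty subnet filter is what makes the call so expensive.
            return False

        core_plugin = manager.NeutronManager.get_plugin()
        filters = {'fixed_ips': {'subnet_id': subnet_ids}}
        ports = core_plugin.get_ports(context, filters=filters)
        return any(
            n_utils.is_dvr_serviced(port['device_owner']) and
            l3_agent['host'] == port['binding:host_id']
            for port in ports)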

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421049

Title:
  Remove dvr router interface consume much time

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In my environment, I create a DVR router with only one subnet(cidr is 
10.0.0.0/8) attached to this router, then I create 1 ports in this subnet, 
when I use 'router-interface-delete' to remove this subnet from router, it 
consume much time to return, I analyse the reason is bellow:
  1. when 'remove_router_interface', it will notify l3 agent  
'routers_updated', 
  2. in this _notification, it will schedule_routers this router
  def _notification(self, context, method, router_ids, operation,
shuffle_agents):
  """Notify all the agents that are hosting the routers."""
  plugin = manager.NeutronManager.get_service_plugins().get(
  service_constants.L3_ROUTER_NAT)
  if not plugin:
  LOG.error(_LE('No plugin for L3 routing registered. Cannot notify 
'
'agents with the message %s'), method)
  return
  if utils.is_extension_supported(
  plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
  adminContext = (context.is_admin and
  context or context.elevated())
  plugin.schedule_routers(adminContext, router_ids)
  self._agent_notification(
  context, method, router_ids, operation, shuffle_agents)
  3. in _schedule_router it will get the candidates l3 agent, but in 
'get_l3_agent_candidates' it will check 'check_ports_exist_on_l3agent'

  if agent_mode in ('legacy', 'dvr_snat') and (

[Yahoo-eng-team] [Bug 1417409] Re: Add dvr router to dvr_snat agent failed

2015-02-03 Thread shihanzhang
hi Itzik Brown, thanks for the reminder, it is the same as bug 1369721, I will
mark this invalid!

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417409

Title:
  Add dvr router to dvr_snat agent failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  reproduction steps:
  1. create a dvr router
  2. 'router-interface-add' to add a subnet to this dvr router
  3. 'router-gateway-set' to set the router gateway
  4. 'l3-agent-router-remove' to remove the dvr_snat agent from the router
  5. 'l3-agent-router-add' to add the dvr_snat agent back to the router

  the error log in neutron-server is below:
  create failed (client error): Agent 3b61ea90-8373-4609-adda-c10118401f4a is n
  ot a L3 Agent or has been disabled

  the reason is that when the ports on the l3 agent are checked, an agent is
  skipped if l3_agent['host'] != port['binding:host_id'], but for a dvr_snat
  agent there will be no port bound to its host
  def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
  """
  This function checks for existence of dvr serviceable
  ports on the host, running the input l3agent.
  """
  subnet_ids = self.get_subnet_ids_on_router(context, router_id)

  core_plugin = manager.NeutronManager.get_plugin()
  filter = {'fixed_ips': {'subnet_id': subnet_ids}}
  ports = core_plugin.get_ports(context, filters=filter)
  for port in ports:
  if (n_utils.is_dvr_serviced(port['device_owner']) and
  l3_agent['host'] == port['binding:host_id']):
  return True

  return False

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417409] [NEW] Add dvr router to dvr_snat agent failed

2015-02-02 Thread shihanzhang
Public bug reported:

reproduction steps:
1. create a dvr router
2. 'router-interface-add' to add a subnet to this dvr router
3. 'router-gateway-set' to set the router gateway
4. 'l3-agent-router-remove' to remove the dvr_snat agent from the router
5. 'l3-agent-router-add' to add the dvr_snat agent back to the router

the error log in neutron-server is below:
create failed (client error): Agent 3b61ea90-8373-4609-adda-c10118401f4a is n
ot a L3 Agent or has been disabled

the reason is that when the ports on the l3 agent are checked, an agent is
skipped if l3_agent['host'] != port['binding:host_id'], but for a dvr_snat
agent there will be no port bound to its host
def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
    """
    This function checks for existence of dvr serviceable
    ports on the host, running the input l3agent.
    """
    subnet_ids = self.get_subnet_ids_on_router(context, router_id)

    core_plugin = manager.NeutronManager.get_plugin()
    filter = {'fixed_ips': {'subnet_id': subnet_ids}}
    ports = core_plugin.get_ports(context, filters=filter)
    for port in ports:
        if (n_utils.is_dvr_serviced(port['device_owner']) and
                l3_agent['host'] == port['binding:host_id']):
            return True

    return False
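
As a hedged sketch only (the report above was marked a duplicate of bug
1369721, so this is not the merged fix; the wrapper name is hypothetical), the
candidate check could treat 'dvr_snat' agents as valid even when no
dvr-serviced port is bound to their host:

    def check_l3_agent_candidate(self, context, l3_agent, router_id):
        # A dvr_snat agent hosts the centralized SNAT namespace, so it does
        # not need a dvr-serviced port bound to its host to be a candidate.
        agent_conf = self.get_configuration_dict(l3_agent)
        if agent_conf.get('agent_mode') == 'dvr_snat':
            return True
        return self.check_ports_exist_on_l3agent(context, l3_agent, router_id)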

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417409

Title:
  Add dvr router to dvr_snat agent failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  reproduce progress:
  1. create a dvr router
  2. 'router-interface-add' add a subnet to this dvr
  3. 'router-gateway-set'
  4. 'l3-agent-router-remove' remove the dvr_snat agent from the router
  5. l3-agent-router-add this dvr_snat agent to the router

  the error log in neutron-server is bellow:
  create failed (client error): Agent 3b61ea90-8373-4609-adda-c10118401f4a is n
  ot a L3 Agent or has been disabled

  the reason is that when it check the ports on l3 agent, if l3_agent['host'] 
!= port['binding:host_id']), this agent will be pass, but for
  a dvr_snat  agent, there will no port on it
  def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
  """
  This function checks for existence of dvr serviceable
  ports on the host, running the input l3agent.
  """
  subnet_ids = self.get_subnet_ids_on_router(context, router_id)

  core_plugin = manager.NeutronManager.get_plugin()
  filter = {'fixed_ips': {'subnet_id': subnet_ids}}
  ports = core_plugin.get_ports(context, filters=filter)
  for port in ports:
  if (n_utils.is_dvr_serviced(port['device_owner']) and
  l3_agent['host'] == port['binding:host_id']):
  return True

  return False

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416933] [NEW] OVS: Race condition in Ha router updating port status

2015-02-01 Thread shihanzhang
Public bug reported:

When the L2 agent calls 'get_devices_details_list', the ports on this l2 agent
are first updated to BUILD, and then 'update_device_up' updates them to
ACTIVE; but for an HA router which has two l3 agents there is a race
condition.
reproduction steps (it does not always happen, but it happens often):
1. 'router-interface-add' to add a subnet to the HA router
2. 'router-gateway-set' to set the router gateway
the gateway port status sometimes stays BUILD forever

in 'get_device_details' the port status is updated, but I think that if a
port's status is ACTIVE and port['admin_state_up'] is True, this port should
not be updated:

def get_device_details(self, rpc_context, **kwargs):
    ..
    ..
    new_status = (q_const.PORT_STATUS_BUILD if port['admin_state_up']
                  else q_const.PORT_STATUS_DOWN)
    if port['status'] != new_status:
        plugin.update_port_status(rpc_context,
                                  port_id,
                                  new_status,
                                  host)
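
A hedged sketch of the guard this report suggests (illustration only; the
names follow the quoted snippet): skip the downgrade to BUILD when the port is
already ACTIVE and admin_state_up is set.

    if port['status'] == q_const.PORT_STATUS_ACTIVE and port['admin_state_up']:
        # Port already active, e.g. via the other HA agent; leave it alone.
        new_status = port['status']
    else:
        new_status = (q_const.PORT_STATUS_BUILD if port['admin_state_up']
                      else q_const.PORT_STATUS_DOWN)
    if port['status'] != new_status:
        plugin.update_port_status(rpc_context, port_id, new_status, host)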

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
     Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416933

Title:
  OVS: Race condition in Ha router updating port status

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the L2 agent calls 'get_devices_details_list', the ports on this l2
  agent are first updated to BUILD, and then 'update_device_up' updates them
  to ACTIVE; but for an HA router which has two l3 agents there is a race
  condition.
  reproduction steps (it does not always happen, but it happens often):
  1. 'router-interface-add' to add a subnet to the HA router
  2. 'router-gateway-set' to set the router gateway
  the gateway port status sometimes stays BUILD forever

  in 'get_device_details' the port status is updated, but I think that if a
  port's status is ACTIVE and port['admin_state_up'] is True, this port
  should not be updated:

  def get_device_details(self, rpc_context, **kwargs):
  ..
  ..
  new_status = (q_const.PORT_STATUS_BUILD if port['admin_state_up']
else q_const.PORT_STATUS_DOWN)
  if port['status'] != new_status:
  plugin.update_port_status(rpc_context,
port_id,
new_status,
host)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416326] Re: Add l3 agent to Ha router failed

2015-01-31 Thread shihanzhang
ok, thanks for the reminder!

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416326

Title:
  Add l3 agent to Ha router failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  
  reproduction steps:
  1. create an HA router (min_l3_agents_per_router=2)
  2. use 'l3-agent-router-remove' to remove an agent from the router
  3. use 'l3-agent-router-add' to add the removed agent back to the router

  you will find the error in neutron-server:
  2015-01-30 09:38:47.126 26402 INFO neutron.api.v2.resource 
[req-23ade291-165e-4c24-899f-40062005b216 None] create failed (client error): 
The router a081ae1d-ad5b-41b5-a60a-6c129ba3efa
  b has been already hosted by the L3 Agent 
3b61ea90-8373-4609-adda-c10118401f4a.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416326] [NEW] Add l3 agent to Ha router failed

2015-01-30 Thread shihanzhang
Public bug reported:


reproduction steps:
1. create an HA router (min_l3_agents_per_router=2)
2. use 'l3-agent-router-remove' to remove an agent from the router
3. use 'l3-agent-router-add' to add the removed agent back to the router

you will find the error in neutron-server:
2015-01-30 09:38:47.126 26402 INFO neutron.api.v2.resource 
[req-23ade291-165e-4c24-899f-40062005b216 None] create failed (client error): 
The router a081ae1d-ad5b-41b5-a60a-6c129ba3efa
b has been already hosted by the L3 Agent 3b61ea90-8373-4609-adda-c10118401f4a.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416326

Title:
  Add l3 agent to Ha router failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  reproduction steps:
  1. create an HA router (min_l3_agents_per_router=2)
  2. use 'l3-agent-router-remove' to remove an agent from the router
  3. use 'l3-agent-router-add' to add the removed agent back to the router

  you will find the error in neutron-server:
  2015-01-30 09:38:47.126 26402 INFO neutron.api.v2.resource 
[req-23ade291-165e-4c24-899f-40062005b216 None] create failed (client error): 
The router a081ae1d-ad5b-41b5-a60a-6c129ba3efa
  b has been already hosted by the L3 Agent 
3b61ea90-8373-4609-adda-c10118401f4a.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416315] [NEW] delete ip rule in dvr agent failed

2015-01-30 Thread shihanzhang
Public bug reported:


when I create a dvr router, I find the following error in the l3 agent log:

2015-01-30 09:04:41.979 26426 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qr
outer-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda', 'ip', 'rule', 'del', 'priority', 
'None'] create_process /opt/stack/neutron/neutron/agent/linux/utils.py:46
2015-01-30 09:04:42.238 26426 ERROR neutron.agent.linux.utils [-]
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda', 'ip', 'rule', 'del', 'priorit
y', 'None']
Exit code: 255
Stdout: ''
Stderr: 'Error: argument "None" is wrong: preference value is invalid\n\n'
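
A hedged, runnable sketch of the kind of guard that would avoid building an
'ip rule del ... priority None' command (illustration only; the helper name is
an assumption, not the actual neutron ip_lib API):

    def build_ip_rule_delete_args(table, priority=None):
        # Only pass 'priority' to 'ip rule del' when there actually is one.
        args = ['ip', 'rule', 'del', 'table', str(table)]
        if priority is not None:
            args += ['priority', str(priority)]
        return args


    print(build_ip_rule_delete_args(16))         # ['ip', 'rule', 'del', 'table', '16']
    print(build_ip_rule_delete_args(16, 32768))  # [..., 'priority', '32768']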

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416315

Title:
  delete ip rule in dvr agent failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  when I create a dvr router, I find the following error in the l3 agent log:

  2015-01-30 09:04:41.979 26426 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qr
  outer-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda', 'ip', 'rule', 'del', 'priority', 
'None'] create_process /opt/stack/neutron/neutron/agent/linux/utils.py:46
  2015-01-30 09:04:42.238 26426 ERROR neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda', 'ip', 'rule', 'del', 'priorit
  y', 'None']
  Exit code: 255
  Stdout: ''
  Stderr: 'Error: argument "None" is wrong: preference value is invalid\n\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416306] [NEW] duplicate routes in the namespace of dvr router agent

2015-01-30 Thread shihanzhang
Public bug reported:

when I create a dvr router, I find duplicate ip rules in the router
namespace:

root@devstack:~# ip netns exec qrouter-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda ip 
rule
0:  from all lookup local 
32766:  from all lookup main 
32767:  from all lookup default 
32768:  from 10.0.0.3 lookup 16 
32769:  from 20.20.20.5 lookup 16 
167772161:  from 10.0.0.1/24 lookup 167772161 
167772161:  from 10.0.0.1/24 lookup 167772161 
167772161:  from 10.0.0.1/24 lookup 167772161 
167772161:  from 10.0.0.1/24 lookup 167772161 
167772161:  from 10.0.0.1/24 lookup 167772161 
336860161:  from 20.20.20.1/24 lookup 336860161 
336860161:  from 20.20.20.1/24 lookup 336860161 
336860161:  from 20.20.20.1/24 lookup 336860161 
336860161:  from 20.20.20.1/24 lookup 336860161 
336860161:  from 20.20.20.1/24 lookup 336860161

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416306

Title:
  duplicate routes in the namespace of dvr router agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  when I create a dvr router, I find duplicate ip rules in the router
  namespace:

  root@devstack:~# ip netns exec qrouter-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda 
ip rule
  0:  from all lookup local 
  32766:  from all lookup main 
  32767:  from all lookup default 
  32768:  from 10.0.0.3 lookup 16 
  32769:  from 20.20.20.5 lookup 16 
  167772161:  from 10.0.0.1/24 lookup 167772161 
  167772161:  from 10.0.0.1/24 lookup 167772161 
  167772161:  from 10.0.0.1/24 lookup 167772161 
  167772161:  from 10.0.0.1/24 lookup 167772161 
  167772161:  from 10.0.0.1/24 lookup 167772161 
  336860161:  from 20.20.20.1/24 lookup 336860161 
  336860161:  from 20.20.20.1/24 lookup 336860161 
  336860161:  from 20.20.20.1/24 lookup 336860161 
  336860161:  from 20.20.20.1/24 lookup 336860161 
  336860161:  from 20.20.20.1/24 lookup 336860161

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416278] [NEW] Ha router should not schedule to 'dvr_snat' agent

2015-01-29 Thread shihanzhang
Public bug reported:


An HA router should not be scheduled to a 'dvr_snat' agent, but
'get_l3_agent_candidates' allows an HA router to be scheduled to a 'dvr_snat'
agent:

def get_l3_agent_candidates(self, context, sync_router, l3_agents):
    """Get the valid l3 agents for the router from a list of l3_agents."""
    candidates = []
    for l3_agent in l3_agents:
        if not l3_agent.admin_state_up:
            continue
        agent_conf = self.get_configuration_dict(l3_agent)
        router_id = agent_conf.get('router_id', None)
        use_namespaces = agent_conf.get('use_namespaces', True)
        handle_internal_only_routers = agent_conf.get(
            'handle_internal_only_routers', True)
        gateway_external_network_id = agent_conf.get(
            'gateway_external_network_id', None)
        agent_mode = agent_conf.get('agent_mode', 'legacy')
        if not use_namespaces and router_id != sync_router['id']:
            continue
        ex_net_id = (sync_router['external_gateway_info'] or {}).get(
            'network_id')
        if ((not ex_net_id and not handle_internal_only_routers) or
                (ex_net_id and gateway_external_network_id and
                 ex_net_id != gateway_external_network_id)):
            continue
        is_router_distributed = sync_router.get('distributed', False)
        if agent_mode in ('legacy', 'dvr_snat') and (
                not is_router_distributed):
            candidates.append(l3_agent)

so the check "if agent_mode in ('legacy', 'dvr_snat')" should be
"if agent_mode == 'legacy'"
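
A hedged sketch of the corrected branch (same names as the quoted snippet;
this only illustrates the proposed change, not necessarily the final upstream
patch):

    is_router_distributed = sync_router.get('distributed', False)
    if agent_mode == 'legacy' and not is_router_distributed:
        # HA and plain legacy routers stay off 'dvr_snat' agents.
        candidates.append(l3_agent)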

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416278

Title:
  Ha router should not schedule to 'dvr_snat' agent

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
  An HA router should not be scheduled to a 'dvr_snat' agent, but
  'get_l3_agent_candidates' allows an HA router to be scheduled to a
  'dvr_snat' agent:

  def get_l3_agent_candidates(self, context, sync_router, l3_agents):
  """Get the valid l3 agents for the router from a list of l3_agents."""
  candidates = []
  for l3_agent in l3_agents:
  if not l3_agent.admin_state_up:
  continue
  agent_conf = self.get_configuration_dict(l3_agent)
  router_id = agent_conf.get('router_id', None)
  use_namespaces = agent_conf.get('use_namespaces', True)
  handle_internal_only_routers = agent_conf.get(
  'handle_internal_only_routers', True)
  gateway_external_network_id = agent_conf.get(
  'gateway_external_network_id', None)
  agent_mode = agent_conf.get('agent_mode', 'legacy')
  if not use_namespaces and router_id != sync_router['id']:
  continue
  ex_net_id = (sync_router['external_gateway_info'] or {}).get(
  'network_id')
  if ((not ex_net_id and not handle_internal_only_routers) or
  (ex_net_id and gateway_external_network_id and
   ex_net_id != gateway_external_network_id)):
  continue
  is_router_distributed = sync_router.get('distributed', False)
  if agent_mode in ('legacy', 'dvr_snat') and (
  not is_router_distributed):
  candidates.append(l3_agent)

  so the check "if agent_mode in ('legacy', 'dvr_snat')" should be
  "if agent_mode == 'legacy'"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416181] [NEW] 'router_gateway' port status is always DOWN

2015-01-29 Thread shihanzhang
Public bug reported:

If br-ex does not have 'bridge-id' set, the 'router_gateway' port status will
always be DOWN; the reason is:

def setup_ancillary_bridges(self, integ_br, tun_br):
    '''Setup ancillary bridges - for example br-ex.'''
    ovs = ovs_lib.BaseOVS(self.root_helper)
    ovs_bridges = set(ovs.get_bridges())
    # Remove all known bridges
    ovs_bridges.remove(integ_br)
    if self.enable_tunneling:
        ovs_bridges.remove(tun_br)
    br_names = [self.phys_brs[physical_network].br_name for
                physical_network in self.phys_brs]
    ovs_bridges.difference_update(br_names)
    # Filter list of bridges to those that have external
    # bridge-id's configured
    br_names = []
    for bridge in ovs_bridges:
        bridge_id = ovs.get_bridge_external_bridge_id(bridge)
        if bridge_id != bridge:
            br_names.append(bridge)

if br-ex does not have 'bridge-id' set, the ovs agent will not add it to
ancillary_bridges, so I think that in this case it would be better to just
report a warning message.
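
A hedged sketch of the suggested behaviour (illustration only, not the actual
patch): warn about bridges without a matching bridge-id instead of silently
filtering them out of the ancillary bridge list.

    # Inside setup_ancillary_bridges(), after collecting ovs_bridges:
    for bridge in ovs_bridges:
        bridge_id = ovs.get_bridge_external_bridge_id(bridge)
        if bridge_id != bridge:
            LOG.warning("Bridge %s has no matching external bridge-id set; "
                        "treating it as an ancillary bridge anyway.", bridge)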

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416181

Title:
  'router_gateway' port status is always DOWN

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If br-ex does not have 'bridge-id' set, the 'router_gateway' port status
  will always be DOWN; the reason is:

  def setup_ancillary_bridges(self, integ_br, tun_br):
  '''Setup ancillary bridges - for example br-ex.'''
  ovs = ovs_lib.BaseOVS(self.root_helper)
  ovs_bridges = set(ovs.get_bridges())
  # Remove all known bridges
  ovs_bridges.remove(integ_br)
  if self.enable_tunneling:
  ovs_bridges.remove(tun_br)
  br_names = [self.phys_brs[physical_network].br_name for
  physical_network in self.phys_brs]
  ovs_bridges.difference_update(br_names)
  # Filter list of bridges to those that have external
  # bridge-id's configured
  br_names = []
  for bridge in ovs_bridges:
  bridge_id = ovs.get_bridge_external_bridge_id(bridge)
  if bridge_id != bridge:
  br_names.append(bridge)

  if br-ex does not have 'bridge-id' set, the ovs agent will not add it to
  ancillary_bridges, so I think that in this case it would be better to just
  report a warning message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413512] [NEW] 'interface-attach' SRIOV failed

2015-01-22 Thread shihanzhang
Public bug reported:

Now nova 'interface-attach' does not support SR-IOV nics; I think that at
least 'cold-plug' attach should be supported.

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413512

Title:
  'interface-attach' SRIOV failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Now nova 'interface-attach' does not support SRIOV nic, I think that
  'cold-plug' should support

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1413512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413123] Re: L3 agent does not check if "external network bridge" exists at its beginning

2015-01-21 Thread shihanzhang
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413123

Title:
  L3 agent does not check if "external network bridge" exists at its
  beginning

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Currently the L3 agent checks whether the 'external network bridge' exists
  inside its processing loop:

  def _process_router_if_compatible(self, router):
  if (self.conf.external_network_bridge and
  not ip_lib.device_exists(self.conf.external_network_bridge)):
  LOG.error(_LE("The external network bridge '%s' does not exist"),
self.conf.external_network_bridge)
  return

  I think it is better to perform this check when the L3 agent starts, in
  _check_config_params(self).

  If the 'external network bridge' does not exist, the L3 agent should exit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413123] [NEW] L3 agent does not check if "external network bridge" exists at its beginning

2015-01-21 Thread shihanzhang
Public bug reported:

Currently the L3 agent checks whether the 'external network bridge' exists
inside its processing loop:

def _process_router_if_compatible(self, router):
if (self.conf.external_network_bridge and
not ip_lib.device_exists(self.conf.external_network_bridge)):
LOG.error(_LE("The external network bridge '%s' does not exist"),
  self.conf.external_network_bridge)
return

I think it is better to perform this check when the L3 agent starts, in
_check_config_params(self).

If the 'external network bridge' does not exist, the L3 agent should exit, as
sketched below.
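
A minimal sketch, reusing the agent's existing conf, ip_lib and LOG helpers,
of what the proposed startup-time check could look like (illustrative only,
not a merged fix):

def _check_config_params(self):
    # fail fast: without the external network bridge the agent cannot wire
    # any router gateway port, so refuse to start at all
    if (self.conf.external_network_bridge and
            not ip_lib.device_exists(self.conf.external_network_bridge)):
        LOG.error(_LE("The external network bridge '%s' does not exist"),
                  self.conf.external_network_bridge)
        raise SystemExit(1)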

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413123

Title:
  L3 agent does not check if "external network bridge" exists at its
  beginning

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently the L3 agent checks whether the 'external network bridge' exists
  inside its processing loop:

  def _process_router_if_compatible(self, router):
  if (self.conf.external_network_bridge and
  not ip_lib.device_exists(self.conf.external_network_bridge)):
  LOG.error(_LE("The external network bridge '%s' does not exist"),
self.conf.external_network_bridge)
  return

  I think it is better to perform this check when the L3 agent starts, in
  _check_config_params(self).

  If the 'external network bridge' does not exist, the L3 agent should exit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408529] [NEW] nova boot vm with '--nic net-id=xxxx, v4-fixed-ip=xxx'

2015-01-07 Thread shihanzhang
Public bug reported:

Currently, booting a VM with '--nic net-id=xxxx,v4-fixed-ip=xxx' fails; the
error in the nova-compute log is below:

  File "/opt/stack/nova/nova/network/neutronv2/__init__.py", line 84
, in wrapper
ret = obj(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/cl
ient.py", line 1266, in serialize
self.get_attr_metadata()).serialize(data, self.content_type())
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
serializer.py", line 390, in serialize
return self._get_serialize_handler(content_type).serialize(data)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
serializer.py", line 54, in serialize
return self.dispatch(data, action=action)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
serializer.py", line 44, in dispatch
return action_method(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
serializer.py", line 66, in default
return jsonutils.dumps(data, default=sanitizer)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/opensta
ck/common/jsonutils.py", line 168, in dumps
return json.dumps(value, default=default, **kwargs)
  File "/usr/lib/python2.7/json/__init__.py", line 250, in dumps
sort_keys=sort_keys, **kw).encode(obj)
  File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
serializer.py", line 65, in sanitizer
return six.text_type(obj, 'utf8')
TypeError: coercing to Unicode: need string or buffer, IPAddress fou
nd
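
The traceback shows the neutronclient JSON serializer choking on a
netaddr.IPAddress object. A minimal sketch, with illustrative variable names,
of the kind of coercion on the nova side that avoids it: convert the fixed IP
to a plain string before building the port request body.

import netaddr

fixed_ip = netaddr.IPAddress('10.10.0.5')   # value parsed from v4-fixed-ip
port_req_body = {'port': {'network_id': 'xxxx',
                          'fixed_ips': [{'ip_address': str(fixed_ip)}]}}
# str(fixed_ip) == '10.10.0.5', which json.dumps() can serialize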

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New


** Tags: python-neutronclient

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

** Tags added: python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408529

Title:
  nova boot vm with '--nic net-id=, v4-fixed-ip=xxx'

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, booting a VM with '--nic net-id=xxxx,v4-fixed-ip=xxx' fails; the
  error in the nova-compute log is below:

File "/opt/stack/nova/nova/network/neutronv2/__init__.py", line 84
  , in wrapper
  ret = obj(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/cl
  ient.py", line 1266, in serialize
  self.get_attr_metadata()).serialize(data, self.content_type())
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 390, in serialize
  return self._get_serialize_handler(content_type).serialize(data)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 54, in serialize
  return self.dispatch(data, action=action)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 44, in dispatch
  return action_method(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 66, in default
  return jsonutils.dumps(data, default=sanitizer)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/opensta
  ck/common/jsonutils.py", line 168, in dumps
  return json.dumps(value, default=default, **kwargs)
File "/usr/lib/python2.7/json/__init__.py", line 250, in dumps
  sort_keys=sort_keys, **kw).encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
  chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
  return _iterencode(o, 0)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 65, in sanitizer
  return six.text_type(obj, 'utf8')
  TypeError: coercing to Unicode: need string or buffer, IPAddress found

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401823] Re: if L3 agent is down, it should not schedule router on it

2014-12-24 Thread shihanzhang
OK, I agree with you; 'get_routers' is a call() method.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401823

Title:
  if L3 agent is down, it should not schedule router on it

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  
  When the neutron server schedules routers, it should not schedule a router
  onto an L3 agent that is down:

  def auto_schedule_routers(self, plugin, context, host, router_ids):
  l3_agent = plugin.get_enabled_agent_on_host(
  context, constants.AGENT_TYPE_L3, host)
  if not l3_agent:
  return False

  If the L3 agent is down, 'plugin.get_enabled_agent_on_host' should return
  None:

  def get_enabled_agent_on_host(self, context, agent_type, host):
  """Return agent of agent_type for the specified host."""
  query = context.session.query(Agent)
  query = query.filter(Agent.agent_type == agent_type,
   Agent.host == host,
   Agent.admin_state_up == sql.true())
  try:
  agent = query.one()
  except exc.NoResultFound:
  LOG.debug('No enabled %(agent_type)s agent on host '
'%(host)s' % {'agent_type': agent_type, 'host': host})
  return
  if self.is_agent_down(agent.heartbeat_timestamp):
  LOG.warn(_LW('%(agent_type)s agent %(agent_id)s is not active'),
   {'agent_type': agent_type, 'agent_id': agent.id})
  return agent

  So in the 'if self.is_agent_down' branch it should return None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405375] [NEW] Ha router should not have l3 agent less than "min_l3_agents_per_router"

2014-12-24 Thread shihanzhang
Public bug reported:

When a router is HA, it should be bound to at least
"min_l3_agents_per_router" L3 agents. Currently, however, if an HA router is
bound to two L3 agents and "min_l3_agents_per_router=2", one of those L3
agents can still be removed with "l3-agent-router-remove".
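
A minimal sketch, with illustrative names, of the guard this report asks for:
reject 'l3-agent-router-remove' when it would leave an HA router with fewer
than min_l3_agents_per_router agents.

def remove_router_from_l3_agent(self, context, agent_id, router_id):
    router = self.get_router(context, router_id)
    if router.get('ha'):
        hosting = self.get_l3_agents_hosting_routers(context, [router_id])
        if len(hosting) <= cfg.CONF.min_l3_agents_per_router:
            # unbinding one more agent would break the HA guarantee
            raise n_exc.BadRequest(
                resource='router',
                msg='HA router %s needs at least %d L3 agents' %
                    (router_id, cfg.CONF.min_l3_agents_per_router))
    # ... fall through to the normal unbinding logic ...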

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405375

Title:
  Ha router should not have l3 agent less than
  "min_l3_agents_per_router"

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a router is HA, it should be bound to at least
  "min_l3_agents_per_router" L3 agents. Currently, however, if an HA router
  is bound to two L3 agents and "min_l3_agents_per_router=2", one of those
  L3 agents can still be removed with "l3-agent-router-remove".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401823] [NEW] if L3 agent is down, it should not schedule router on it

2014-12-12 Thread shihanzhang
Public bug reported:


When the neutron server schedules routers, it should not schedule a router
onto an L3 agent that is down:

def auto_schedule_routers(self, plugin, context, host, router_ids):
l3_agent = plugin.get_enabled_agent_on_host(
context, constants.AGENT_TYPE_L3, host)
if not l3_agent:
return False

If the L3 agent is down, 'plugin.get_enabled_agent_on_host' should return
None:

def get_enabled_agent_on_host(self, context, agent_type, host):
"""Return agent of agent_type for the specified host."""
query = context.session.query(Agent)
query = query.filter(Agent.agent_type == agent_type,
 Agent.host == host,
 Agent.admin_state_up == sql.true())
try:
agent = query.one()
except exc.NoResultFound:
LOG.debug('No enabled %(agent_type)s agent on host '
  '%(host)s' % {'agent_type': agent_type, 'host': host})
return
if self.is_agent_down(agent.heartbeat_timestamp):
LOG.warn(_LW('%(agent_type)s agent %(agent_id)s is not active'),
 {'agent_type': agent_type, 'agent_id': agent.id})
return agent

So in the 'if self.is_agent_down' branch it should return None, as sketched
below.
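
For clarity, a minimal sketch of the ending the reporter expects for
get_enabled_agent_on_host: return None instead of the stale agent row when
the heartbeat shows the agent is down, so that the 'if not l3_agent' check in
auto_schedule_routers takes effect.

if self.is_agent_down(agent.heartbeat_timestamp):
    LOG.warn(_LW('%(agent_type)s agent %(agent_id)s is not active'),
             {'agent_type': agent_type, 'agent_id': agent.id})
    return None    # the caller's "if not l3_agent: return False" then kicks in
return agent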

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401823

Title:
  if L3 agent is down, it should not schedule router on it

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  When the neutron server schedules routers, it should not schedule a router
  onto an L3 agent that is down:

  def auto_schedule_routers(self, plugin, context, host, router_ids):
  l3_agent = plugin.get_enabled_agent_on_host(
  context, constants.AGENT_TYPE_L3, host)
  if not l3_agent:
  return False

  If the L3 agent is down, 'plugin.get_enabled_agent_on_host' should return
  None:

  def get_enabled_agent_on_host(self, context, agent_type, host):
  """Return agent of agent_type for the specified host."""
  query = context.session.query(Agent)
  query = query.filter(Agent.agent_type == agent_type,
   Agent.host == host,
   Agent.admin_state_up == sql.true())
  try:
  agent = query.one()
  except exc.NoResultFound:
  LOG.debug('No enabled %(agent_type)s agent on host '
'%(host)s' % {'agent_type': agent_type, 'host': host})
  return
  if self.is_agent_down(agent.heartbeat_timestamp):
  LOG.warn(_LW('%(agent_type)s agent %(agent_id)s is not active'),
   {'agent_type': agent_type, 'agent_id': agent.id})
  return agent

  So in the 'if self.is_agent_down' branch it should return None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373287] [NEW] Security group doesn't work when L2 agent enable ipset

2014-09-24 Thread shihanzhang
Public bug reported:

In the case below, if the L2 agent enables ipset, security groups do not work:

1. create new security group with IPv6 ingress rule (and no IPv4)
2. launch an instance in this security group

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373287

Title:
  Security group doesn't work when L2 agent enable ipset

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the case below, if the L2 agent enables ipset, security groups do not
  work:

  1. create new security group with IPv6 ingress rule (and no IPv4)
  2. launch an instance in this security group

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373287/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372337] [NEW] Security group KeyError

2014-09-22 Thread shihanzhang
Public bug reported:

In the case below, a KeyError occurs in the neutron server:
1. create a port with an IPv4 address in the default security group
2. delete the default IPv4 ingress rule
The neutron server then raises a KeyError.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372337

Title:
  Security group KeyError

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the case below, a KeyError occurs in the neutron server:
  1. create a port with an IPv4 address in the default security group
  2. delete the default IPv4 ingress rule
  The neutron server then raises a KeyError.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371435] [NEW] Remove unnecessary iptables reload when L2 agent enable ipset

2014-09-18 Thread shihanzhang
Public bug reported:

When the L2 agent enables ipset and a security group only updates its
members, iptables should not be reloaded; it is enough to add the members to
the ipset set. There is room to improve here.
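
A minimal sketch, with hypothetical helper names (self.execute stands for
whatever root-wrapped command runner the agent uses), of the optimisation
suggested here: when only the membership of a remote group changes, update
the ipset set in place instead of regenerating and reloading the iptables
rules.

def security_group_members_updated(self, set_name, added_ips, removed_ips):
    # the iptables rules keep matching the named set, so no iptables
    # reload is needed; only the set contents change
    for ip in added_ips:
        self.execute(['ipset', 'add', '-exist', set_name, ip])
    for ip in removed_ips:
        self.execute(['ipset', 'del', '-exist', set_name, ip])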

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371435

Title:
  Remove unnecessary iptables reload when L2 agent enable ipset

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the L2 agent enables ipset and a security group only updates its
  members, iptables should not be reloaded; it is enough to add the members
  to the ipset set. There is room to improve here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369431] [NEW] Don't create ipset chain if corresponding security group has no member

2014-09-15 Thread shihanzhang
Public bug reported:

When a security group has the rule below, it should not create an ipset set.
The security group id is fake_sgid and it has this rule:
{'direction': 'ingress', 'remote_group_id': 'fake_sgid2'}
Since security group fake_sgid2 has no members, ports in security group
fake_sgid should not get a corresponding ipset set created.

root@devstack:/opt/stack/neutron# ipset list
Name: IPv409040f9f-cb86-4f72-a
Type: hash:ip
Revision: 2
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 16520
References: 1
Members:
20.20.20.11

Name: IPv609040f9f-cb86-4f72-a
Type: hash:ip
Revision: 2
Header: family inet6 hashsize 1024 maxelem 65536
Size in memory: 16504
References: 1
Members:

Because security group 09040f9f-cb86-4f72-af74-4de4f2b86442 has no IPv6
members, it shouldn't create the ipset set IPv609040f9f-cb86-4f72-a.
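
A minimal sketch, with hypothetical helper names, of the check being
requested: only create the per-ethertype ipset set when the remote group
actually has members of that address family.

def _refresh_remote_group_set(self, ethertype, sg_id, member_ips):
    if not member_ips:
        # e.g. the IPv6 case above: an empty set only costs kernel memory
        # and an extra iptables lookup, so do not create it at all
        return
    set_name = ethertype + sg_id[:20]    # truncated to fit ipset name limits
    self._create_or_update_set(set_name, member_ips)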

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369431

Title:
  Don't create ipset chain if corresponding security group has  no
  member

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When a security group has the rule below, it should not create an ipset set.
  The security group id is fake_sgid and it has this rule:
  {'direction': 'ingress', 'remote_group_id': 'fake_sgid2'}
  Since security group fake_sgid2 has no members, ports in security group
  fake_sgid should not get a corresponding ipset set created.

  root@devstack:/opt/stack/neutron# ipset list
  Name: IPv409040f9f-cb86-4f72-a
  Type: hash:ip
  Revision: 2
  Header: family inet hashsize 1024 maxelem 65536
  Size in memory: 16520
  References: 1
  Members:
  20.20.20.11

  Name: IPv609040f9f-cb86-4f72-a
  Type: hash:ip
  Revision: 2
  Header: family inet6 hashsize 1024 maxelem 65536
  Size in memory: 16504
  References: 1
  Members:

  Because security group 09040f9f-cb86-4f72-af74-4de4f2b86442 has no IPv6
  members, it shouldn't create the ipset set IPv609040f9f-cb86-4f72-a.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335375] [NEW] ping still working once connected even after related security group rule is deleted

2014-06-27 Thread shihanzhang
Public bug reported:

After we create an ICMP rule for a security group, even if we later delete
this rule, VMs in this security group can still be pinged once a connection
has been established; there is the same problem with floating IPs, bug
#1334926.
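
The usual explanation for this behaviour is that already-established flows
keep matching the RELATED,ESTABLISHED accept rule via conntrack even after
the security group rule is gone. A minimal sketch (an illustration with
conntrack-tools, not the neutron fix) of clearing such entries when the rule
is deleted:

import subprocess

def purge_icmp_conntrack(vm_ip):
    # drop all tracked ICMP flows towards the VM so the next ping has to
    # pass the (now stricter) security group rules again
    subprocess.call(['conntrack', '-D', '-p', 'icmp', '--orig-dst', vm_ip])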

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1335375

Title:
  ping still working once connected even after related security group
  rule is deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  After we create an ICMP rule for a security group, even if we later delete
  this rule, VMs in this security group can still be pinged once a connection
  has been established; there is the same problem with floating IPs, bug
  #1334926.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1335375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320111] [NEW] In lb-healthmonitor-create, 'delay ' should be greater than 'timeout'

2014-05-16 Thread shihanzhang
Public bug reported:

When I use lb-healthmonitor-create, I found that 'delay' can be smaller than
'timeout'; that does not make sense.
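
A minimal sketch of the validation this implies, assuming neutron's
InvalidInput exception and placed wherever the health monitor attributes are
checked (the function name is illustrative):

def _validate_health_monitor_timing(monitor):
    if monitor['delay'] < monitor['timeout']:
        raise n_exc.InvalidInput(
            error_message='health monitor delay must be greater than or '
                          'equal to its timeout')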

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1320111

Title:
  In lb-healthmonitor-create, 'delay ' should be greater than 'timeout'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I use lb-healthmonitor-create, I found that 'delay' can be smaller
  than 'timeout'; that does not make sense.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1320111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320062] [NEW] LBaaS health_monitor's timeout shouldn't be -1

2014-05-15 Thread shihanzhang
Public bug reported:

The value of timeout in health_monitor should be limited; the timeout
shouldn't be -1. In extensions/loadbalancer.py the definition is:

'timeout': {'allow_post': True, 'allow_put': True,
'convert_to': attr.convert_to_int,
'is_visible': True},
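
A minimal sketch of how the attribute map could be tightened, assuming the
standard 'type:non_negative' validator from neutron's attributes module is
available:

'timeout': {'allow_post': True, 'allow_put': True,
            'convert_to': attr.convert_to_int,
            'validate': {'type:non_negative': None},
            'is_visible': True},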

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1320062

Title:
  LBaaS health_monitor's timeout shouldn't be -1

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The value of timeout in health_monitor should be limited; the timeout
  shouldn't be -1. In extensions/loadbalancer.py the definition is:

  'timeout': {'allow_post': True, 'allow_put': True,
  'convert_to': attr.convert_to_int,
  'is_visible': True},

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1320062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315820] [NEW] lbaas health monitor use PING failed

2014-05-04 Thread shihanzhang
Public bug reported:

I've enabled LBaaS with haproxy and configured a pool with a VIP and members.
The health_monitor type is 'PING', yet I found that the status of the members
is always 'INACTIVE'. The code in '_get_server_health_option' is:

server_addon = ' check inter %(delay)ds fall %(max_retries)d' % monitor
opts = [
'timeout check %ds' % monitor['timeout']
]

if monitor['type'] in (constants.HEALTH_MONITOR_HTTP,
   constants.HEALTH_MONITOR_HTTPS):
opts.append('option httpchk %(http_method)s %(url_path)s' % monitor)
opts.append(
'http-check expect rstatus %s' %
'|'.join(_expand_expected_codes(monitor['expected_codes']))
)

if monitor['type'] == constants.HEALTH_MONITOR_HTTPS:
opts.append('option ssl-hello-chk')

I want to know whether health_monitor supports the type 'PING'?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1315820

Title:
  lbaas health monitor use PING failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I've enabled LBaaS with haproxy and configured a pool with a VIP and
  members. The health_monitor type is 'PING', yet I found that the status of
  the members is always 'INACTIVE'. The code in '_get_server_health_option'
  is:

  server_addon = ' check inter %(delay)ds fall %(max_retries)d' % monitor
  opts = [
  'timeout check %ds' % monitor['timeout']
  ]

  if monitor['type'] in (constants.HEALTH_MONITOR_HTTP,
 constants.HEALTH_MONITOR_HTTPS):
  opts.append('option httpchk %(http_method)s %(url_path)s' % monitor)
  opts.append(
  'http-check expect rstatus %s' %
  '|'.join(_expand_expected_codes(monitor['expected_codes']))
  )

  if monitor['type'] == constants.HEALTH_MONITOR_HTTPS:
  opts.append('option ssl-hello-chk')

  I want to know whether health_monitor supports the type 'PING'?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1315820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304234] [NEW] the problem of updating quota

2014-04-08 Thread shihanzhang
Public bug reported:

In nova, if you update a quota value from large to small, it will fail; but
with DbQuotaDriver, updating a quota value from large to small is allowed. I
think it should behave as in nova.

** Affects: cinder
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304234

Title:
  the problem of updating quota

Status in Cinder:
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In nova, if you update a quota value from large to small, it will fail; but
  with DbQuotaDriver, updating a quota value from large to small is allowed.
  I think it should behave as in nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1304234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302452] [NEW] nova unshelve VM failed

2014-04-04 Thread shihanzhang
Public bug reported:

I boot a VM with '--boot-volume volume_id', then shelve this VM; nova returns
success but doesn't upload an image to glance. When I then unshelve this VM,
it fails and the VM's state stays 'unshelving', so I think a VM booted from a
volume should not be allowed to shelve.

the error log in nova-conductor is below:
 Traceback (most recent call last):
   File "/usr/lib64/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", 
line 461, in _process_data
 **args)
   File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
 result = getattr(proxyobj, method)(ctxt, **kwargs)
   File "/usr/lib64/python2.6/site-packages/nova/conductor/manager.py", line 
797, in unshelve_instance
 sys_meta['shelved_image_id'])
 KeyError: 'shelved_image_id'
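
A minimal sketch (illustrative only, assuming the surrounding sys_meta, LOG
and instance variables) of a defensive lookup that would turn the KeyError
above into a handled condition for boot-from-volume instances that were
shelved without a glance snapshot:

shelved_image_id = sys_meta.get('shelved_image_id')
if shelved_image_id is None:
    # a boot-from-volume shelve uploaded no glance snapshot, so there is
    # no image to restore from; handle (or reject) this case explicitly
    # instead of letting the KeyError bubble up
    LOG.warning('Instance %s has no shelved_image_id in its system '
                'metadata; cannot unshelve from an image', instance['uuid'])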

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
     Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

** Description changed:

  I boot a vm with '--boot-volume volume_id ', then shelve this VM, nova
  return success, but doesn't upload image to glance, then I unshelve this
  vm,  it will failed and this VM'state is unshelving, so I think a VM
  which is booted from volume_id is not allowed shelve!
+ 
+ the error log in nova-conductor is bellow:
+  Traceback (most recent call last):
+File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 
461, in _process_data
+  **args)
+File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
+  result = getattr(proxyobj, method)(ctxt, **kwargs)
+File "/usr/lib64/python2.6/site-packages/nova/conductor/manager.py", line 
797, in unshelve_instance
+  sys_meta['shelved_image_id'])
+  KeyError: 'shelved_image_id'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302452

Title:
  nova unshelve VM failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  I boot a VM with '--boot-volume volume_id', then shelve this VM; nova
  returns success but doesn't upload an image to glance. When I then unshelve
  this VM, it fails and the VM's state stays 'unshelving', so I think a VM
  booted from a volume should not be allowed to shelve.

  the error log in nova-conductor is below:
   Traceback (most recent call last):
 File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 
461, in _process_data
   **args)
 File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
   result = getattr(proxyobj, method)(ctxt, **kwargs)
 File "/usr/lib64/python2.6/site-packages/nova/conductor/manager.py", line 
797, in unshelve_instance
   sys_meta['shelved_image_id'])
   KeyError: 'shelved_image_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302275] [NEW] DHCP service start failed

2014-04-03 Thread shihanzhang
Public bug reported:

In neutron.conf I set 'force_gateway_on_subnet = False', then created a
subnet with a gateway IP outside the subnet's CIDR. An error appears in
dhcp.log; the log is below:

 [-] Unable to enable dhcp for af572283-3f32-4a12-9419-1154e9c386f9.
 Traceback (most recent call last):
   File "/usr/lib64/python2.6/site-packages/neutron/agent/dhcp_agent.py", line 
128, in call_driver
 getattr(driver, action)(**action_kwargs)
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py", line 
167, in enable
 reuse_existing=True)
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py", line 
727, in setup
 self._set_default_route(network)
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py", line 
615, in _set_default_route
 device.route.add_gateway(subnet.gateway_ip)
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 368, in add_gateway
 self._as_root(*args)
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 217, in _as_root
 kwargs.get('use_root_namespace', False))
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 70, in _as_root
 namespace)
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 81, in _execute
 root_helper=root_helper)
   File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/utils.py", line 
75, in execute
 raise RuntimeError(m)
 RuntimeError:
 Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-af572283-3f32-4a12-9419-1154e9c386f9', 'ip', 
'route', 'replace', 'default', 'via', '80.80.1.1', 'dev', 'tapcb420785-8d']
 Exit code: 2
 Stdout: ''
 Stderr: 'RTNETLINK answers: No such process\n'
 TRACE neutron.agent.dhcp_agent
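
The 'RTNETLINK answers: No such process' error comes from installing a
default route whose next hop (80.80.1.1) lies outside the interface's subnet.
A minimal sketch, using plain 'ip' commands through subprocess (illustrative,
not the dhcp-agent code), of the usual two-step workaround: add an on-link
route to the gateway first, then the default route.

import subprocess

def set_offlink_default_route(namespace, device, gateway_ip):
    netns = ['ip', 'netns', 'exec', namespace]
    # 1. make the off-subnet gateway directly reachable on this device
    subprocess.check_call(netns + ['ip', 'route', 'replace', gateway_ip,
                                   'dev', device, 'scope', 'link'])
    # 2. the default route via that gateway can now be installed
    subprocess.check_call(netns + ['ip', 'route', 'replace', 'default',
                                   'via', gateway_ip, 'dev', device])

# e.g. set_offlink_default_route('qdhcp-af572283-3f32-4a12-9419-1154e9c386f9',
#                                'tapcb420785-8d', '80.80.1.1')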

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302275

Title:
  DHCP service start failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In neutron.conf I set 'force_gateway_on_subnet = False', then created a
  subnet with a gateway IP outside the subnet's CIDR. An error appears in
  dhcp.log; the log is below:

   [-] Unable to enable dhcp for af572283-3f32-4a12-9419-1154e9c386f9.
   Traceback (most recent call last):
 File "/usr/lib64/python2.6/site-packages/neutron/agent/dhcp_agent.py", 
line 128, in call_driver
   getattr(driver, action)(**action_kwargs)
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py", 
line 167, in enable
   reuse_existing=True)
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py", 
line 727, in setup
   self._set_default_route(network)
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py", 
line 615, in _set_default_route
   device.route.add_gateway(subnet.gateway_ip)
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 368, in add_gateway
   self._as_root(*args)
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 217, in _as_root
   kwargs.get('use_root_namespace', False))
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 70, in _as_root
   namespace)
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py", 
line 81, in _execute
   root_helper=root_helper)
 File "/usr/lib64/python2.6/site-packages/neutron/agent/linux/utils.py", 
line 75, in execute
   raise RuntimeError(m)
   RuntimeError:
   Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-af572283-3f32-4a12-9419-1154e9c386f9', 'ip', 
'route', 'replace', 'default', 'via', '80.80.1.1', 'dev', 'tapcb420785-8d']
   Exit code: 2
   Stdout: ''
   Stderr: 'RTNETLINK answers: No such process\n'
   TRACE neutron.agent.dhcp_agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299317] [NEW] use 'interface-attach' without option parameter happen ERROR

2014-03-28 Thread shihanzhang
power_state': 1, 
u'default_ephemeral_device': None, u'progress': 0, u'project_id': 
u'd13fb5f6d2354320bf4767f9b71df820', u'launched_at': 
u'2014-03-27T21:03:05.00', u'scheduled_at': u'2014-03-27T21:02:59.00', 
u'node': u'ubuntu01', u'ramdisk_id': u'', u'access_ip_v6': None, 
u'access_ip_v4': None, u'deleted': False, u'key_name': None, u'updated_at': 
u'2014-03-27T21:03:05.00', u'host': u'ubuntu01', u'architecture': None, 
u'user_id': u'bcac7970f8ae41f38f79e01dece39bd8', u'system_metadata': 
{u'image_min_
 disk': u'1', u'instance_type_memory_mb': u'512', u'instance_type_swap': u'0', 
u'instance_type_vcpu_weight': None, u'instance_type_root_gb': u'1', 
u'instance_type_id': u'3', u'instance_type_name': u'm1.tiny', 
u'instance_type_ephemeral_gb': u'0', u'instance_type_rxtx_factor': u'1', 
u'instance_type_flavorid': u'1', u'image_container_format': u'bare', 
u'instance_type_vcpus': u'1', u'image_min_ram': u'0', u'image_disk_format': 
u'qcow2', u'image_base_image_ref': u'8449d5ce-bd3d-4096-8cd5-3d9e5dc5d7ee'}, 
u'task_state': None, u'shutdown_terminate': False, u'cell_name': None, 
u'root_gb': 1, u'locked': False, u'name': u'instance-0001', u'created_at': 
u'2014-03-27T21:02:59.00', u'locked_by': None, u'launch_index': 0, 
u'metadata': {}, u'memory_mb': 512, u'vcpus': 1, u'image_ref': 
u'8449d5ce-bd3d-4096-8cd5-3d9e5dc5d7ee', u'root_device_name': u'/dev/vda', 
u'auto_disk_config': False, u'os_type': None, u'config_drive': u''}

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299317

Title:
  use 'interface-attach' without option parameter happen ERROR

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  I use 'interface-attach' without any optional parameters to add a vNIC to a
  VM; nova returns a failure, but 'nova list' shows the vNIC info added to
  that VM. I did it as follows:

  root@ubuntu01:/var/log/nova# nova list
  
+--+--+++-++
  | ID   | Name | Status | Task State | Power 
State | Networks   |
  
+--+--+++-++
  | 663dc949-11f9-4aab-aaf7-6f5bd761ab6f | test | ACTIVE | None   | Running 
| test=10.10.0.5 |
  
+--+--+++-++
  root@ubuntu01:/var/log/nova# nova interface-attach test
  ERROR: Failed to attach interface (HTTP 500) (Request-ID: 
req-5af0e807-521f-45a2-a329-fd61ec74779e)
  root@ubuntu01:/var/log/nova# nova list
  
+--+--+++-++
  | ID   | Name | Status | Task State | Power 
State | Networks   |
  
+--+--+++-++
  | 663dc949-11f9-4aab-aaf7-6f5bd761ab6f | test | ACTIVE | None   | Running 
| test=10.10.0.5, 10.10.0.5, 10.10.0.12; test2=20.20.0.2 |
  
+--+--+++-++

  the error log in nova computer is:
   ERROR nova.openstack.common.rpc.amqp 
[req-5af0e807-521f-45a2-a329-fd61ec74779e bcac7970f8ae41f38f79e01dece39bd8 
d13fb5f6d2354320bf4767f9b71df820] Exception during message handling
   TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
   TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
   TRACE nova.openstack.common.rpc.amqp **args)
   TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/no

[Yahoo-eng-team] [Bug 1297701] [NEW] Create VM use another tenant's port, the VM can't communicate with other

2014-03-26 Thread shihanzhang
Public bug reported:

An admin user creates a port for another project, then uses this port to
create a VM. The VM can't communicate with others because the security group
rules do not take effect, and the VM in nova does not show an IP.

root@ubuntu01:/var/log/neutron# neutron port-show 
66c2d6bd-7d39-4948-b561-935cb9d264eb
+---+---+
| Field | Value 
|
+---+---+
| admin_state_up| True  
|
| allowed_address_pairs | {"ip_address": "169.254.16.253", "mac_address": 
"fa:16:3e:48:73:a7"}  |
| binding:capabilities  | {"port_filter": false}
|
| binding:host_id   |   
|
| binding:vif_type  | unbound   
|
| device_id |   
|
| device_owner  |   
|
| extra_dhcp_opts   |   
|
| fixed_ips | {"subnet_id": "5519e015-fc83-44c2-99ad-d669b3c2c9d7", 
"ip_address": "10.10.10.4"} |
| id| 66c2d6bd-7d39-4948-b561-935cb9d264eb  
|
| mac_address   | fa:16:3e:48:73:a7 
|
| name  |   
|
| network_id| 255f3e92-5a6e-44a5-bbf9-1a62bf5d5935  
|
| security_groups   | 94ad554f-392d-4dd5-8184-357f37b75111  
|
| status| DOWN  
|
| tenant_id | 3badf700bbc749ec9d9869fddc63899f  
|
+---+---+

root@ubuntu01:/var/log/neutron# keystone tenant-list
+--+-+-+
|id|   name  | enabled |
+--+-+-+
| 34fddbc22c184214b823be267837ef81 |  admin  |   True  |
| 48eb4330b6e74a9f9e74d3e191a0fa2e | service |   True  |
+--+-+-+

root@ubuntu01:/var/log/neutron# nova list
+--+---+++-+--+
| ID   | Name  | Status | Task State | Power 
State | Networks |
+--+---+++-+--+
| 5ce98599-75cb-49db-aa76-668491ee3bd0 | test3 | ACTIVE | None   | Running  
   |  |
+--+---++----+-----+--+

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297701

Title:
  Create VM use another tenant's port, the VM can't communicate with
  other

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  An admin user creates a port for another project, then uses this port to
  create a VM. The VM can't communicate with others because the security
  group rules do not take effect, and the VM in nova does not show an IP.

  root@ubuntu01:/var/log/neutron# neutron port-show 
66c2d6bd-7d39-4948-b561-935cb9d264eb
  
+---+---+
  | Field | Value   
  |
  
+---+-

[Yahoo-eng-team] [Bug 1297088] [NEW] unit test of test_delete_ports_by_device_id always failed

2014-03-24 Thread shihanzhang
Public bug reported:

I found that in test_db_plugin.py, the test test_delete_ports_by_device_id
always fails; the error log is below:

INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on subnet 9579ede3-4bc4-43ea-939c-42c9ab027a53. One or more 
ports have an IP allocation from this subnet.
INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on network 5f2ec397-31c7-4e92-acda-79d6093636ba. There are 
one or more ports still in use on the network.
 }}}
 
 Traceback (most recent call last):
   File "neutron/tests/unit/test_db_plugin.py", line 1681, in 
test_delete_ports_by_device_id
 expected_code=webob.exc.HTTPOk.code)
   File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
 self.gen.throw(type, value, traceback)
   File "neutron/tests/unit/test_db_plugin.py", line 567, in subnet
 self._delete('subnets', subnet['subnet']['id'])
   File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
 self.gen.throw(type, value, traceback)
   File "neutron/tests/unit/test_db_plugin.py", line 534, in network
 self._delete('networks', network['network']['id'])
   File "neutron/tests/unit/test_db_plugin.py", line 450, in _delete
 self.assertEqual(res.status_int, expected_code)
   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 321, in assertEqual
 self.assertThat(observed, matcher, message)
   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 406, in assertThat
 raise mismatch_error
 MismatchError: 409 != 204

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297088

Title:
  unit test of test_delete_ports_by_device_id always failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I found that in test_db_plugin.py, the test test_delete_ports_by_device_id
  always fails; the error log is below:

  INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on subnet 9579ede3-4bc4-43ea-939c-42c9ab027a53. One or more 
ports have an IP allocation from this subnet.
  INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on network 5f2ec397-31c7-4e92-acda-79d6093636ba. There are 
one or more ports still in use on the network.
   }}}
   
   Traceback (most recent call last):
 File "neutron/tests/unit/test_db_plugin.py", line 1681, in 
test_delete_ports_by_device_id
   expected_code=webob.exc.HTTPOk.code)
 File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
   self.gen.throw(type, value, traceback)
 File "neutron/tests/unit/test_db_plugin.py", line 567, in subnet
   self._delete('subnets', subnet['subnet']['id'])
 File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
   self.gen.throw(type, value, traceback)
 File "neutron/tests/unit/test_db_plugin.py", line 534, in network
   self._delete('networks', network['network']['id'])
 File "neutron/tests/unit/test_db_plugin.py", line 450, in _delete
   self.assertEqual(res.status_int, expected_code)
 File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 321, in assertEqual
   self.assertThat(observed, matcher, message)
 File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 406, in assertThat
   raise mismatch_error
   MismatchError: 409 != 204

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1297088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294554] [NEW] Create port ERROR in n1kv of Cisco plugin

2014-03-19 Thread shihanzhang
Public bug reported:


In my unit test, create port always raises an error in n1kv. My unit test in
neutron/tests/unit/test_db_plugin.py is:
 def test_prevent_used_dhcp_port_deletion(self):
with self.network() as network:
data = {'port': {'network_id': network['network']['id'],
 'tenant_id': 'tenant_id',
 'device_owner': constants.DEVICE_OWNER_DHCP}}
create_req = self.new_create_request('ports', data)
res = self.deserialize(self.fmt,
   create_req.get_response(self.api))
del_req = self.new_delete_request('ports', res['port']['id'])
delete_res = del_req.get_response(self.api)
self.assertEqual(delete_res.status_int,
 webob.exc.HTTPNoContent.code)

the error log is:

 Traceback (most recent call last):
   File "neutron/api/v2/resource.py", line 87, in resource
 result = method(request=request, **args)
   File "neutron/api/v2/base.py", line 419, in create
 obj = obj_creator(request.context, **kwargs)
   File "neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py", line 1188, in 
create_port
 p_profile = self._get_policy_profile_by_name(p_profile_name)
   File "neutron/plugins/cisco/db/n1kv_db_v2.py", line 1530, in 
_get_policy_profile_by_name
 filter_by(name=name).one())
   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/sqlalchemy/orm/query.py",
 line 2323, in one
 raise orm_exc.NoResultFound("No row was found for one()")
 NoResultFound: No row was found for one()

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294554

Title:
  Create port ERROR in n1kv of Cisco plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  In my unit test, create port always raises an error in n1kv. My unit test
  in neutron/tests/unit/test_db_plugin.py is:
   def test_prevent_used_dhcp_port_deletion(self):
  with self.network() as network:
  data = {'port': {'network_id': network['network']['id'],
   'tenant_id': 'tenant_id',
   'device_owner': constants.DEVICE_OWNER_DHCP}}
  create_req = self.new_create_request('ports', data)
  res = self.deserialize(self.fmt,
 create_req.get_response(self.api))
  del_req = self.new_delete_request('ports', res['port']['id'])
  delete_res = del_req.get_response(self.api)
  self.assertEqual(delete_res.status_int,
   webob.exc.HTTPNoContent.code)

  the error log is:

   Traceback (most recent call last):
 File "neutron/api/v2/resource.py", line 87, in resource
   result = method(request=request, **args)
 File "neutron/api/v2/base.py", line 419, in create
   obj = obj_creator(request.context, **kwargs)
 File "neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py", line 1188, in 
create_port
   p_profile = self._get_policy_profile_by_name(p_profile_name)
 File "neutron/plugins/cisco/db/n1kv_db_v2.py", line 1530, in 
_get_policy_profile_by_name
   filter_by(name=name).one())
 File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/sqlalchemy/orm/query.py",
 line 2323, in one
   raise orm_exc.NoResultFound("No row was found for one()")
   NoResultFound: No row was found for one()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262563] Re: admin vm associate floating ip failed

2014-03-14 Thread shihanzhang
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262563

Title:
  admin vm associate floating ip failed

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Version: Stable/Havana
  When an admin user in one tenant creates a VM on a network belonging to a
  different tenant, associating a floating IP to that VM fails, because that
  tenant's network is not connected to the router that belongs to the admin's
  tenant. The failure log in neutron is below:

  Traceback (most recent call last):
 File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 
84, in resource
   result = method(request=request, **args)
 File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 486, 
in update
   obj = obj_updater(request.context, id, **kwargs)
 File "/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 661, in 
update_floatingip
   context.elevated(), fip_port_id))
 File "/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 582, in 
_update_fip_assoc
   floatingip_db['floating_network_id'])
 File "/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 556, in 
get_assoc_data
   floating_network_id)
 File "/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 492, in 
_get_router_for_floatingip
   port_id=internal_port['id'])
   ExternalGatewayForFloatingIPNotFound: External network 
c2ce01dc-e59e-463b-a860-b0ea69ef3535 is not reachable from subnet 
6229560c-6ebf-40d4-a478-d67864360647.  Therefore, cannot associate Port 
5b816263-0f96-4545-8b37-e4b4ced687af with a Floating IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237282] Re: vm fails to start when compute nodes suddenly powered off

2014-03-14 Thread shihanzhang
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237282

Title:
  vm fails to start when compute nodes suddenly powered off

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  My environment is built by devstack based on ubuntu 12.04

  The compute nodes suddenly powered off, and the VMs on those compute nodes
  fail to start.
  I get this error after executing:
  root@controller:~# nova show cicd-slave-04
  +-+
  | Property| Value 
   
  +-+
  | status  | SHUTOFF   
 
  | updated | 2013-10-09T06:38:06Z  
   
  | OS-EXT-STS:task_state   | None  
  
  | OS-EXT-SRV-ATTR:host| compute2

  root@controller:~# nova start  cicd-slave-04

  I check the nova-compute node then I have this trace in the logs:

  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 430, 
in _process_data
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp rval = 
self.proxy.dispatch(ctxt, version, method, **args)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 133, in dispatch
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp return 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp 
temp_level, payload)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 94, in wrapped
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 209, in 
decorated_function
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 195, in 
decorated_function
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 260, in 
decorated_function
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 237, in 
decorated_function
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 224, in 
decorated_function
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1415, in 
start_instance
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp 
self.driver.power_on(instance)
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1407, in 
power_on
  2013-10-09 16:26:43.965 32682 TRACE nova.openstack.common.rpc.amqp 
self._create_domain(domain

[Yahoo-eng-team] [Bug 1273154] Re: Associate floatingip failed

2014-03-14 Thread shihanzhang
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273154

Title:
  Associate floatingip failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I created a port on an external network, then associated a floating IP with
this port; it failed with the log below:
  404-{u'NeutronError': {u'message': u'External network 
cba5aafb-6dc8-4139-b88e-f057d9f1b7ac is not reachable from subnet 
8989151d-c191-4598-bebe-115343bc513f.  Therefore, cannot associate Port 
a711d8bf-e71c-4ac5-8298-247913205494 with a Floating IP.', u'type': 
u'ExternalGatewayForFloatingIPNotFound', u'detail': u''}}

  The detailed information is:
  neutron port-show a711d8bf-e71c-4ac5-8298-247913205494
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs | 
  |
  | binding:capabilities  | {"port_filter": false}  
  |
  | binding:host_id   | 
  |
  | binding:vif_type  | unbound 
  |
  | device_id | 
  |
  | device_owner  | 
  |
  | extra_dhcp_opts   | 
  |
  | fixed_ips | {"subnet_id": 
"8989151d-c191-4598-bebe-115343bc513f", "ip_address": "172.24.4.3"} |
  | id| a711d8bf-e71c-4ac5-8298-247913205494
  |
  | mac_address   | fa:16:3e:f3:99:34   
  |
  | name  | 
  |
  | network_id| cba5aafb-6dc8-4139-b88e-f057d9f1b7ac
  |
  | security_groups   | 806ec29e-c2bf-4bbe-a7e8-9a73f5af03f9
  |
  | status| DOWN
  |
  | tenant_id | 08fa6853d168413a9698a1870a8abfa3
  |
  
+---+---+

  neutron  floatingip-show 67cf212b-ecad-4e42-8806-d84d8f1c6ecb
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | fixed_ip_address    |                                      |
  | floating_ip_address | 172.24.4.4                           |
  | floating_network_id | cba5aafb-6dc8-4139-b88e-f057d9f1b7ac |
  | id                  | 67cf212b-ecad-4e42-8806-d84d8f1c6ecb |
  | port_id             |                                      |
  | router_id           |                                      |
  | tenant_id           | 08fa6853d168413a9698a1870a8abfa3     |
  +---------------------+--------------------------------------+
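
  For completeness, here is a rough sketch of the association flow that the
  error message expects; the internal network ID and the credentials below
  are placeholders, not values taken from this report. A floating IP can only
  be associated with a port on an internal subnet that reaches the external
  network through a router, whereas the port above was created directly on
  the external network itself, which is why the association fails.

      from neutronclient.v2_0 import client

      # Placeholder credentials/endpoint -- adjust to the deployment.
      neutron = client.Client(username='admin', password='secret',
                              tenant_name='admin',
                              auth_url='http://controller:5000/v2.0')

      internal_net_id = '<internal-network-uuid>'  # placeholder
      floatingip_id = '67cf212b-ecad-4e42-8806-d84d8f1c6ecb'

      # Create the port on an internal network instead of the external one.
      port = neutron.create_port(
          {'port': {'network_id': internal_net_id}})['port']

      # The association succeeds once a router connects that subnet to the
      # external network cba5aafb-6dc8-4139-b88e-f057d9f1b7ac.
      neutron.update_floatingip(floatingip_id,
                                {'floatingip': {'port_id': port['id']}})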

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1273154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291336] Re: Unused parameter in attributes.py

2014-03-12 Thread shihanzhang
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291336

Title:
  Unused parameter in attributes.py

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  There are many methods that have an unused parameter, for example:
  def _validate_uuid_or_none(data, valid_values=None):
      if data is not None:
          return _validate_uuid(data)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291336] [NEW] Unused parameter in attributes.py

2014-03-12 Thread shihanzhang
Public bug reported:

There are many methods that have an unused parameter, for example:
def _validate_uuid_or_none(data, valid_values=None):
    if data is not None:
        return _validate_uuid(data)
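
For context, one plausible reason such "unused" parameters exist (an
assumption on my part, not something stated in this bug) is that the
validators are dispatched through a shared (data, valid_values) calling
convention, so every validator keeps the same signature even when it ignores
the second argument. A minimal, self-contained sketch of that pattern:

    import uuid

    def _validate_uuid(data, valid_values=None):
        # valid_values is ignored here but kept for the common interface.
        try:
            uuid.UUID(str(data))
        except (TypeError, ValueError):
            return "'%s' is not a valid UUID" % (data,)

    def _validate_uuid_or_none(data, valid_values=None):
        if data is not None:
            return _validate_uuid(data)

    # All validators are looked up by name and invoked the same way.
    validators = {'type:uuid_or_none': _validate_uuid_or_none}

    def run_validator(name, data, valid_values=None):
        return validators[name](data, valid_values)

    print(run_validator('type:uuid_or_none', None))          # None -> valid
    print(run_validator('type:uuid_or_none', 'not-a-uuid'))  # error string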

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291336

Title:
  Unused parameter in attributes.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There are many methods that have an unused parameter, for example:
  def _validate_uuid_or_none(data, valid_values=None):
      if data is not None:
          return _validate_uuid(data)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291163] Re: Create VM use port' mac-address '123' failed

2014-03-11 Thread shihanzhang
** Summary changed:

- Create port done not check validity  of mac-address
+ Create VM use port' mac-address '123' failed

** Description changed:

- the api of 'create port' does not check validity  of mac-address, if you use 
this invalid port to create VM , it will failed,
+ neutron api of "create port" can use mac of int value,but nova can't use the 
port to create vm!
  root@ubuntu01:~# neutron port-create  --mac-address 123 test2
  Created a new port:
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs | 
  |
  | binding:capabilities  | {"port_filter": false}  
  |
  | binding:host_id   | 
  |
  | binding:vif_type  | unbound 
  |
  | device_id | 
  |
  | device_owner  | 
  |
  | fixed_ips | {"subnet_id": 
"5519e015-fc83-44c2-99ad-d669b3c2c9d7", "ip_address": "10.10.10.4"} |
  | id| ae33af6e-6f8f-4ce8-928b-4f05396a7db3
  |
  | mac_address   | 123 
  |
  | name  | 
  |
  | network_id| 255f3e92-5a6e-44a5-bbf9-1a62bf5d5935
  |
  | security_groups   | f627556d-64a3-4c1b-8c50-10a58ddaf29f
  |
  | status| DOWN
  |
  | tenant_id | 34fddbc22c184214b823be267837ef81
  |
  
  the failed log of creating VM:
  
-  Traceback (most recent call last):
-File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
1037, in _build_instance
-  set_access_ip=set_access_ip)
-File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
1420, in _spawn
-  LOG.exception(_('Instance failed to spawn'), instance=instance)
-File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
1417, in _spawn
-  block_device_info)
-File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2070, in spawn
-  block_device_info, context=context)
-File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3225, in _create_domain_and_network
-  domain = self._create_domain(xml, instance=instance, power_on=power_on)
-File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3159, in _create_domain
-  raise e
-  libvirtError: XML error: unable to parse mac address '123'
+  Traceback (most recent call last):
+    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
1037, in _build_instance
+  set_access_ip=set_access_ip)
+    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
1420, in _spawn
+  LOG.exception(_('Instance failed to spawn'), instance=instance)
+    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
1417, in _spawn
+  block_device_info)
+    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2070, in spawn
+  block_device_info, context=context)
+    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3225, in _create_domain_and_network
+  domain = self._create_domain(xml, instance=instance, power_on=power_on)
+    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3159, in _create_domain
+  raise e
+  libvirtError: XML error: unable to parse mac address '123'

** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291163

Title:
  Create VM use port' mac-address '123' failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Neutron "create port" API accepts an invalid MAC address such as '123',
but Nova cannot use such a port to create a VM.
  root@ubuntu01:~# neutron port-create  --mac-address 123 test2
  Created a new port:
  
+

[Yahoo-eng-team] [Bug 1291163] [NEW] Create port done not check validity of mac-address

2014-03-11 Thread shihanzhang
Public bug reported:

The 'create port' API does not check the validity of mac-address; if you use
such an invalid port to create a VM, it will fail.
root@ubuntu01:~# neutron port-create  --mac-address 123 test2
Created a new port:
+---+---+
| Field | Value 
|
+---+---+
| admin_state_up| True  
|
| allowed_address_pairs |   
|
| binding:capabilities  | {"port_filter": false}
|
| binding:host_id   |   
|
| binding:vif_type  | unbound   
|
| device_id |   
|
| device_owner  |   
|
| fixed_ips | {"subnet_id": "5519e015-fc83-44c2-99ad-d669b3c2c9d7", 
"ip_address": "10.10.10.4"} |
| id| ae33af6e-6f8f-4ce8-928b-4f05396a7db3  
|
| mac_address   | 123   
|
| name  |   
|
| network_id| 255f3e92-5a6e-44a5-bbf9-1a62bf5d5935  
|
| security_groups   | f627556d-64a3-4c1b-8c50-10a58ddaf29f  
|
| status| DOWN  
|
| tenant_id | 34fddbc22c184214b823be267837ef81  
|

The failure log from creating the VM:

 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1037, 
in _build_instance
 set_access_ip=set_access_ip)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1420, 
in _spawn
 LOG.exception(_('Instance failed to spawn'), instance=instance)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1417, 
in _spawn
 block_device_info)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2070, in spawn
 block_device_info, context=context)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3225, in _create_domain_and_network
 domain = self._create_domain(xml, instance=instance, power_on=power_on)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3159, in _create_domain
 raise e
 libvirtError: XML error: unable to parse mac address '123'
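
As an illustration only -- this is not the actual Neutron validator, just a
sketch of the kind of check the report asks for -- a strict format check on
the API side would reject a value such as '123' before the port ever reaches
Nova/libvirt:

    import re

    # Accept only the common colon-separated 6-octet notation.
    _MAC_RE = re.compile(r'^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$')

    def validate_mac_address(data):
        if not isinstance(data, str) or not _MAC_RE.match(data):
            return "'%s' is not a valid MAC address" % (data,)

    print(validate_mac_address('fa:16:3e:f3:99:34'))  # None -> valid
    print(validate_mac_address('123'))                # error string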

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291163

Title:
  Create port done not check validity  of mac-address

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The 'create port' API does not check the validity of mac-address; if you
use such an invalid port to create a VM, it will fail.
  root@ubuntu01:~# neutron port-create  --mac-address 123 test2
  Created a new port:
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs | 
  |
  | binding:capabilities  | {"port_filter": false}  
  |
  | binding:host_id   | 
  |
  | binding:vif_type  | unbound

[Yahoo-eng-team] [Bug 1291156] [NEW] Create security rule doesn't check remote_ip_prefix

2014-03-11 Thread shihanzhang
Public bug reported:

The 'security-group-rule-create' API doesn't validate the 'remote_ip_prefix'
parameter; if you pass an invalid value, the call still succeeds:

root@ubuntu01:~# neutron  security-group-rule-create  --direction ingress 
--ethertype IPv4 --protocol tcp --port-range-min 20 --port-range-max 30 
--remote-ip-prefix 192.168.1/24 shz 
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 9087fd12-82b4-491c-a5c6-5c7acf251f4c |
| port_range_max    | 30                                   |
| port_range_min    | 20                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 192.168.1/24                         |
| security_group_id | e4a37547-c2d8-4ef6-8273-bc3253d7600a |
| tenant_id         | 34fddbc22c184214b823be267837ef81     |
+-------------------+--------------------------------------+
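
As an illustration only (not the actual Neutron implementation), a strict
CIDR check using the Python 3 standard library ipaddress module (available
as a backport on Python 2) would reject the shorthand '192.168.1/24' at
rule-creation time:

    import ipaddress

    def validate_subnet(data):
        try:
            ipaddress.ip_network(u'%s' % data, strict=False)
        except ValueError:
            return "'%s' is not a valid IP subnet" % (data,)

    print(validate_subnet('192.168.1.0/24'))  # None -> valid
    print(validate_subnet('192.168.1/24'))    # error string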

** Affects: neutron
 Importance: Undecided
     Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291156

Title:
  Create security rule doesn't check remote_ip_prefix

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The 'security-group-rule-create' API doesn't validate the 'remote_ip_prefix'
  parameter; if you pass an invalid value, the call still succeeds:

  root@ubuntu01:~# neutron  security-group-rule-create  --direction ingress 
--ethertype IPv4 --protocol tcp --port-range-min 20 --port-range-max 30 
--remote-ip-prefix 192.168.1/24 shz 
  Created a new security_group_rule:
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | direction         | ingress                              |
  | ethertype         | IPv4                                 |
  | id                | 9087fd12-82b4-491c-a5c6-5c7acf251f4c |
  | port_range_max    | 30                                   |
  | port_range_min    | 20                                   |
  | protocol          | tcp                                  |
  | remote_group_id   |                                      |
  | remote_ip_prefix  | 192.168.1/24                         |
  | security_group_id | e4a37547-c2d8-4ef6-8273-bc3253d7600a |
  | tenant_id         | 34fddbc22c184214b823be267837ef81     |
  +-------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

