[Yahoo-eng-team] [Bug 1767811] [NEW] [DVR] br-int in compute node will send unknown unicast to sg-xxx

2018-04-29 Thread Hong Hui Xiao
Public bug reported:

Environment:

Version: Ocata/Stable
Nodes: Two nodes. One node with controller services and network 
services (dvr_snat), the other node with compute service and network 
services (dvr)


Steps to reproduce:

1. Create networks and a DVR router, connect them, and enable SNAT.
2. Boot one VM on the compute node.
3. Ping 8.8.8.8 from inside the VM.
4. Run tcpdump on the VM's tap device.

Observation:

$ sudo tcpdump -nei tap8b25d590-09
fa:16:3e:63:0c:57 > fa:16:3e:c8:7a:67, ethertype IPv4 (0x0800), length 98: 
10.0.0.6 > 8.8.8.8: ICMP echo request, id 22273, seq 343, length 64
fa:16:3e:c8:7a:67 > fa:16:3e:ba:67:74, ethertype IPv4 (0x0800), length 98: 
10.0.0.6 > 8.8.8.8: ICMP echo request, id 22273, seq 343, length 64
fa:16:3e:ba:67:74 > fa:16:3e:63:0c:57, ethertype IPv4 (0x0800), length 98: 
8.8.8.8 > 10.0.0.6: ICMP echo reply, id 22273, seq 343, length 64

Relationship between IP address and MAC address:

VM   10.0.0.6 fa:16:3e:63:0c:57
qr-xxx   10.0.0.1 fa:16:3e:c8:7a:67
sg-xxx   10.0.0.8 fa:16:3e:ba:67:74

Error:

The VM should not capture "fa:16:3e:c8:7a:67 > fa:16:3e:ba:67:74",
because that is a unicast frame from qr-xxx to sg-xxx. It appears that
br-int has no FDB record for fa:16:3e:ba:67:74, so br-int floods frames
destined to "fa:16:3e:ba:67:74" to every port in the same local VLAN.
As a result, the VM captures this unknown unicast.
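
One way to confirm the missing entry (a diagnostic sketch; the bridge
name and MAC come from this report, and ovs-appctl must be available
on the compute node):

$ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:ba:67:74

An empty result while the ping is running means br-int has not learned
the MAC and is flooding the frames as unknown unicast.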

Since every device in the same local VLAN on the same br-int can
capture the flooded unknown unicast, this has an impact on both
performance and security.

Expect:

"qr-xxx to sg-xxx" should mainly be unicast.

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1767811

Title:
  [DVR] br-int in compute node will send unknown unicast to sg-xxx

Status in neutron:
  New

Bug description:
  Environment:

  Version: Ocata/Stable
  Nodes: Two nodes. One node with controller services and network 
  services (dvr_snat), the other node with compute service and network 
  services (dvr)

  
  Steps to reproduce:

  1. Create networks and a DVR router, connect them, and enable SNAT.
  2. Boot one VM on the compute node.
  3. Ping 8.8.8.8 from inside the VM.
  4. Run tcpdump on the VM's tap device.

  Observation:

  $ sudo tcpdump -nei tap8b25d590-09
  fa:16:3e:63:0c:57 > fa:16:3e:c8:7a:67, ethertype IPv4 (0x0800), length 
98: 10.0.0.6 > 8.8.8.8: ICMP echo request, id 22273, seq 343, length 64
  fa:16:3e:c8:7a:67 > fa:16:3e:ba:67:74, ethertype IPv4 (0x0800), length 
98: 10.0.0.6 > 8.8.8.8: ICMP echo request, id 22273, seq 343, length 64
  fa:16:3e:ba:67:74 > fa:16:3e:63:0c:57, ethertype IPv4 (0x0800), length 
98: 8.8.8.8 > 10.0.0.6: ICMP echo reply, id 22273, seq 343, length 64

  Relationship between IP address and MAC address:

  VM   10.0.0.6 fa:16:3e:63:0c:57
  qr-xxx   10.0.0.1 fa:16:3e:c8:7a:67
  sg-xxx   10.0.0.8 fa:16:3e:ba:67:74

  Error:

  The VM should not capture "fa:16:3e:c8:7a:67 > fa:16:3e:ba:67:74",
  because that is a unicast frame from qr-xxx to sg-xxx. It appears
  that br-int has no FDB record for fa:16:3e:ba:67:74, so br-int floods
  frames destined to "fa:16:3e:ba:67:74" to every port in the same
  local VLAN. As a result, the VM captures this unknown unicast.

  Since every device in the same local VLAN on the same br-int can
  capture the flooded unknown unicast, this has an impact on both
  performance and security.

  Expect:

  "qr-xxx to sg-xxx" should mainly be unicast.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1767811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666749] [NEW] Update subnet will not bump network revision sometimes

2017-02-21 Thread Hong Hui Xiao
Public bug reported:

dragonflow relies on the neutron revision number to judge whether an
object has updates. So dragonflow has a series of test cases to verify
that neutron revisions work well.

After a recent change[1], an intermittent failure with a high rate can
be observed in the dragonflow Jenkins job [2]. The test updates the
name of a subnet and then verifies that the revision number of the
network increases.
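
For illustration, the same check expressed via the CLI (a sketch with
placeholder names; note the next paragraph: through this path the
revision does bump correctly):

$ neutron net-show private -F revision_number
$ neutron subnet-update private-subnet --name new-name
$ neutron net-show private -F revision_number

with the expectation that the second net-show prints a higher
revision_number than the first.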

This issue can't be reproduced from the user interface. So, I created
2 UTs[3] to verify the issue. It looks like the issue does not
reproduce with a new context object.


[1] https://review.openstack.org/#/c/435748
[2] 
http://logs.openstack.org/08/435208/1/check/gate-dragonflow-python35/d6ca0d0/testr_results.html.gz
[3]

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666749

Title:
  Update subnet will not bump network revision sometimes

Status in neutron:
  New

Bug description:
  dragonflow relies on the neutron revision number to judge whether an
  object has updates. So dragonflow has a series of test cases to
  verify that neutron revisions work well.

  After a recent change[1], an intermittent failure with a high rate
  can be observed in the dragonflow Jenkins job [2]. The test updates
  the name of a subnet and then verifies that the revision number of
  the network increases.

  This issue can't be reproduced from the user interface. So, I
  created 2 UTs[3] to verify the issue. It looks like the issue does
  not reproduce with a new context object.

  
  [1] https://review.openstack.org/#/c/435748
  [2] 
http://logs.openstack.org/08/435208/1/check/gate-dragonflow-python35/d6ca0d0/testr_results.html.gz
  [3]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666749/+subscriptions



[Yahoo-eng-team] [Bug 1649503] [NEW] Mechanism driver can't be notified with updated network

2016-12-13 Thread Hong Hui Xiao
Public bug reported:

When disassociating a qos policy from a network, the ml2 mechanism
drivers will still be notified that the network has the stale qos
policy.

This bug can be observed after cd7d63bde92e47a4b7bd4212b2e6c45f08c03143

The same issue does not happen for ports.

neutron --debug net-update private --no-qos-policy

DEBUG: keystoneauth.session REQ: curl -g -i -X PUT 
http://192.168.31.90:9696/v2.0/networks/60e7627a-1722-439d-90d4-975fd431df7c.json
 -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}db3122bb702d9094793c5235c47f7b1e544315b2" -d '{"network": 
{"qos_policy_id": null}}'
DEBUG: keystoneauth.session RESP: [200] Content-Type: application/json 
Content-Length: 802 X-Openstack-Request-Id: 
req-7b551082-c2d3-452c-b58a-da9884b24d42 Date: Tue, 13 Dec 2016 08:05:35 GMT 
Connection: keep-alive 
RESP BODY: {"network": {"provider:physical_network": null, 
"ipv6_address_scope": null, "revision_number": 11, "port_security_enabled": 
true, "mtu": 1450, "id": "60e7627a-1722-439d-90d4-975fd431df7c", 
"router:external": false, "availability_zone_hints": [], "availability_zones": 
[], "provider:segmentation_id": 77, "ipv4_address_scope": null, "shared": 
false, "project_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "status": "ACTIVE", 
"subnets": ["230bcb4f-8c2b-4db2-a9aa-325351cd6064", 
"09aa4e9c-fe6b-42d5-b5ca-76443a6c380a"], "description": "", "tags": [], 
"updated_at": "2016-12-13T08:05:34Z", "qos_policy_id": 
"6cd40fa9-092f-43bb-8214-ed79e5174c4f", "name": "private", "admin_state_up": 
true, "tenant_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "created_at": 
"2016-12-13T01:36:43Z", "provider:network_type": "vxlan"}}

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649503

Title:
  Mechanism driver can't be notified with updated network

Status in neutron:
  New

Bug description:
  When disassociating a qos policy from a network, the ml2 mechanism
  drivers will still be notified that the network has the stale qos
  policy.

  This bug can be observed after
  cd7d63bde92e47a4b7bd4212b2e6c45f08c03143

  The same issue does not happen for ports.

  neutron --debug net-update private --no-qos-policy

  DEBUG: keystoneauth.session REQ: curl -g -i -X PUT 
http://192.168.31.90:9696/v2.0/networks/60e7627a-1722-439d-90d4-975fd431df7c.json
 -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}db3122bb702d9094793c5235c47f7b1e544315b2" -d '{"network": 
{"qos_policy_id": null}}'
  DEBUG: keystoneauth.session RESP: [200] Content-Type: application/json 
Content-Length: 802 X-Openstack-Request-Id: 
req-7b551082-c2d3-452c-b58a-da9884b24d42 Date: Tue, 13 Dec 2016 08:05:35 GMT 
Connection: keep-alive 
  RESP BODY: {"network": {"provider:physical_network": null, 
"ipv6_address_scope": null, "revision_number": 11, "port_security_enabled": 
true, "mtu": 1450, "id": "60e7627a-1722-439d-90d4-975fd431df7c", 
"router:external": false, "availability_zone_hints": [], "availability_zones": 
[], "provider:segmentation_id": 77, "ipv4_address_scope": null, "shared": 
false, "project_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "status": "ACTIVE", 
"subnets": ["230bcb4f-8c2b-4db2-a9aa-325351cd6064", 
"09aa4e9c-fe6b-42d5-b5ca-76443a6c380a"], "description": "", "tags": [], 
"updated_at": "2016-12-13T08:05:34Z", "qos_policy_id": 
"6cd40fa9-092f-43bb-8214-ed79e5174c4f", "name": "private", "admin_state_up": 
true, "tenant_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "created_at": 
"2016-12-13T01:36:43Z", "provider:network_type": "vxlan"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649503/+subscriptions



[Yahoo-eng-team] [Bug 1649488] [NEW] Duplicated revises_on_change in qos models

2016-12-12 Thread Hong Hui Xiao
Public bug reported:

Both 0e51574b2fb299eb42d6f5333e68f70244b08d50 and
3b610a1debdfb99def758406b1604aa3273edeea add revises_on_change to the
qos db models, which causes duplication in

https://github.com/openstack/neutron/blob/09bc8a724e42fed0f527b56d38c5720167031764/neutron/db/qos/models.py#L49-L75

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649488

Title:
  Duplicated revises_on_change in qos models

Status in neutron:
  New

Bug description:
  Both 0e51574b2fb299eb42d6f5333e68f70244b08d50 and
  3b610a1debdfb99def758406b1604aa3273edeea add revises_on_change to
  the qos db models, which causes duplication in

  
https://github.com/openstack/neutron/blob/09bc8a724e42fed0f527b56d38c5720167031764/neutron/db/qos/models.py#L49-L75

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649488/+subscriptions



[Yahoo-eng-team] [Bug 1642476] [NEW] Associating qos can't bump port/network revision number

2016-11-16 Thread Hong Hui Xiao
Public bug reported:

When associating/disassociating a qos policy with a port/network, the
port's/network's revision number will not change.
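
A quick way to observe this (a sketch; the network and policy names
are placeholders, and --qos-policy is the flag provided by
python-neutronclient at the time):

$ neutron net-show net1 -F revision_number
$ neutron net-update net1 --qos-policy policy1
$ neutron net-show net1 -F revision_number

The second net-show prints the same revision_number as the first,
although the association changed.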

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642476

Title:
  Associating qos can't bump port/network revision number

Status in neutron:
  New

Bug description:
  When associating/disassociating a qos policy with a port/network,
  the port's/network's revision number will not change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1642476/+subscriptions



[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2016-10-23 Thread Hong Hui Xiao
** Also affects: dragonflow
   Importance: Undecided
   Status: New

** Changed in: dragonflow
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in craton:
  Fix Released
Status in DragonFlow:
  In Progress
Status in ec2-api:
  In Progress
Status in gce-api:
  Fix Released
Status in Karbor:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Kosmos:
  Fix Released
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Confirmed
Status in octavia:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress
Status in Solum:
  Fix Released
Status in Swift Authentication:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/craton/+bug/1608980/+subscriptions



[Yahoo-eng-team] [Bug 1625604] [NEW] The timeout is not honored in NeutronOVSDBTransaction

2016-09-20 Thread Hong Hui Xiao
Public bug reported:

According to [1], the usage at [2] is not appropriate. If timeout > 0,
the commit will wait forever instead of timing out.


[1] https://docs.python.org/2/library/queue.html#module-Queue
[2] 
https://github.com/openstack/neutron/blob/be29217d82cc633bda1a66c2e50612de1e3f7e15/neutron/agent/ovsdb/impl_idl.py#L67
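
A minimal sketch of the Queue.get() semantics involved (Python 2
standard library, per the cited docs):

import Queue

q = Queue.Queue()
try:
    # Passing the timeout by keyword honors it: Queue.Empty is
    # raised after about 1 second.
    q.get(timeout=1)
except Queue.Empty:
    print('timed out as expected')
# By contrast, q.get(1) binds 1 to the 'block' parameter
# (block=True, timeout=None), so it would wait forever on an
# empty queue.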

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625604

Title:
  The timeout is not honored in NeutronOVSDBTransaction

Status in neutron:
  New

Bug description:
  According to [1], the usage at [2] is not appropriate. If
  timeout > 0, the commit will wait forever instead of timing out.

  
  [1] https://docs.python.org/2/library/queue.html#module-Queue
  [2] 
https://github.com/openstack/neutron/blob/be29217d82cc633bda1a66c2e50612de1e3f7e15/neutron/agent/ovsdb/impl_idl.py#L67

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625604/+subscriptions



[Yahoo-eng-team] [Bug 1621720] [NEW] Unify the code path of creating segment

2016-09-08 Thread Hong Hui Xiao
Public bug reported:

Before routed networks, the functions in [1] were used to manage
network segments. Routed networks introduced a plugin [2] to manage
network segments. So, there are now 2 code paths doing the same job.

This causes an issue when creating a network in ml2. When the network
is created, the related segmentation_id is reserved in ml2 and the
related segments are created by using [1]. In [1], a PRECOMMIT_CREATE
event for SEGMENT is sent out.

In patch [3], a subscriber was added in ml2 to subscribe to the
PRECOMMIT_CREATE event of SEGMENT. The subscriber reserves the
segmentation_id.

But the segmentation_id has already been reserved. A workaround was
added at [4] to avoid the issue.

Ideally, ml2 should use [2] to create segments and let [2] do the
other things (like reserving the segmentation_id). This would
eliminate the workaround.


[1] neutron.db.segments_db
[2] neutron.services.segments.plugin
[3] f564dcad4d8c072767ae235353a982653b156c76
[4] 
https://github.com/openstack/neutron/blob/8dffc238759cf543681443a2a4540dd0d569da6a/neutron/plugins/ml2/plugin.py#L1814-L1821

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621720

Title:
  Unify the code path of creating segment

Status in neutron:
  New

Bug description:
  Before routed networks, the functions in [1] were used to manage
  network segments. Routed networks introduced a plugin [2] to manage
  network segments. So, there are now 2 code paths doing the same job.

  This causes an issue when creating a network in ml2. When the
  network is created, the related segmentation_id is reserved in ml2
  and the related segments are created by using [1]. In [1], a
  PRECOMMIT_CREATE event for SEGMENT is sent out.

  In patch [3], a subscriber was added in ml2 to subscribe to the
  PRECOMMIT_CREATE event of SEGMENT. The subscriber reserves the
  segmentation_id.

  But the segmentation_id has already been reserved. A workaround was
  added at [4] to avoid the issue.

  Ideally, ml2 should use [2] to create segments and let [2] do the
  other things (like reserving the segmentation_id). This would
  eliminate the workaround.

  
  [1] neutron.db.segments_db
  [2] neutron.services.segments.plugin
  [3] f564dcad4d8c072767ae235353a982653b156c76
  [4] 
https://github.com/openstack/neutron/blob/8dffc238759cf543681443a2a4540dd0d569da6a/neutron/plugins/ml2/plugin.py#L1814-L1821

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621720/+subscriptions



[Yahoo-eng-team] [Bug 1621717] [NEW] Delete agent will not delete related SegmentHostMapping

2016-09-08 Thread Hong Hui Xiao
Public bug reported:

SegmentHostMapping records the relationship between segments and
hosts. If the admin deletes an agent, the related SegmentHostMapping
entries should be cleared too. Otherwise, other logic that leverages
SegmentHostMapping will still think the host is available. This will
cause errors if the admin just wants to remove a node from the
openstack topology.

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: routed-network

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: routed-network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621717

Title:
  Delete agent will not delete related SegmentHostMapping

Status in neutron:
  New

Bug description:
  SegmentHostMapping records the relationship between segments and
  hosts. If the admin deletes an agent, the related SegmentHostMapping
  entries should be cleared too. Otherwise, other logic that leverages
  SegmentHostMapping will still think the host is available. This will
  cause errors if the admin just wants to remove a node from the
  openstack topology.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621717/+subscriptions



[Yahoo-eng-team] [Bug 1621698] [NEW] AFTER_DELETE event for SECURITY_GROUP_RULE should contain sg_id

2016-09-08 Thread Hong Hui Xiao
Public bug reported:

By the time the AFTER_DELETE notification for SECURITY_GROUP_RULE is
sent, the security group rule has already been deleted from the DB.
There is no way for a subscriber to know the latest information of the
related security group.

To be specific, dragonflow maintains a security group version, and we
are using revision_number in dragonflow now. The sg_rule_delete bumps
the sg revision, which only happens after the db transaction.
dragonflow stores security group rules as part of the security group.
So, in the AFTER_DELETE event of SECURITY_GROUP_RULE, we don't know
which security group was updated if neutron doesn't pass the security
group id.

We could query all security groups and iterate over their security
group rules, but that is inefficient; it would be nice if neutron just
passed the related security group id.
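
For context, a minimal subscriber looks like this (module paths from
the neutron tree at the time; the print stands in for real handling):

from neutron.callbacks import events
from neutron.callbacks import registry
from neutron.callbacks import resources


def sg_rule_deleted(resource, event, trigger, **kwargs):
    # kwargs identifies the deleted rule, but (per this report) not
    # its parent security group, so the subscriber cannot tell which
    # group to refresh.
    print(kwargs)


registry.subscribe(sg_rule_deleted, resources.SECURITY_GROUP_RULE,
                   events.AFTER_DELETE)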

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621698

Title:
  AFTER_DELETE event for SECURITY_GROUP_RULE should contain sg_id

Status in neutron:
  New

Bug description:
  By the time the AFTER_DELETE notification for SECURITY_GROUP_RULE is
  sent, the security group rule has already been deleted from the DB.
  There is no way for a subscriber to know the latest information of
  the related security group.

  To be specific, dragonflow maintains a security group version, and
  we are using revision_number in dragonflow now. The sg_rule_delete
  bumps the sg revision, which only happens after the db transaction.
  dragonflow stores security group rules as part of the security
  group. So, in the AFTER_DELETE event of SECURITY_GROUP_RULE, we
  don't know which security group was updated if neutron doesn't pass
  the security group id.

  We could query all security groups and iterate over their security
  group rules, but that is inefficient; it would be nice if neutron
  just passed the related security group id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621698/+subscriptions



[Yahoo-eng-team] [Bug 1611308] [NEW] L2pop add_fdb_entries concurrency issue

2016-08-09 Thread Hong Hui Xiao
Public bug reported:

This is observed during live migration in a large scale env. ovs-
agent+l2pop is used in the env.

The observed issue is:
If multiple VMs live-migrate at the same time, some hosts will have stale 
unicast information in table 20, which still points a VM to its old host.

After checking the code, there is a potential issue in [1] when it is
called concurrently.

Assume there are 3 hosts: A, B and C. The VMs are being migrated from
A to B and C. The VMs are in the same neutron network, and host B
doesn't have any port on that neutron network before the migration.

The scenario might be:
1) VM1 migrates from host A to host B.
2) When the port of VM1 is up on host B, the neutron server is informed, and 
all the fdb_entries of that neutron network are generated and sent to host 
B. The code at [2] is hit. Let's assume the neutron network has lots of 
ports in it, so the call at [2] is expected to take a long time.
3) In the middle of 2), another VM, called VM2, migrates from host A to host C.
4) Let's assume host C already has ports on the neutron network of VM2. So, the 
code will not hit [2] and just goes to [3]. [3] is a lightweight fanout RPC 
request; the ovs-agent on host B might get this request while still processing 2).
5) 4) finishes, but 2) is still ongoing.

At this point, host B has the new unicast information for VM2.
However, the data from 2) contains stale information, which still
says VM2 is at host A.

6) When 2) finishes, the stale information for VM2 might overwrite the
new information for VM2, which leads to the reported issue.


[1] 
https://github.com/openstack/neutron/blob/fd401fe0a052a7103cb19d7385a1c702de05577f/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L38
[2] 
https://github.com/openstack/neutron/blob/fd401fe0a052a7103cb19d7385a1c702de05577f/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L240
[3] 
https://github.com/openstack/neutron/blob/fd401fe0a052a7103cb19d7385a1c702de05577f/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L247

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611308

Title:
  L2pop add_fdb_entries concurrency issue

Status in neutron:
  New

Bug description:
  This is observed during live migration in a large scale env. ovs-
  agent+l2pop is used in the env.

  The observed issue is:
If multiple VMs live-migrate at the same time, some hosts will have stale 
unicast information in table 20, which still points a VM to its old host.

  After checking the code, there is a potential issue in [1] when it
  is called concurrently.

  Assume there are 3 hosts: A, B and C. The VMs are being migrated
  from A to B and C. The VMs are in the same neutron network, and host
  B doesn't have any port on that neutron network before the migration.

  The scenario might be:
  1) VM1 migrates from host A to host B.
  2) When the port of VM1 is up on host B, the neutron server is informed, and 
all the fdb_entries of that neutron network are generated and sent to host 
B. The code at [2] is hit. Let's assume the neutron network has lots of 
ports in it, so the call at [2] is expected to take a long time.
  3) In the middle of 2), another VM, called VM2, migrates from host A to host C.
  4) Let's assume host C already has ports on the neutron network of VM2. So, 
the code will not hit [2] and just goes to [3]. [3] is a lightweight fanout RPC 
request; the ovs-agent on host B might get this request while still processing 2).
  5) 4) finishes, but 2) is still ongoing.

  At this point, host B has the new unicast information for VM2.
  However, the data from 2) contains stale information, which still
  says VM2 is at host A.

  6) When 2) finishes, the stale information for VM2 might overwrite
  the new information for VM2, which leads to the reported issue.

  
  [1] 
https://github.com/openstack/neutron/blob/fd401fe0a052a7103cb19d7385a1c702de05577f/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L38
  [2] 
https://github.com/openstack/neutron/blob/fd401fe0a052a7103cb19d7385a1c702de05577f/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L240
  [3] 
https://github.com/openstack/neutron/blob/fd401fe0a052a7103cb19d7385a1c702de05577f/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L247

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611308/+subscriptions



[Yahoo-eng-team] [Bug 1594711] [NEW] Remove the deprecated config "router_id"

2016-06-21 Thread Hong Hui Xiao
Public bug reported:

This option was deprecated at https://review.openstack.org/#/c/248498/
in the M release and can be removed in the N release now.

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Description changed:

- This option is removed at https://review.openstack.org/#/c/248498/ at M
- release and can be removed in N release now.
+ This option is deprecated at https://review.openstack.org/#/c/248498/ at
+ M release and can be removed in N release now.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594711

Title:
  Remove the deprecated config "router_id"

Status in neutron:
  New

Bug description:
  This option was deprecated at https://review.openstack.org/#/c/248498/
  in the M release and can be removed in the N release now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594711/+subscriptions



[Yahoo-eng-team] [Bug 1593788] [NEW] Without using AZ aware Scheduler, dhcp can recognize AZ, while l3 can't

2016-06-17 Thread Hong Hui Xiao
Public bug reported:

I have an env with 3 network nodes.

The dhcp scheduler is configured as:
network_scheduler_driver = 
neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
The l3 scheduler is configured as:
router_scheduler_driver = 
neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler

I create 51 legacy routers using the following commands:

neutron router-create router${i} --availability-zone-hint nova2
neutron router-gateway-set router${i} public

After router creation, check the routers on the L3 agents; the result is 
recorded at:
http://paste.openstack.org/show/516896/
The routers are spawned evenly across the 3 L3 agents.


I create 51 networks using the following commands:

neutron net-create net${i} --availability-zone-hint nova2
neutron subnet-create net${i} ${i}.0.0.0/24

After network creation, check the networks on the DHCP agents; the result is 
recorded at:
http://paste.openstack.org/show/516897/
The networks are only spawned in nova2.


Expected result:
DHCP and L3 should act the same. I would prefer to let l3 be AZ-aware even if 
the AZ scheduler is not used. The AZ-aware scheduler shares load among AZs. 
For a normal scheduler, the AZ hint should work as a constraint on scheduling.

This might be fixed during the work at bug 1509046
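
For reference, the AZ-aware schedulers that do honor the hints (a
config sketch; class paths from the neutron tree at the time):

network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler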

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593788

Title:
  Without using AZ aware Scheduler, dhcp can recognize AZ, while l3
  can't

Status in neutron:
  New

Bug description:
  I have an env with 3 network nodes.

  The dhcp scheduler is configured as:
  network_scheduler_driver = 
neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
  The l3 scheduler is configured as:
  router_scheduler_driver = 
neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler

  I create 51 legacy routers using the following commands:

  neutron router-create router${i} --availability-zone-hint nova2
  neutron router-gateway-set router${i} public

  After router creation, check the routers on the L3 agents; the result is 
recorded at:
  http://paste.openstack.org/show/516896/
  The routers are spawned evenly across the 3 L3 agents.

  
  I create 51 networks using the following commands:

  neutron net-create net${i} --availability-zone-hint nova2
  neutron subnet-create net${i} ${i}.0.0.0/24

  After network creation, check the networks on the DHCP agents; the result is 
recorded at:
  http://paste.openstack.org/show/516897/
  The networks are only spawned in nova2.

  
  Expected result:
DHCP and L3 should act the same. I would prefer to let l3 be AZ-aware even if 
the AZ scheduler is not used. The AZ-aware scheduler shares load among AZs. 
For a normal scheduler, the AZ hint should work as a constraint on scheduling.

  This might be fixed during the work at bug 1509046

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593788/+subscriptions



[Yahoo-eng-team] [Bug 1593770] [NEW] Remove the deprecated quota driver "ConfDriver"

2016-06-17 Thread Hong Hui Xiao
Public bug reported:

The ConfDriver has been deprecated since Liberty [1][2]; it should be
removed in Newton now.

[1] https://bugs.launchpad.net/neutron/+bug/1430523
[2] https://review.openstack.org/#/c/179543/

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593770

Title:
  Remove the deprecated quota driver "ConfDriver"

Status in neutron:
  New

Bug description:
  The ConfDriver has been deprecated since Liberty [1][2]; it should
  be removed in Newton now.

  [1] https://bugs.launchpad.net/neutron/+bug/1430523
  [2] https://review.openstack.org/#/c/179543/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593770/+subscriptions



[Yahoo-eng-team] [Bug 1593772] [NEW] Remove the deprecated config "quota_items"

2016-06-17 Thread Hong Hui Xiao
Public bug reported:

The quota_items configuration was deprecated in Liberty [1][2]; it
should be removed in Newton.

[1] https://bugs.launchpad.net/neutron/+bug/1453322
[2] https://review.openstack.org/#/c/181593/

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593772

Title:
  Remove the deprecated config "quota_items"

Status in neutron:
  New

Bug description:
  The quota_items configuration was deprecated in Liberty [1][2]; it
  should be removed in Newton.

  [1] https://bugs.launchpad.net/neutron/+bug/1453322
  [2] https://review.openstack.org/#/c/181593/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593772/+subscriptions



[Yahoo-eng-team] [Bug 1592463] [NEW] Avoid removing SegmentHostMapping in other host when update agent

2016-06-14 Thread Hong Hui Xiao
Public bug reported:

Found this when working on OVN, but it should also apply to
topologies with l2 agents.

Steps to reproduce:
1) Have segment1 with physical network physical_net1
Have segment2 with physical network physical_net2

2) Have 2 agents (host1, host2), both configured with physical_net1. When
the agents are created/updated in neutron, there will be a SegmentHostMapping
for segment1->host1 and a SegmentHostMapping for segment1->host2.

3) Update the agent at host2 to be configured only with physical_net2. There 
will be only one SegmentHostMapping for host2: segment2->host2.
But the SegmentHostMapping for segment1->host1 will also be deleted. This is 
not expected.

** Affects: neutron
 Importance: Undecided
     Assignee: Hong Hui Xiao (xiaohhui)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592463

Title:
  Avoid removing SegmentHostMapping in other host when update agent

Status in neutron:
  In Progress

Bug description:
  Found this when working on OVN, but it should also apply to
  topologies with l2 agents.

  Steps to reproduce:
  1) Have segment1 with physical network physical_net1
  Have segment2 with physical network physical_net2

  2) Have 2 agents (host1, host2), both configured with physical_net1.
  When the agents are created/updated in neutron, there will be a
  SegmentHostMapping for segment1->host1 and a SegmentHostMapping for
  segment1->host2.

  3) Update the agent at host2 to be configured only with physical_net2. There 
will be only one SegmentHostMapping for host2: segment2->host2.
  But the SegmentHostMapping for segment1->host1 will also be deleted. This is 
not expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592463/+subscriptions



[Yahoo-eng-team] [Bug 1589427] Re: Import undefined attribute PROTO_NAME_IPV6_ICMP_LEGACY

2016-06-06 Thread Hong Hui Xiao
It is defined in neutron_lib; try updating your neutron_lib with:
sudo pip install --upgrade neutron-lib


https://github.com/openstack/neutron-lib/blob/0db5c13fc3d0793096446b54fb0bd9b10b1a8bb2/neutron_lib/constants.py#L132

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589427

Title:
  Import undefined attribute PROTO_NAME_IPV6_ICMP_LEGACY

Status in neutron:
  Invalid

Bug description:
  In neutron/common/constants.py, the attribute
  'PROTO_NAME_IPV6_ICMP_LEGACY' is imported from neutron_lib.constants,
  like this:

  
https://github.com/openstack/neutron/blob/master/neutron/common/constants.py#L61

  ```
  IP_PROTOCOL_NAME_ALIASES = {lib_constants.PROTO_NAME_IPV6_ICMP_LEGACY:
  lib_constants.PROTO_NAME_IPV6_ICMP}

  ```

  but the attribute 'PROTO_NAME_IPV6_ICMP_LEGACY' is not defined in
  neutron_lib.constants.

  An AttributeError occured:

  ```
  Traceback (most recent call last):
    File "/usr/bin/neutron-ovs-cleanup", line 6, in <module>
      from neutron.cmd.ovs_cleanup import main
    File "/opt/stack/neutron/neutron/cmd/ovs_cleanup.py", line 20, in <module>
      from neutron.agent.common import config as agent_config
    File "/opt/stack/neutron/neutron/agent/common/config.py", line 21, in <module>
      from neutron.common import config
    File "/opt/stack/neutron/neutron/common/config.py", line 33, in <module>
      from neutron.common import constants
    File "/opt/stack/neutron/neutron/common/constants.py", line 62, in <module>
      lib_constants.PROTO_NAME_IPV6_ICMP}
  AttributeError: 'module' object has no attribute 'PROTO_NAME_IPV6_ICMP_LEGACY'
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1589427/+subscriptions



[Yahoo-eng-team] [Bug 1585524] Re: neutron server Error: TooManyExternalNetworks

2016-05-25 Thread Hong Hui Xiao
According to [1], this should be invalid. I have verified that, with
the configuration described in [1], no error is reported.

[1]
https://github.com/openstack/neutron/blob/f60291820599804e8bfdaafa0cd0565549daa193/neutron/agent/l3/config.py#L64-L66
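
For a deployment that intentionally runs more than one external
network, each L3 agent can be pinned to one of them (a sketch of
l3_agent.ini; the UUID is a placeholder):

[DEFAULT]
gateway_external_network_id = <uuid-of-the-external-network>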

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585524

Title:
  neutron server Error:  TooManyExternalNetworks

Status in neutron:
  Invalid

Bug description:
  Main steps:
  1 create 2 external networks each with a different subnet with neutron CLI 
commands, there is no error info from CLI.
  e.g. neutron net-create --router:external=True --provider:physical_network 
provider100 --provider:network_type flat provider100
  2 create 2 routers connected each of the external net, there is no error info 
from CLI.
  3 create 1 floating ip from one of the external network, no error info from 
CLI.
  4 create 1 private network, and try creating a vm connected to the private 
network.
there is no response from the command: nova boot xxx.
  We can see errors on the screen; it seems the neutron CLI needs more checking 
when creating more external networks.
  q-svc:
  2016-05-25 00:55:39.756 ERROR oslo_messaging.rpc.server 
[req-8ff829a5-2241-4ad0-896e-136b1de3efe7 None None] Exception during handling 
message
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 153, 
in dispatch
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 214, in 
get_external_network_id
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server net_id = 
self.plugin.get_external_network_id(context)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/db/external_net_db.py", line 199, in 
get_external_network_id
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server raise 
n_exc.TooManyExternalNetworks()
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server 
TooManyExternalNetworks: More than one external network exists.
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server
  neutron l3-agent:
  2016-05-24 22:28:22.418 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '69b7ca3c-3aa5-44eb-bec8-8e53accbde64'
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 485, in 
_process_router_update
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 417, in 
_process_router_if_compatible
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent if ex_net_id != 
self._fetch_external_net_id(force=True):
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 297, in 
_fetch_external_net_id
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent raise Exception(msg)
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent Exception: The 
'gateway_external_network_id' option must be configured for this agent as 
Neutron has more than one external network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585524/+subscriptions



[Yahoo-eng-team] [Bug 1585078] [NEW] The help message for min_l3_agents_per_router is not accurate

2016-05-24 Thread Hong Hui Xiao
Public bug reported:

The help message at [1] says that min_l3_agents_per_router could be 0,
but when it is set to 0, the neutron server reports:

2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/services/l3_router/l3_router_plugin.py", line 66, 
in __init__
2016-05-24 06:50:30.004 TRACE neutron super(L3RouterPlugin, self).__init__()
2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 186, in __init__
2016-05-24 06:50:30.004 TRACE neutron self._verify_configuration()
2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 171, in 
_verify_configuration
2016-05-24 06:50:30.004 TRACE neutron self._check_num_agents_per_router()
2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 183, in 
_check_num_agents_per_router
2016-05-24 06:50:30.004 TRACE neutron raise 
l3_ha.HAMinimumAgentsNumberNotValid()
2016-05-24 06:50:30.004 TRACE neutron HAMinimumAgentsNumberNotValid: 
min_l3_agents_per_router config parameter is not valid. It has to be equal to 
or more than 2 for HA.

There is even a UT at [2] that sets min_l3_agents_per_router to 0. We
should remove that claim from the help message.

[1]
https://github.com/openstack/neutron/blob/557a2d9ece94eedc56bebf554be3f9ffda46186a/neutron/db/l3_hamode_db.py#L67-L68

[2]
https://github.com/openstack/neutron/blob/557a2d9ece94eedc56bebf554be3f9ffda46186a/neutron/tests/unit/db/test_l3_hamode_db.py#L119

** Affects: neutron
     Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585078

Title:
  The help message for min_l3_agents_per_router is not accurate

Status in neutron:
  In Progress

Bug description:
  The help message at [1] says that min_l3_agents_per_router could be
  0, but when it is set to 0, the neutron server reports:

  2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/services/l3_router/l3_router_plugin.py", line 66, 
in __init__
  2016-05-24 06:50:30.004 TRACE neutron super(L3RouterPlugin, 
self).__init__()
  2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 186, in __init__
  2016-05-24 06:50:30.004 TRACE neutron self._verify_configuration()
  2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 171, in 
_verify_configuration
  2016-05-24 06:50:30.004 TRACE neutron self._check_num_agents_per_router()
  2016-05-24 06:50:30.004 TRACE neutron   File 
"/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 183, in 
_check_num_agents_per_router
  2016-05-24 06:50:30.004 TRACE neutron raise 
l3_ha.HAMinimumAgentsNumberNotValid()
  2016-05-24 06:50:30.004 TRACE neutron HAMinimumAgentsNumberNotValid: 
min_l3_agents_per_router config parameter is not valid. It has to be equal to 
or more than 2 for HA.

  There is even a UT at [2] that sets min_l3_agents_per_router to 0.
  We should remove that claim from the help message.

  [1]
  
https://github.com/openstack/neutron/blob/557a2d9ece94eedc56bebf554be3f9ffda46186a/neutron/db/l3_hamode_db.py#L67-L68

  [2]
  
https://github.com/openstack/neutron/blob/557a2d9ece94eedc56bebf554be3f9ffda46186a/neutron/tests/unit/db/test_l3_hamode_db.py#L119

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585078/+subscriptions



[Yahoo-eng-team] [Bug 1585047] [NEW] "DeprecationWarning: PROTO_NAME_IPV6_ICMP_LEGACY" can be observed when running ut for security group

2016-05-23 Thread Hong Hui Xiao
Public bug reported:

The following warning can be observed. Since neutron_lib 0.2.0
contains "PROTO_NAME_IPV6_ICMP_LEGACY", we should stop using the
deprecated copy in neutron.common.constants.

{0}
neutron.tests.unit.agent.test_securitygroups_rpc.SGServerRpcCallBackTestCase.test_security_group_rules_for_devices_ipv6_source_group
[2.236118s] ... ok

Captured stderr:

neutron/db/securitygroups_db.py:474: DeprecationWarning: 
PROTO_NAME_IPV6_ICMP_LEGACY in version 'mitaka' and will be removed in version 
'newton': moved to neutron_lib.constants
  n_const.PROTO_NAME_IPV6_ICMP_LEGACY,
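
The fix is to import the constant from neutron_lib directly (a sketch
mirroring the aliases dict in neutron.common.constants; the constants
exist in neutron_lib >= 0.2.0):

from neutron_lib import constants as lib_constants

IP_PROTOCOL_NAME_ALIASES = {lib_constants.PROTO_NAME_IPV6_ICMP_LEGACY:
                            lib_constants.PROTO_NAME_IPV6_ICMP}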

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585047

Title:
  "DeprecationWarning: PROTO_NAME_IPV6_ICMP_LEGACY" can be observed when
  running ut for security group

Status in neutron:
  In Progress

Bug description:
  The following warning can be observed. Since neutron_lib 0.2.0
  contains "PROTO_NAME_IPV6_ICMP_LEGACY", we should stop using the
  deprecated copy in neutron.common.constants.

  {0}
  
neutron.tests.unit.agent.test_securitygroups_rpc.SGServerRpcCallBackTestCase.test_security_group_rules_for_devices_ipv6_source_group
  [2.236118s] ... ok

  Captured stderr:
  
  neutron/db/securitygroups_db.py:474: DeprecationWarning: 
PROTO_NAME_IPV6_ICMP_LEGACY in version 'mitaka' and will be removed in version 
'newton': moved to neutron_lib.constants
n_const.PROTO_NAME_IPV6_ICMP_LEGACY,

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585047/+subscriptions



[Yahoo-eng-team] [Bug 1584647] [NEW] "Interface monitor is not active" can be observed at ovs-agent start

2016-05-23 Thread Hong Hui Xiao
Public bug reported:

I noticed this error message in the neutron-ovs-agent log when
starting neutron-openvswitch-agent:

ERROR neutron.agent.linux.ovsdb_monitor [req-a7c7a398-a13b-490e-
adf8-c5afb24b4b9c None None] Interface monitor is not active.

ovs-agent starts the ovsdb_monitor at [1] and first uses it at [2].
There is no guarantee that the ovsdb_monitor is ready at [2], so I can
see the error when starting neutron-openvswitch-agent.

We should block startup to wait for the process to be active, and then
use it. Otherwise, using ovsdb_monitor is meaningless.


[1]
https://github.com/openstack/neutron/blob/6da27a78f42db00c91a747861eafde7edc6f1fa7/neutron/agent/linux/polling.py#L35

[2]
https://github.com/openstack/neutron/blob/6da27a78f42db00c91a747861eafde7edc6f1fa7/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1994
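
A sketch of such a guard (wait_until_true is an existing helper in
neutron.common.utils; is_active() is provided by the monitor's
AsyncProcess base class, and the timeout value is illustrative):

from neutron.common import utils


def wait_for_monitor(monitor, timeout=10):
    # Block until the ovsdb monitor process reports itself active.
    utils.wait_until_true(monitor.is_active, timeout=timeout)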

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584647

Title:
  "Interface monitor is not active" can be observed at ovs-agent start

Status in neutron:
  New

Bug description:
  I noticed this error message in the neutron-ovs-agent log when
  starting neutron-openvswitch-agent:

  ERROR neutron.agent.linux.ovsdb_monitor [req-a7c7a398-a13b-490e-
  adf8-c5afb24b4b9c None None] Interface monitor is not active.

  ovs-agent starts the ovsdb_monitor at [1] and first uses it at [2].
  There is no guarantee that the ovsdb_monitor is ready at [2], so I
  can see the error when starting neutron-openvswitch-agent.

  We should block startup to wait for the process to be active, and
  then use it. Otherwise, using ovsdb_monitor is meaningless.


  [1]
  
https://github.com/openstack/neutron/blob/6da27a78f42db00c91a747861eafde7edc6f1fa7/neutron/agent/linux/polling.py#L35

  [2]
  
https://github.com/openstack/neutron/blob/6da27a78f42db00c91a747861eafde7edc6f1fa7/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1994

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1584647/+subscriptions



[Yahoo-eng-team] [Bug 1583601] [NEW] Duplicated sg rules could be created with diff description

2016-05-19 Thread Hong Hui Xiao
Public bug reported:

I can create multiple security group rules with the same content but 
different descriptions. For example,
For example,

[fedora@normal2 ~]$ neutron security-group-rule-create test --protocol tcp 
--remote-group-id 1b8c08e5-728d-48ef-a24b-e4ebc20808a3
Created a new security_group_rule:
+---+--+
| Field | Value|
+---+--+
| description   |  |
| direction | ingress  |
| ethertype | IPv4 |
| id| 09eaa983-7884-4c27-bffb-81064d164688 |
| port_range_max|  |
| port_range_min|  |
| protocol  | tcp  |
| remote_group_id   | 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 |
| remote_ip_prefix  |  |
| security_group_id | db8d1386-0b2e-4f0c-b4c2-16c10b30fd92 |
| tenant_id | 02178a7c126a4066ab5c3fae571d89c8 |
+---+--+
[fedora@normal2 ~]$ neutron security-group-rule-create test --protocol tcp 
--remote-group-id 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 --description "123"
Created a new security_group_rule:
+---+--+
| Field | Value|
+---+--+
| description   | 123  |
| direction | ingress  |
| ethertype | IPv4 |
| id| 5282599c-4262-4c48-b999-052a0ce5cff7 |
| port_range_max|  |
| port_range_min|  |
| protocol  | tcp  |
| remote_group_id   | 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 |
| remote_ip_prefix  |  |
| security_group_id | db8d1386-0b2e-4f0c-b4c2-16c10b30fd92 |
| tenant_id | 02178a7c126a4066ab5c3fae571d89c8 |
+---+--+

This should be prevented.

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: sg-fw

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583601

Title:
  Duplicated sg rules could be created with diff description

Status in neutron:
  New

Bug description:
  I can create multiple security group rules with the same content but 
different descriptions.
  For example,

  [fedora@normal2 ~]$ neutron security-group-rule-create test --protocol tcp 
--remote-group-id 1b8c08e5-728d-48ef-a24b-e4ebc20808a3
  Created a new security_group_rule:
  +---+--+
  | Field | Value|
  +---+--+
  | description   |  |
  | direction | ingress  |
  | ethertype | IPv4 |
  | id| 09eaa983-7884-4c27-bffb-81064d164688 |
  | port_range_max|  |
  | port_range_min|  |
  | protocol  | tcp  |
  | remote_group_id   | 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 |
  | remote_ip_prefix  |  |
  | security_group_id | db8d1386-0b2e-4f0c-b4c2-16c10b30fd92 |
  | tenant_id | 02178a7c126a4066ab5c3fae571d89c8 |
  +---+--+
  [fedora@normal2 ~]$ neutron security-group-rule-create test --protocol tcp 
--remote-group-id 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 --description "123"
  Created a new security_group_rule:
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | description       | 123                                  |
  | direction         | ingress                              |
  | ethertype         | IPv4                                 |
  | id                | 5282599c-4262-4c48-b999-052a0ce5cff7 |
  | port_range_max    |                                      |
  | port_range_min    |                                      |
  | protocol          | tcp                                  |
  | remote_group_id   | 1b8c08e5-728d-48ef-a24b-e4ebc20808a3

[Yahoo-eng-team] [Bug 1582087] [NEW] The default value of neutron.qos.notification_drivers should be a list

2016-05-16 Thread Hong Hui Xiao
Public bug reported:

Now the default value of neutron.qos.notification_drivers is a string
[1]. This causes no functional error, because oslo.config can consume a
string and turn it into a list. But the generated sample.conf file will
contain something like this:

#
# From neutron.qos
#

# Drivers list to use to send the update notification (list value)
#notification_drivers = m,e,s,s,a,g,e,_,q,u,e,u,e

which is not user-friendly. Changing the default value to a list
eliminates this issue.
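A minimal sketch of the fix with oslo.config (the option name and help
text are copied from the sample above; the exact module layout is an
assumption):

    from oslo_config import cfg

    QOS_PLUGIN_OPTS = [
        cfg.ListOpt('notification_drivers',
                    # A real list, not the string 'message_queue'; the
                    # sample-config generator iterates a string char by
                    # char, producing the m,e,s,s,a,g,e,... output above.
                    default=['message_queue'],
                    help='Drivers list to use to send the update'
                         ' notification'),
    ]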

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582087

Title:
  The default value of neutron.qos.notification_drivers should be a list

Status in neutron:
  New

Bug description:
  Now the default value of neutron.qos.notification_drivers is a string
  [1]. This causes no functional error, because oslo.config can consume
  a string and turn it into a list. But the generated sample.conf file
  will contain something like this:

  #
  # From neutron.qos
  #

  # Drivers list to use to send the update notification (list value)
  #notification_drivers = m,e,s,s,a,g,e,_,q,u,e,u,e

  which is not user-friendly. Changing the default value to a list
  eliminates this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581348] [NEW] Can't delete a v4 csnat port when there is a v6 router interface attached

2016-05-13 Thread Hong Hui Xiao
Public bug reported:

Reproduce:
1) Enable DVR in devstack. After installation, there is a DVR router with an 
ipv4+ipv6 router gateway, an ipv4 router interface, and an ipv6 router 
interface.

2) I want to delete the v4 subnet, so I delete the ipv4 router interface.
[fedora@normal-dvr devstack]$ neutron router-interface-delete router1 
private-subnet
Removed interface from router router1.

3) I try to delete the v4 subnet, but the neutron server tells me that the 
subnet can't be deleted because there are still ports using it.
[fedora@normal-dvr devstack]$ neutron subnet-delete private-subnet
Unable to complete operation on subnet d0282930-95ca-4f64-9ae9-8c22be9cb3ab: 
One or more ports have an IP allocation from this subnet.

4) Checking the port list, I find the csnat port is still there.
[fedora@normal-dvr devstack]$ neutron port-list
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| bf042acf-40d5-4503-b62e-7389a6fc9bca |      | fa:16:3e:47:a5:40 | {"subnet_id": "d0282930-95ca-4f64-9ae9-8c22be9cb3ab", "ip_address": "10.0.0.3"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

5) But looking into the snat namespace, there is no such port there.


Then I can't delete the subnet, because the port is there. I can't delete the 
port, because the port has the device owner network:router_centralized_snat. I 
can't even attach the subnet back to the DVR; the neutron server will tell me: 
Router already has a port on subnet.

This problem does not reproduce if there is no ipv6 subnet attached to
the DVR.

Expect: ipv4 can be used no matter whether an ipv6 subnet is attached to the DVR.

** Affects: neutron
     Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: ipv6 l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: l3-dvr-backlog

** Tags added: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581348

Title:
  Can't delete a v4 csnat port when there is a v6 router interface
  attached

Status in neutron:
  New

Bug description:
  Reproduce:
  1) Enable DVR in devstack. After installation, there is a DVR router with an 
ipv4+ipv6 router gateway, an ipv4 router interface, and an ipv6 router 
interface.

  2) I want to delete the v4 subnet, so I delete the ipv4 router interface.
  [fedora@normal-dvr devstack]$ neutron router-interface-delete router1 
private-subnet
  Removed interface from router router1.

  3) I try to delete the v4 subnet, but the neutron server tells me that the 
subnet can't be deleted because there are still ports using it.
  [fedora@normal-dvr devstack]$ neutron subnet-delete private-subnet
  Unable to complete operation on subnet d0282930-95ca-4f64-9ae9-8c22be9cb3ab: 
One or more ports have an IP allocation from this subnet.

  4) Checking the port list, I find the csnat port is still there.
  [fedora@normal-dvr devstack]$ neutron port-list
  
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | bf042acf-40d5-4503-b62e-7389a6fc9bca |      | fa:16:3e:47:a5:40 | {"subnet_id": "d0282930-95ca-4f64-9ae9-8c22be9cb3ab", "ip_address": "10.0.0.3"} |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

  5) But looking into the snat namespace, there is no such port there.

  
  Then I can't delete the subnet, because the port is there. I can't delete the 
port, because the port has the device owner network:router_centralized_snat. I 
can't even attach the subnet back to the DVR; the neutron server will tell me: 
Router already has

[Yahoo-eng-team] [Bug 1557002] [NEW] isolated metadata proxy will not be updated when router interface add/delete

2016-03-14 Thread Hong Hui Xiao
Public bug reported:

When a router interface is created/deleted, the isolated metadata proxy
for an isolated network is not updated. This causes two issues.

a) The isolated metadata proxy process will still be there, even if no
subnet uses it any more. This wastes host resources, especially when
there are many networks.

Reproduce:
1) Set "enable_isolated_metadata = True" in configuration.
2) Create a network.
3) Create an ipv4 subnet for the network.
4) Attach the subnet to a router.
The isolated metadata proxy process is useless now, but it is still there. 
Even if I restart the dhcp-agent, the process is not killed.

b) The isolated metadata proxy process will not be spawned when a
subnet becomes isolated.

Reproduce:
1) Set "enable_isolated_metadata = True" in configuration.
2) Create a network.
3) Create an ipv4 subnet for the network.
4) Attach the subnet to a router.
5) Update the network with "neutron net-update test-net --admin_state_up False" 
The isolated metadata proxy should be killed now.
6) Update the network with "neutron net-update test-net --admin_state_up True" 
7) Detach the subnet from the router. The subnet becomes isolated, but the 
isolated metadata proxy process will not be spawned, and the isolated metadata 
service cannot be used.

Bug [1] introduced a way to update the network on the dhcp agent when a
router interface is created/deleted. The fix can be based on that bug,
updating the metadata proxy process according to the router interface
change.

[1] https://bugs.launchpad.net/neutron/+bug/1544515
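A minimal sketch of the intended behavior (helper names are hypothetical;
the real agent code is organized differently):

    def subnet_is_isolated(subnet, router_ports):
        # A subnet is isolated when no router interface holds an address
        # on it.
        return not any(ip['subnet_id'] == subnet['id']
                       for port in router_ports
                       for ip in port['fixed_ips'])

    def sync_metadata_proxy(agent, network, router_ports):
        # Re-evaluated on every router-interface add/delete: spawn the
        # proxy if any subnet is isolated, kill it when none is.
        if any(subnet_is_isolated(s, router_ports)
               for s in network['subnets']):
            agent.enable_isolated_metadata_proxy(network)
        else:
            agent.disable_isolated_metadata_proxy(network)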

** Affects: neutron
     Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557002

Title:
  isolated metadata proxy will not be updated when router interface
  add/delete

Status in neutron:
  New

Bug description:
  When a router interface is created/deleted, the isolated metadata proxy
  for an isolated network is not updated. This causes two issues.

  a) The isolated metadata proxy process will still be there, even if no
  subnet uses it any more. This wastes host resources, especially when
  there are many networks.

  Reproduce:
  1) Set "enable_isolated_metadata = True" in configuration.
  2) Create a network.
  3) Create an ipv4 subnet for the network.
  4) Attach the subnet to a router.
  The isolated metadata proxy process is useless now, but it is still there. 
Even if I restart the dhcp-agent, the process is not killed.

  b) The isolated metadata proxy process will not be spawned when a
  subnet becomes isolated.

  Reproduce:
  1) Set "enable_isolated_metadata = True" in configuration.
  2) Create a network.
  3) Create an ipv4 subnet for the network.
  4) Attach the subnet to a router.
  5) Update the network with "neutron net-update test-net --admin_state_up 
False" The isolated metadata proxy should be killed now.
  6) Update the network with "neutron net-update test-net --admin_state_up 
True" 
  7) Detach the subnet from the router. The subnet becomes isolated, but the 
isolated metadata proxy process will not be spawned, and the isolated metadata 
service cannot be used.

  Bug [1] introduced a way to update the network on the dhcp agent when a
  router interface is created/deleted. The fix can be based on that bug,
  updating the metadata proxy process according to the router interface
  change.

  [1] https://bugs.launchpad.net/neutron/+bug/1544515

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549860] Re: neutron metadata- after adding a network port to a neutron router, static route is still sent to new VMs

2016-03-14 Thread Hong Hui Xiao
*** This bug is a duplicate of bug 1554825 ***
https://bugs.launchpad.net/bugs/1554825

I am going to close this bug, because the behavior in the description

"After dhcp agent reboot the route to qdhcp is removed from
/var/lib/neutron/dhcp/XYZ/opts and router will handle metadata."

is fixed by this bug: bug 1554825.

With the patch from that bug, when a router interface is added/deleted,
the dhcp agent will update the route for the metadata service. So there
is no need to restart the dhcp agent now; the router will take over or
release the metadata service.


** This bug has been marked a duplicate of bug 1554825
   Cached network object in DHCP agent not updated with router interface changes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549860

Title:
  neutron metadata- after adding a network port to a neutron router,
  static route is still sent to new VMs

Status in neutron:
  Confirmed

Bug description:
   Isolated metadata configured. Network and subnet created (no 
--no-gateway option, but the network is not connected to a router - there is no 
router). VM booted - metadata is OK and we have a static route passed to the 
VM. 
  Created a router and attached a network port to it (same network). Created 
a new VM, and qdhcp still handles the metadata even though the network is 
configured with a default gateway (/var/lib/neutron/dhcp/XYZ/opts has a static 
route for 169.254.169.254 and passes it to the new VMs - this is OK ONLY before 
we added the network to the router). After a dhcp agent restart, the route to 
qdhcp is removed from /var/lib/neutron/dhcp/XYZ/opts and the router will handle 
metadata.

  
  I strongly believe that if we made no change in the dhcp-agent.ini file, an 
agent restart (systemctl restart neutron-dhcp-agent) should not affect 
behavior in any way. 

  Additionally, 
  The static route should be removed from /var/lib/neutron/dhcp/XYZ/opts right 
after the network "XYZ" was added to the router. 

  BR 
  Alex 

  
  Version- Kilo

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554066] Re: after adding network port to router , static route is not removed from options- metadata- enable_isolated_metadata = True

2016-03-14 Thread Hong Hui Xiao
I am going to close this bug as invalid. The static route to the router
interface seems useless, but according to bug 1460793 it is needed for
Windows VMs. I think the description in that bug explains why the route
is needed.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554066

Title:
  after adding network port to router , static route is not removed from
  options- metadata- enable_isolated_metadata = True

Status in neutron:
  Invalid

Bug description:
  metadata - enable_isolated_metadata = True: after attaching the network
  to a router and restarting the dhcp agent, the static route is not
  removed from the options

  In /etc/neutron/dhcp_agent.ini:
  force_metadata = False
  enable_isolated_metadata = True
  enable_metadata_network = False

  
  The commands show that the network port is attached to the router r1:

  [stack@undercloud ~]$ neutron net-list 
  
  +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
  | id                                   | name                                               | subnets                                               |
  +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
  | 975e1494-7596-491a-ac0c-de12a3e366f0 | HA network tenant 0402cbe1a718487388d3dec5ed992ed6 | fdccb47e-9f09-40d3-b9f7-ded9c52a4710 169.254.192.0/18 |
  | e0d2d44f-cb27-48df-acac-b5e8a75e7851 | nova                                               | d9d35981-e33a-4bc4-875e-80103b36aeb7 192.168.1.0/24   |
  | 107277c1-ba33-4e50-93b2-750c8bc594a6 | int_net                                            | 058d7bdb-ebaf-48eb-8705-17c7d0ce2edc 192.168.2.0/24   |
  +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+


  [stack@undercloud ~]$ neutron router-port-list r1
  
  +--------------------------------------+--------------------------------------------------+-------------------+---------------------------------------------------------------------------------------+
  | id                                   | name                                             | mac_address       | fixed_ips                                                                             |
  +--------------------------------------+--------------------------------------------------+-------------------+---------------------------------------------------------------------------------------+
  | 101b079e-bced-43d1-8c28-7e8a92b15ebc | HA port tenant 0402cbe1a718487388d3dec5ed992ed6  | fa:16:3e:0d:85:af | {"subnet_id": "fdccb47e-9f09-40d3-b9f7-ded9c52a4710", "ip_address": "169.254.192.1"} |
  | cf129905-b53c-4e5b-b4a7-b27ab0393493 | HA port tenant 0402cbe1a718487388d3dec5ed992ed6  | fa:16:3e:f6:bf:ab | {"subnet_id": "fdccb47e-9f09-40d3-b9f7-ded9c52a4710", "ip_address": "169.254.192.2"} |
  | eb5a7a0c-a392-4a17-98b5-99d9c3130349 |                                                  | fa:16:3e:be:99:10 | {"subnet_id": "058d7bdb-ebaf-48eb-8705-17c7d0ce2edc", "ip_address": "192.168.2.1"}   |
  +--------------------------------------+--------------------------------------------------+-------------------+---------------------------------------------------------------------------------------+


  Here we will see that the static route still exists:

  [root@overcloud-controller-2 ~]# cat /var/lib/neutron/dhcp/107277c1-ba33-4e50-93b2-750c8bc594a6/opts
  tag:tag0,option:dns-server,10.35.28.28
  tag:tag0,option:classless-static-route,169.254.169.254/32,192.168.2.1,0.0.0.0/0,192.168.2.1
  tag:tag0,249,169.254.169.254/32,192.168.2.1,0.0.0.0/0,192.168.2.1
  tag:tag0,option:router,192.168.2.1
  [root@overcloud-controller-2 ~]#

  
  With the enable_isolated_metadata = True configuration: when attaching a 
network port to a neutron router, the static route in 
/var/lib/neutron/NETID/opts should be removed (no agent restart should be 
needed - https://bugs.launchpad.net/neutron/+bug/1549860 ) 

  
  Liberty

  Reproducible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555027] [NEW] Can't get IPV6 address from radvd when router interface is added after vm creation

2016-03-09 Thread Hong Hui Xiao
Public bug reported:

Env: upstream code with linux bridge agent

Steps to reproduce:
1) Create network
2) create ipv4 and ipv6 subnet(ipv6_address_mode:slaac,  ipv6_ra_mode:slaac )
3) boot vm in the network
4) Add subnet(both ipv4 and ipv6) as router interfaces
5) Check the vm's IPV6 address

Expected: the vm has ipv6 address config and usable
Actual: the ipv6 prefix, which should come from radvd, is not configured on the vm

Rebooting the vm does not help. Booting another vm in the subnet gets ipv6
addresses for both of them.

radvdump in the router namespace can capture the RA information of the subnets.

Comparing the ip6tables rules, one rule is missing:

Chain neutron-linuxbri-id7a6cff0-e (1 references)
1   104 RETURN icmpv6  *  *   fe80::f816:3eff:fe4e:eb56  ::/0 ipv6-icmptype 134

Booting another vm makes this rule appear; adding it manually also makes
things work.
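A minimal sketch of composing the missing rule (constant and helper names
are assumptions): it allows ICMPv6 Router Advertisements (type 134) from
the router interface's link-local address.

    ICMPV6_TYPE_RA = 134

    def ra_allow_rule(router_lla):
        # Mirrors the rule shown above in the neutron-linuxbri-i... chain.
        return ('-p ipv6-icmp -s %s/128 --icmpv6-type %s -j RETURN'
                % (router_lla, ICMPV6_TYPE_RA))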

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: ipv6

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555027

Title:
  Can't get IPV6 address from radvd when router interface is added after
  vm creation

Status in neutron:
  New

Bug description:
  Env: upstream code with linux bridge agent

  Steps to reproduce:
  1) Create network
  2) create ipv4 and ipv6 subnet(ipv6_address_mode:slaac,  ipv6_ra_mode:slaac )
  3) boot vm in the network
  4) Add subnet(both ipv4 and ipv6) as router interfaces
  5) Check the vm's IPV6 address

  Expected: the vm has ipv6 address config and usable
  Actual: the ipv6 prefix, which should come from radvd, is not configured 
on the vm

  Rebooting the vm does not help. Booting another vm in the subnet gets
  ipv6 addresses for both of them.

  radvdump in the router namespace can capture the RA information of the
  subnets.

  Comparing the ip6tables rules, one rule is missing:

  Chain neutron-linuxbri-id7a6cff0-e (1 references)
  1   104 RETURN icmpv6  *  *   fe80::f816:3eff:fe4e:eb56  ::/0 ipv6-icmptype 134

  Booting another vm makes this rule appear; adding it manually also makes
  things work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554392] [NEW] Set extra route for DVR might cause error

2016-03-07 Thread Hong Hui Xiao
Public bug reported:

With a DVR router, I have:
external network: 172.24.4.0/24
internal network: 10.0.0.0/24

I want to set an extra route for it, so I execute the following command:

neutron router-update router1 --route
destination=20.0.0.0/24,nexthop=172.24.4.6

But I get this error in the output of neutron-l3-agent:

ERROR neutron.agent.linux.utils [-] Exit code: 2; Stdin: ; Stdout: ;
Stderr: RTNETLINK answers: Network is unreachable

The reason is that a DVR router sets extra routes in both the snat and
qrouter namespaces. However, the qrouter namespace has no route to the
external network, so an error is reported when the l3-agent tries to add
a route whose nexthop is on the external network to the qrouter namespace.
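A minimal sketch of the fix direction (hypothetical helper): only program
an extra route into a namespace whose connected CIDRs can actually reach
the route's nexthop.

    import netaddr

    def routes_for_namespace(routes, connected_cidrs):
        reachable = [netaddr.IPNetwork(c) for c in connected_cidrs]
        return [r for r in routes
                if any(netaddr.IPAddress(r['nexthop']) in net
                       for net in reachable)]

In the example above, the qrouter namespace (connected only to
10.0.0.0/24) would skip the route via 172.24.4.6, while the snat
namespace, which holds the external port, would program it.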

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554392

Title:
  Set extra route for DVR might cause error

Status in neutron:
  New

Bug description:
  With a DVR router, I have:
  external network: 172.24.4.0/24
  internal network: 10.0.0.0/24

  I want to set an extra route for it, so I execute the following
  command:

  neutron router-update router1 --route
  destination=20.0.0.0/24,nexthop=172.24.4.6

  But I get this error in the output of neutron-l3-agent:

  ERROR neutron.agent.linux.utils [-] Exit code: 2; Stdin: ; Stdout: ;
  Stderr: RTNETLINK answers: Network is unreachable

  The reason is that a DVR router sets extra routes in both the snat and
  qrouter namespaces. However, the qrouter namespace has no route to the
  external network, so an error is reported when the l3-agent tries to
  add a route whose nexthop is on the external network to the qrouter
  namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548217] Re: Revert the unused code for address scope

2016-03-06 Thread Hong Hui Xiao
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548217

Title:
  Revert the unused code for address scope

Status in neutron:
  Fix Released

Bug description:
  This bug is to revert the code in [1], which ended up not being used by
  address scope.

  
  [1] https://review.openstack.org/#/c/192032/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552964] Re: neutron subnet gateway is inconsistent with port ip

2016-03-03 Thread Hong Hui Xiao
The gateway in the subnet can be treated as the default gateway of the
subnet. However, the router interface is the port that plugs into the
router. They are not the same thing.

When you create a router interface, you can specify a subnet. In that
case, the default gateway IP will be used.

But if you specify a port when creating the router interface, you can
make any number of ports in a subnet router interfaces.

So, I think this works as designed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552964

Title:
  neutron subnet gateway is inconsistent with port ip

Status in neutron:
  Invalid

Bug description:
  We are using liberty.
  When attaching a subnet to its tenant router, we specify the subnet gateway 
to be 1.1.1.44.
  However, the subnet still uses 1.1.1.1 as its gateway.
  Following is the example.

  [root@numa1 ~(keystone_admin)]# neutron net-list
  
  +--------------------------------------+------+-------------------------------------------------+
  | id                                   | name | subnets                                         |
  +--------------------------------------+------+-------------------------------------------------+
  | 4e461989-e99f-438e-87d7-46456eb3559c | xin  | 016de55b-52d7-4b99-af72-586935749a02 1.1.1.0/24 |
  +--------------------------------------+------+-------------------------------------------------+
  [root@numa1 ~(keystone_admin)]# neutron subnet-show 016de55b-52d7-4b99-af72-586935749a02
  +-------------------+------------------------------------------+
  | Field             | Value                                    |
  +-------------------+------------------------------------------+
  | allocation_pools  | {"start": "1.1.1.2", "end": "1.1.1.254"} |
  | cidr              | 1.1.1.0/24                               |
  | dns_nameservers   |                                          |
  | enable_dhcp       | True                                     |
  | gateway_ip        | 1.1.1.1                                  |
  | host_routes       |                                          |
  | id                | 016de55b-52d7-4b99-af72-586935749a02     |
  | ip_version        | 4                                        |
  | ipv6_address_mode |                                          |
  | ipv6_ra_mode      |                                          |
  | name              | xin                                      |
  | network_id        | 4e461989-e99f-438e-87d7-46456eb3559c     |
  | subnetpool_id     |                                          |
  | tenant_id         | 3a8ac6432efe4e90b79554270cda915d         |
  +-------------------+------------------------------------------+
  [root@numa1 ~(keystone_admin)]# neutron port-list
  
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | bab59f8d-6d99-4619-bb7e-aeb5a43588a8 |      | fa:16:3e:de:ea:7e | {"subnet_id": "016de55b-52d7-4b99-af72-586935749a02", "ip_address": "1.1.1.2"}  |
  | fcb53ef0-3178-4149-97da-7cc8826ad6bd |      | fa:16:3e:92:86:d2 | {"subnet_id": "016de55b-52d7-4b99-af72-586935749a02", "ip_address": "1.1.1.44"} |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  [root@numa1 ~(keystone_admin)]# neutron port-show fcb53ef0-3178-4149-97da-7cc8826ad6bd
  +-----------------------+---------+
  | Field                 | Value   |
  +-----------------------+---------+
  | admin_state_up        | True    |
  | allowed_address_pairs |         |
  | binding:host_id       |         |
  | binding:profile       | {}      |
  | binding:vif_details   | {}      |
  | binding:vif_type      | unbound

[Yahoo-eng-team] [Bug 1551530] [NEW] With snat disabled legacy router Pings to floating IPs replied with fixed-ips

2016-02-29 Thread Hong Hui Xiao
Public bug reported:

On my single node devstack setup, there are 2 VMs hosted. VM1 has no floating 
IP assigned. VM2 has a floating IP assigned. From VM1, I ping VM2 using the 
floating IP. The ping output reports that the replies come from VM2's fixed ip 
address. The reply should come from VM2's floating ip address.

VM1: 10.0.0.4
VM2: 10.0.0.3  floating ip:172.24.4.4

$ ping 172.24.4.4 -c 1 -W 1
PING 172.24.4.4 (172.24.4.4): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=3.440 ms

This only happens for a legacy router with snat disabled when, at the
same time, VM1 and VM2 are in the same subnet.

Comparing the iptables rules, the following rule is missing when snat is
disabled.

Chain neutron-vpn-agen-snat (1 references)
 pkts bytes target prot opt in out source     destination
  184 SNAT   all  --  *   *   0.0.0.0/0  0.0.0.0/0    mark match ! 0x2/0x ctstate DNAT to:172.24.4.6

This rule SNATs internal traffic to the floating ip. Without this rule,
the packets from VM2 replying to VM1 are treated as traffic inside the
subnet, and that traffic does not go through the router. As a result,
the DNAT record in the router namespace does not apply to the reply
packets.

The intended fix adds the mentioned iptables rule regardless of whether
snat is enabled. The packets from VM2 replying to VM1 will then be
destined to <172.24.4.6> and go through the router namespace. As a
result, the DNAT and SNAT records will work to make things right.
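A minimal sketch of composing the described rule (the mark/mask value is
an assumption; the quoted iptables output above is truncated):

    ROUTER_MARK_MASK = '0x2/0xffff'

    def dnat_hairpin_snat_rule(ex_gw_ip):
        # SNAT any connection that was DNATed to a floating IP, except
        # traffic already marked as arriving from the external gateway,
        # so same-subnet VMs see replies coming from the floating IP.
        return ('-m mark ! --mark %s -m conntrack --ctstate DNAT '
                '-j SNAT --to-source %s' % (ROUTER_MARK_MASK, ex_gw_ip))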

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551530

Title:
  With snat disabled legacy router Pings to floating IPs replied with
  fixed-ips

Status in neutron:
  New

Bug description:
  On my single node devstack setup, there are 2 VMs hosted. VM1 has no floating 
IP assigned. VM2 has a floating IP assigned. From VM1, I ping VM2 using the 
floating IP. The ping output reports that the replies come from VM2's fixed ip 
address.
  The reply should come from VM2's floating ip address.

  VM1: 10.0.0.4
  VM2: 10.0.0.3  floating ip:172.24.4.4

  $ ping 172.24.4.4 -c 1 -W 1
  PING 172.24.4.4 (172.24.4.4): 56 data bytes
  64 bytes from 10.0.0.3: seq=0 ttl=64 time=3.440 ms

  This only happens for a legacy router with snat disabled when, at the
  same time, VM1 and VM2 are in the same subnet.

  Comparing the iptables rules, the following rule is missing when snat is
  disabled.

  Chain neutron-vpn-agen-snat (1 references)
   pkts bytes target prot opt in out source     destination
    184 SNAT   all  --  *   *   0.0.0.0/0  0.0.0.0/0    mark match ! 0x2/0x ctstate DNAT to:172.24.4.6

  This rule SNATs internal traffic to the floating ip. Without this rule,
  the packets from VM2 replying to VM1 are treated as traffic inside the
  subnet, and that traffic does not go through the router. As a result,
  the DNAT record in the router namespace does not apply to the reply
  packets.

  The intended fix adds the mentioned iptables rule regardless of whether
  snat is enabled. The packets from VM2 replying to VM1 will then be
  destined to <172.24.4.6> and go through the router namespace. As a
  result, the DNAT and SNAT records will work to make things right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542819] Re: The details of security group contains "null"

2016-02-25 Thread Hong Hui Xiao
@libosvar, thanks for pointing that out. I just checked the code of
neutronclient, and I found that the "any" in the output of "neutron
security-group-rule-show" is not a simple conversion from null to "any".

So, we can't simply change all nulls to "any" in [2]. For example, when
remote-ip-prefix is null and remote-group-id is not null, the output
will look like the following after the conversion:

{
    "remote_group_id": "1c3ec647-7377-43c8-a046-4303b8d2b521",
    "remote_ip_prefix": "any",
}

I think the result is more confusing. So, I will close this bug as
invalid.

[2] http://paste.openstack.org/show/488142/

** Changed in: neutron
   Status: Triaged => Invalid

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542819

Title:
  The details of security group contains "null"

Status in python-neutronclient:
  Invalid

Bug description:
  When using security groups, I found that some output of the security group 
CLI will be "null". This happens when the value is not specified.
  Under the same condition, "neutron security-group-rule-list" will report 
"any". However, "neutron security-group-rule-show" will report empty.

  The details can be found at [1].

  I think, if the value is not specified for a security group rule, we
  could show "any" to the user. This will make the output consistent and
  easier to understand.

  [1]  http://paste.openstack.org/show/486190/

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1542819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542819] Re: The details of security group contains "null"

2016-02-24 Thread Hong Hui Xiao
From the output in [2], it should be a neutron issue instead of a
neutronclient issue.

[2] http://paste.openstack.org/show/488142/

** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542819

Title:
  The details of security group contains "null"

Status in neutron:
  Triaged

Bug description:
  When using security groups, I found that some output of the security group 
CLI will be "null". This happens when the value is not specified.
  Under the same condition, "neutron security-group-rule-list" will report 
"any". However, "neutron security-group-rule-show" will report empty.

  The details can be found at [1].

  I think, if the value is not specified for a security group rule, we
  could show "any" to the user. This will make the output consistent and
  easier to understand.

  [1]  http://paste.openstack.org/show/486190/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549311] [NEW] Unexpected SNAT behavior between instances with DVR+floating ip

2016-02-24 Thread Hong Hui Xiao
Public bug reported:

This might be related to [1]. The fix in [1] should also be applied to
dvr_local_router.

= Scenario =

• Latest code
• Single Neutron DVR router, multiple hosts
• two instances in two tenant networks attached to DVR router, the two 
instances are in two different hosts
• Instance A has a floatingip

INSTANCE A: TestNet1=100.0.0.4, 172.168.1.53
INSTANCE B: TestNet2=100.0.1.4

Pinging from INSTANCE A to INSTANCE B:
tcpdump from Instance B
[root@dvr-compute2 fedora]# tcpdump -ni qr-ca45d1e3-5d icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qr-ca45d1e3-5d, link-type EN10MB (Ethernet), capture size 262144 
bytes
14:31:54.054629 IP 100.0.1.4 > 172.168.1.53: ICMP echo reply, id 18433, seq 0, 
length 64


[1] https://bugs.launchpad.net/neutron/+bug/1505781

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549311

Title:
  Unexpected SNAT behavior between instances with DVR+floating ip

Status in neutron:
  New

Bug description:
  This might be related to [1]. The fix in [1] should also be applied to
  dvr_local_router.

  = Scenario =

  • Latest code
  • Single Neutron DVR router, multiple hosts
  • two instances in two tenant networks attached to DVR router, the two 
instances are in two different hosts
  • Instance A has a floatingip

  INSTANCE A: TestNet1=100.0.0.4, 172.168.1.53
  INSTANCE B: TestNet2=100.0.1.4

  Pinging from INSTANCE A to INSTANCE B:
  tcpdump from Instance B
  [root@dvr-compute2 fedora]# tcpdump -ni qr-ca45d1e3-5d icmp
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on qr-ca45d1e3-5d, link-type EN10MB (Ethernet), capture size 262144 
bytes
  14:31:54.054629 IP 100.0.1.4 > 172.168.1.53: ICMP echo reply, id 18433, seq 
0, length 64

  
  [1] https://bugs.launchpad.net/neutron/+bug/1505781

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548217] [NEW] Revert the unused code for address scope

2016-02-22 Thread Hong Hui Xiao
Public bug reported:

This bug is to revert the code in [1], which ended up not being used by
address scope.


[1] https://review.openstack.org/#/c/192032/

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: address-scopes

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: address-scopes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548217

Title:
  Revert the unused code for address scope

Status in neutron:
  New

Bug description:
  This bug is to revert the code in [1], which ended up not being used by
  address scope.

  
  [1] https://review.openstack.org/#/c/192032/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547380] [NEW] Correlate address scope with network

2016-02-19 Thread Hong Hui Xiao
Public bug reported:

With address scopes enabled, networks are now in one ipv4 address scope
and one ipv6 address scope. But it is currently difficult to find out
which address scopes a network is in: users have to check in this way:
network -> subnet -> subnet pool -> address scope.

This bug is to add a read-only, derived attribute to the network as part
of the address scopes extension that will show related address scopes
when viewing a network through the API.
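A minimal sketch of how the derived attribute could look in an API
response (the attribute names are assumptions based on this proposal):

    network = {
        'id': '4e461989-e99f-438e-87d7-46456eb3559c',
        'name': 'net1',
        # Read-only, derived: resolved via subnet -> subnet pool ->
        # address scope, so users no longer walk that chain by hand.
        'ipv4_address_scope': '<ipv4-address-scope-uuid>',
        'ipv6_address_scope': None,
    }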

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: address-scopes l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: l3-ipam

** Tags removed: l3-ipam
** Tags added: l3-ipam-dhcp

** Tags added: address-scopes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547380

Title:
  Correlate address scope with network

Status in neutron:
  New

Bug description:
  With address scopes enabled, networks are now in one ipv4 address scope
  and one ipv6 address scope. But it is currently difficult to find out
  which address scopes a network is in: users have to check in this way:
  network -> subnet -> subnet pool -> address scope.

  This bug is to add a read-only, derived attribute to the network as
  part of the address scopes extension that will show related address
  scopes when viewing a network through the API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1547380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543885] [NEW] Support using floating IP to connect internal subnets in different address scopes

2016-02-09 Thread Hong Hui Xiao
Public bug reported:

This is a known limitation identified when discussing address scopes with Carl.

In the implementation of [1], a floating ip can be used to connect an
internal subnet and an external subnet in different address scopes. But
the internal subnets that are in the same address scope as the external
network are isolated. This is diagrammed in [2].

Because they are in the same address scope, it is fair for fixed IP A to
connect to internal networks in address scope 2, just as it can connect
to the external network in address scope 2.

The target of this bug is to build the connection between fixed IP A and
the internal networks in address scope 2. This is diagrammed in [3].

[1] https://review.openstack.org/#/c/270001/
[2] http://paste.openstack.org/show/486507/
[3] http://paste.openstack.org/show/486508/

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: address-scopes

** Tags added: address-scopes

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543885

Title:
  Support using floating IP to connect internal subnets in different
  address scopes

Status in neutron:
  New

Bug description:
  This is a known limitation identified when discussing address scopes
  with Carl.

  In the implementation of [1], a floating ip can be used to connect an
  internal subnet and an external subnet in different address scopes.
  But the internal subnets that are in the same address scope as the
  external network are isolated. This is diagrammed in [2].

  Because they are in the same address scope, it is fair for fixed IP A
  to connect to internal networks in address scope 2, just as it can
  connect to the external network in address scope 2.

  The target of this bug is to build the connection between fixed IP A
  and the internal networks in address scope 2. This is diagrammed in [3].

  [1] https://review.openstack.org/#/c/270001/
  [2] http://paste.openstack.org/show/486507/
  [3] http://paste.openstack.org/show/486508/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542819] Re: The details of security group contains "null"

2016-02-09 Thread Hong Hui Xiao
I reported this bug because I think it will be hard for a non-programmer
to understand the meaning, and the output of the security-group CLI is
inconsistent (null, any, empty).

@nate-johnston, I agree with you that hiding the null value may be
inappropriate, because it is implicit.

I will update the bug to change the null/empty value to "any" to see
where the discussion goes.

** Description changed:

  When using security group, I found the some output of security group will be 
"null". This happens when the value is not specified.
  Under the same condition, "neutron security-group-rule-list" will report 
"any". However, "neutron security-group-rule-show" will report empty.
  
  The details can be found at [1].
  
- I think, if the value it not specified for a security group rule, we can
- hide it from the output of "neutron security-group-show". It is
- meaningless to show a "null" to user.
- 
+ I think, if the value it not specified for a security group rule, we
+ could show "any" to user. This will make the output be consistent, and
+ the more easily to understand.
  
  [1]  http://paste.openstack.org/show/486190/

** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542819

Title:
  The details of security group contains "null"

Status in neutron:
  New

Bug description:
  When using security groups, I found that some output of the security group 
CLI will be "null". This happens when the value is not specified.
  Under the same condition, "neutron security-group-rule-list" will report 
"any". However, "neutron security-group-rule-show" will report empty.

  The details can be found at [1].

  I think, if the value is not specified for a security group rule, we
  could show "any" to the user. This will make the output consistent and
  easier to understand.

  [1]  http://paste.openstack.org/show/486190/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420056] Re: Deleting last rule in Security Group does not update firewall

2016-02-08 Thread Hong Hui Xiao
*** This bug is a duplicate of bug 1460562 ***
https://bugs.launchpad.net/bugs/1460562

I just checked: the bug can't be reproduced with the latest code. After
checking the history, the bug has been fixed by [1]. I will close this
bug as a duplicate.


[1] 
https://github.com/openstack/neutron/blob/764f018f50ac7cd42c29efeabaccbb5aec21f6f4/neutron/db/securitygroups_rpc_base.py#L208-L212

** This bug has been marked a duplicate of bug 1460562
   ipset can't be destroyed when last sg rule is deleted

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420056

Title:
  Deleting last rule in Security Group does not update firewall

Status in neutron:
  In Progress

Bug description:
  
  Scenario:
   VM port with 1 Security Group with 1 egress icmp rule
  (example rule:
  {u'ethertype': u'IPv4', u'direction': u'egress', u'protocol': u'icmp', 
u'dest_ip_prefix': u'0.0.0.0/0'}
  )

  Steps:
   Delete the (last) rule from the above Security Group via Horizon

  Result:
  Find that iptables still shows the egress icmp rule even after its deletion

  Root Cause:
  In this scenario, security_group_info_for_devices() returns the following 
to the agent. Note that the 'security_groups' field is an empty dictionary {}! 
This causes _update_security_groups_info in the agent to NOT update the 
firewall.

  The security_groups field must contain the security_group_id as key
  with an empty list for the rules.

  
  {u'sg_member_ips': {}, u'devices': {u'ea19fb55-39bb-4e59-9d10-26c74eb3ff95': 
{u'status': u'ACTIVE', u'security_group_source_groups': [], u'binding:host_id': 
u'vRHEL29-1', u'name': u'', u'allowed_address_pairs': [{u'ip_address': 
u'10.0.0.201', u'mac_address': u'fa:16:3e:02:4b:b3'}, {u'ip_address': 
u'10.0.10.202', u'mac_address': u'fa:16:3e:02:4b:b3'}, {u'ip_address': 
u'10.0.20.203', u'mac_address': u'fa:16:3e:02:4b:b3'}], u'admin_state_up': 
True, u'network_id': u'f665dc8c-76da-4fde-8d26-535871487e4c', u'tenant_id': 
u'f5019aeae9e64443970bb0842e22e2b3', u'extra_dhcp_opts': [], 
u'security_group_rules': [{u'source_port_range_min': 67, u'direction': 
u'ingress', u'protocol': u'udp', u'ethertype': u'IPv4', u'port_range_max': 68, 
u'source_port_range_max': 67, u'source_ip_prefix': u'10.0.2.3', 
u'port_range_min': 68}], u'binding:vif_details': {u'port_filter': False}, 
u'binding:vif_type': u'bridge', u'device_owner': u'compute:nova', 
u'mac_address': u'fa:16:3e:02:4b:b3', u'device': u'tapea19fb5
 5-39', u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips': 
[u'10.0.2.6'], u'id': u'ea19fb55-39bb-4e59-9d10-26c74eb3ff95', 
u'security_groups': [u'849ee59c-d100-4940-930b-44e358775ed3'], u'device_id': 
u'2b330c29-c16f-4bbf-b80a-bd5bae41b514'}}, u'security_groups': {}} 
security_group_info_for_devices 
/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py:104

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542819] [NEW] The details of security group contains "null"

2016-02-07 Thread Hong Hui Xiao
Public bug reported:

When using security groups, I found that some output of the security group 
CLI will be "null". This happens when the value is not specified.
Under the same condition, "neutron security-group-rule-list" will report "any". 
However, "neutron security-group-rule-show" will report empty.

The details can be found at [1].

I think, if the value is not specified for a security group rule, we can
hide it from the output of "neutron security-group-show". It is
meaningless to show a "null" to the user.


[1]  http://paste.openstack.org/show/486190/

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542819

Title:
  The details of security group contains "null"

Status in neutron:
  New

Bug description:
  When using security groups, I found that some output of the security group 
CLI will be "null". This happens when the value is not specified.
  Under the same condition, "neutron security-group-rule-list" will report 
"any". However, "neutron security-group-rule-show" will report empty.

  The details can be found at [1].

  I think, if the value is not specified for a security group rule, we
  can hide it from the output of "neutron security-group-show". It is
  meaningless to show a "null" to the user.


  [1]  http://paste.openstack.org/show/486190/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540064] [NEW] neutron router-update --routes should prevent user from adding existing cidr

2016-01-31 Thread Hong Hui Xiao
Public bug reported:

Found this while doing some debugging.
Steps to reproduce:
1) I have a router with some subnets connected to it, for example 100.0.0.0/24 
and 100.0.1.0/24. There will be some routes created by the kernel for the 
router interfaces.
# ip r
100.0.1.0/24 dev qr-fef63af2-82  proto kernel  scope link  src 100.0.1.1 
100.0.0.0/24 dev qr-af2ae2b0-8c  proto kernel  scope link  src 100.0.0.1

2) I update the extra route by (just for testing)
neutron router-update router1 --route 
destination=100.0.1.0/24,nexthop=100.0.0.3 

3) The route that was for 100.0.1.0/24 will be replaced.
# ip r
100.0.1.0/24 via 100.0.0.3 dev qr-af2ae2b0-8c 
100.0.0.0/24 dev qr-af2ae2b0-8c  proto kernel  scope link  src 100.0.0.1

4) I clear the extra route that I added by using 
neutron router-update router1 --no-routes
Then I get the following routes in the router namespace:
# ip r
100.0.0.0/24 dev qr-af2ae2b0-8c  proto kernel  scope link  src 100.0.0.1

As a result, I lost the connection to 100.0.1.0/24.


I think neutron should prevent users from adding extra routes for the existing 
cidrs that are connected to the router (as interface or as gateway).
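A minimal sketch of the suggested validation (hypothetical helper; a real
check would live in the router-update API path):

    import netaddr

    def validate_extra_routes(routes, connected_cidrs):
        for route in routes:
            dest = netaddr.IPNetwork(route['destination'])
            for cidr in (netaddr.IPNetwork(c) for c in connected_cidrs):
                # Reject destinations that overlap a cidr already
                # attached to the router as interface or gateway.
                if dest in cidr or cidr in dest:
                    raise ValueError('route destination %s overlaps '
                                     'connected subnet %s' % (dest, cidr))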

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540064

Title:
  neutron router-update --routes should prevent user from adding
  existing cidr

Status in neutron:
  New

Bug description:
  Found this while doing some debugging.
  Steps to reproduce:
  1) I have a router with some subnets connected to it, for example 
100.0.0.0/24 and 100.0.1.0/24. There will be some routes created by the kernel 
for the router interfaces.
  # ip r
  100.0.1.0/24 dev qr-fef63af2-82  proto kernel  scope link  src 100.0.1.1 
  100.0.0.0/24 dev qr-af2ae2b0-8c  proto kernel  scope link  src 100.0.0.1

  2) I update the extra route by (just for testing)
  neutron router-update router1 --route 
destination=100.0.1.0/24,nexthop=100.0.0.3 

  3) The route that was for 100.0.1.0/24 will be replaced.
  # ip r
  100.0.1.0/24 via 100.0.0.3 dev qr-af2ae2b0-8c 
  100.0.0.0/24 dev qr-af2ae2b0-8c  proto kernel  scope link  src 100.0.0.1

  4) I clear the extra route that I added by using 
  neutron router-update router1 --no-routes
  Then I get the following routes in the router namespace:
  # ip r
  100.0.0.0/24 dev qr-af2ae2b0-8c  proto kernel  scope link  src 100.0.0.1

  As a result, I lost the connection to 100.0.1.0/24.

  
  I think neutron should prevent user from adding extra routes to the existing 
cidrs that have been connected to router(as interface or as gateway).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533904] [NEW] Disassociate floatingip in HA router might get error

2016-01-13 Thread Hong Hui Xiao
Public bug reported:

With master code, I try to disassociate a floatingip on an HA router, but
I see the following error in the log.

2016-01-14 00:19:52.708 ERROR neutron.agent.linux.utils [-] Exit code:
2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Cannot assign requested
address

2016-01-14 00:19:52.710 ERROR neutron.agent.l3.router_info [-] Failed to 
process floating IPs.
2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info Traceback (most 
recent call last):
2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 672, in 
process_external
2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info fip_statuses = 
self.configure_fip_addresses(interface_name)
2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 251, in 
configure_fip_addresses
2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info raise 
n_exc.FloatingIpSetupException('L3 agent failure to setup '
2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info 
FloatingIpSetupException: L3 agent failure to setup floating IPs
2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info

** Affects: neutron
 Importance: Undecided
     Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533904

Title:
  Disassociate floatingip in HA router might get error

Status in neutron:
  New

Bug description:
  With master code, I try to disassociate a floatingip on an HA router,
  but I see the following error in the log.

  2016-01-14 00:19:52.708 ERROR neutron.agent.linux.utils [-] Exit code:
  2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Cannot assign
  requested address

  2016-01-14 00:19:52.710 ERROR neutron.agent.l3.router_info [-] Failed to 
process floating IPs.
  2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info Traceback (most 
recent call last):
  2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 672, in 
process_external
  2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info fip_statuses = 
self.configure_fip_addresses(interface_name)
  2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 251, in 
configure_fip_addresses
  2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info raise 
n_exc.FloatingIpSetupException('L3 agent failure to setup '
  2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info 
FloatingIpSetupException: L3 agent failure to setup floating IPs
  2016-01-14 00:19:52.710 TRACE neutron.agent.l3.router_info

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531412] Re: can not boot instance with 31 prefix subnet

2016-01-06 Thread Hong Hui Xiao
If the prefixlen is 31, there will be 2 addresses in the cidr; the first is 
X.X.X.0 in your case, which is a reserved ip address. 
With a subnetpool, the first usable ip address (X.X.X.1 in your case) is used 
as the gateway ip address by default.

So, there is no usable ip address left in the subnet, as the error says: "No
fixed IP addresses available for network"
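A quick illustration with netaddr (the library neutron's IPAM code uses):

    >>> import netaddr
    >>> list(netaddr.IPNetwork('100.0.0.0/31'))
    [IPAddress('100.0.0.0'), IPAddress('100.0.0.1')]

100.0.0.0 is excluded as the network address and 100.0.0.1 is consumed as
the default gateway, so no fixed ip is left for an instance.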

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531412

Title:
  can not boot instance with 31 prefix subnet

Status in neutron:
  Invalid

Bug description:
  [Summary]
  can not boot instance with 31 prefix subnet

  [Topo]
  devstack all-in-one node

  [Description and expect result]
  can boot instance with 31 prefix subnet

  [Reproducible or not]
  reproducible 

  [Recreate Steps]
  1) create a 31 prefix subnet using subnetpool:
  root@45-59:/opt/stack/devstack# neutron subnetpool-show pool1
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | address_scope_id  |                                      |
  | default_prefixlen | 27                                   |
  | default_quota     |                                      |
  | id                | bfc49547-7883-47d5-af43-1d7825ef6fbf |
  | ip_version        | 4                                    |
  | is_default        | False                                |
  | max_prefixlen     | 32                                   |
  | min_prefixlen     | 8                                    |
  | name              | pool1                                |
  | prefixes          | 100.0.0.0/24                         |
  | shared            | False                                |
  | tenant_id         | 72a70635fa0c42a2bcba67edd760d516     |
  +-------------------+--------------------------------------+
  root@45-59:/opt/stack/devstack# neutron subnet-create --subnetpool pool1 
  --prefixlen 31 net2 --name sub2
  Created a new subnet:
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | allocation_pools  |                                      |
  | cidr              | 100.0.0.0/31                         |
  | dns_nameservers   |                                      |
  | enable_dhcp       | True                                 |
  | gateway_ip        | 100.0.0.1                            |
  | host_routes       |                                      |
  | id                | 0795544e-cf70-4cff-9387-d3dc23bcbaf5 |
  | ip_version        | 4                                    |
  | ipv6_address_mode |                                      |
  | ipv6_ra_mode      |                                      |
  | name              | sub2                                 |
  | network_id        | 362bbef0-9a1a-4d94-878d-5d503e92b149 |
  | subnetpool_id     | bfc49547-7883-47d5-af43-1d7825ef6fbf |
  | tenant_id         | 72a70635fa0c42a2bcba67edd760d516     |
  +-------------------+--------------------------------------+
  root@45-59:/opt/stack/devstack# 


  2) Boot an instance with the /31 subnet; it fails:

  boot cmd:
  nova boot --flavor 1 --image cirros-0.3.4-x86_64-uec --availability-zone nova --nic net-id=362bbef0-9a1a-4d94-878d-5d503e92b149 inst1

  instance boot error:
  {"message": "Build of instance 2d0b313c-f8b8-414b-8d42-f7ea0fc58865 aborted: Failed to allocate the network(s) with error No fixed IP addresses available for network: 362bbef0-9a1a-4d94-878d-5d503e92b149, not rescheduling.", "code": 500, "details": "  File \"/opt/stack/nova/nova/compute/manager.py\", line 1914, in _do_build_and_run_instance

  3) Boot an instance with the /31 subnet and a fixed IP; it also fails:

  boot cmd:
  nova boot --flavor 1 --image cirros-0.3.4-x86_64-uec --availability-zone nova --nic net-id=362bbef0-9a1a-4d94-878d-5d503e92b149,v4-fixed-ip=100.0.0.1 inst3

  instance boot error:
  {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": "  File \"/opt/stack/nova/nova/conductor/manager.py\", line 363, in build_instances

  Note: for a /31 subnet there are only 2 addresses, with no network address
  and no broadcast address, so both addresses should be usable for booting
  instances.

  [Configuration]
  Reproducible bug; not needed.

  [logs]
  Reproducible bug; not needed.

  [Root cause analysis or debug info]
  Reproducible bug.

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1531418] Re: there is useless 'u' in the showing of "neutron subnetpool-list"

2016-01-06 Thread Hong Hui Xiao
According to [1], it should be a neutron-client issue.


[1] http://paste.openstack.org/show/483111/
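
For illustration, a minimal sketch of where such output typically comes from
on Python 2 (illustrative code, not the client's actual implementation):
printing the repr of a list of unicode strings leaks the u'' markers.

  prefixes = [u'100.0.0.0/24']
  print(str(prefixes))        # "[u'100.0.0.0/24']" -- what the table shows
  print(', '.join(prefixes))  # "100.0.0.0/24"      -- a cleaner rendering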

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531418

Title:
  there is useless 'u' in the showing of "neutron subnetpool-list"

Status in python-neutronclient:
  New

Bug description:
  [Summary]
  there is useless 'u' in the showing of "neutron subnetpool-list"

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  There should be no stray 'u' in the output of "neutron subnetpool-list".

  [Reproducible or not]
  Reproducible

  [Recreate Steps]
  1) There is a stray 'u' in the output of "neutron subnetpool-list":
  root@45-59:/opt/stack/devstack# neutron subnetpool-list
  +--------------------------------------+-------+-------------------+-------------------+------------------+
  | id                                   | name  | prefixes          | default_prefixlen | address_scope_id |
  +--------------------------------------+-------+-------------------+-------------------+------------------+
  | 05c6285e-bc5b-4d0d-bbe3-dbb08a0612e1 | pool1 | [u'100.0.0.0/24'] | 8                 |                  |
  +--------------------------------------+-------+-------------------+-------------------+------------------+
  root@45-59:/opt/stack/devstack#

  [Configuration]
  Reproducible bug; not needed.

  [logs]
  Reproducible bug; not needed.

  [Root cause analysis or debug info]
  Reproducible bug.

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1531418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527145] Re: when port updated on one compute node, ipset in other compute nodes did not be refreshed

2015-12-23 Thread Hong Hui Xiao
*** This bug is a duplicate of bug 1448022 ***
https://bugs.launchpad.net/bugs/1448022

It is the same bug; marking them as duplicates is more appropriate.

** This bug has been marked a duplicate of bug 1448022
   update port IP, ipset member can't be updated in another host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527145

Title:
  when port updated on one compute node, ipset in other compute nodes
  did not be refreshed

Status in neutron:
  Fix Released

Bug description:
  I found this problem in the Kilo release, but I'm not sure if it still
  exists in the master branch.

  =================
  Reproduce steps:
  =================
  (Three compute nodes, OVS agent, security group with ipset enabled)
  1. Launch VM1(1.1.1.1) on Compute Node1 with default security group
  2. Launch VM2(1.1.1.2) on Compute Node2 with default security group
  3. Launch VM3(1.1.1.3) on Compute Node3 with default security group
  4. Change VM1's ip address to 1.1.1.10 and port-update add allowed address 
pair 1.1.1.10

  After these operations, I found that the ipset on Compute Node1 added
  member 1.1.1.10, but the ipsets on Compute Node2 and Compute Node3 did
  not, so pinging VM2 and VM3 from VM1 failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528235] [NEW] [REF]Add weight to l3 agent

2015-12-21 Thread Hong Hui Xiao
Public bug reported:

[Existing problem]
Currently, neutron treats all l3 agents the same. The default
LeastRoutersScheduler chooses an l3 agent based on the load of the l3
agents. But the hosts running the l3 agents may differ: some hosts may have
better CPUs, more memory and more network bandwidth. Admins/operators may
want the hosts with higher performance to take more load.

[Proposal]
Add a configuration option to l3_agent.ini to represent the weight of the
l3 agent. Admins/operators can set a higher weight on the l3 agents with
higher performance. An l3 agent with a higher weight will have a higher
chance of being selected by the L3 scheduler; a sketch of such weighted
selection follows below.

[Benefits]
Neutron can schedule better by leveraging the performance differences among
the l3 agents' hosts.

[What is the enhancement?]
Configuration file changes.
Code change in the L3 scheduler.
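
For illustration, a minimal sketch of the weighted random selection the
proposal implies (the weight values and the agent list are hypothetical; no
such config option exists yet):

  import random

  def pick_l3_agent(weighted_agents):
      # weighted_agents: list of (agent, weight) pairs; a higher weight
      # gives the agent a proportionally higher chance of selection.
      total = sum(weight for _, weight in weighted_agents)
      point = random.uniform(0, total)
      for agent, weight in weighted_agents:
          point -= weight
          if point <= 0:
              return agent
      return weighted_agents[-1][0]  # guard against float rounding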

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: rfe

** Tags added: rfe

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528235

Title:
  [REF]Add weight to l3 agent

Status in neutron:
  New

Bug description:
  [Existing problem]
  Currently, neutron treats all l3 agents the same. The default
  LeastRoutersScheduler chooses an l3 agent based on the load of the l3
  agents. But the hosts running the l3 agents may differ: some hosts may
  have better CPUs, more memory and more network bandwidth. Admins/operators
  may want the hosts with higher performance to take more load.

  [Proposal]
  Add a configuration option to l3_agent.ini to represent the weight of the
  l3 agent. Admins/operators can set a higher weight on the l3 agents with
  higher performance. An l3 agent with a higher weight will have a higher
  chance of being selected by the L3 scheduler.

  [Benefits]
  Neutron can schedule better by leveraging the performance differences
  among the l3 agents' hosts.

  [What is the enhancement?]
  Configuration file changes.
  Code change in the L3 scheduler.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527113] [NEW] Code duplication in neutron/api/v2attribute.py

2015-12-17 Thread Hong Hui Xiao
Public bug reported:

Found this when reviewing code.

neutron/api/v2/attributes.py validates lists of items, and there are
different functions to validate lists of different item types. The code is
mostly duplicated across [1-3], and more such functions can be expected in
the future.

[1] https://github.com/openstack/neutron/blob/b8d281a303ca12440aebb55895ebb192d25cecf8/neutron/api/v2/attributes.py#L121
[2] https://github.com/openstack/neutron/blob/b8d281a303ca12440aebb55895ebb192d25cecf8/neutron/api/v2/attributes.py#L350
[3] https://github.com/openstack/neutron/blob/b8d281a303ca12440aebb55895ebb192d25cecf8/neutron/api/v2/attributes.py#L411
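
A hedged sketch of the de-duplication idea: one generic list validator
parameterized by a per-item validator, instead of several near-identical
functions (names are illustrative, not the actual neutron API):

  def validate_list_of(item_validator):
      # Build a list validator from a per-item validator; returns an error
      # message string on failure and None on success, like the originals.
      def _validate(data, valid_values=None):
          if not isinstance(data, list):
              return "'%s' is not a list" % (data,)
          for item in data:
              msg = item_validator(item, valid_values)
              if msg:
                  return msg
      return _validate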

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527113

Title:
  Code duplication in neutron/api/v2attribute.py

Status in neutron:
  New

Bug description:
  Found this when reviewing code.

  neutron/api/v2/attributes.py validates lists of items, and there are
  different functions to validate lists of different item types. The code is
  mostly duplicated across [1-3], and more such functions can be expected in
  the future.

  [1] https://github.com/openstack/neutron/blob/b8d281a303ca12440aebb55895ebb192d25cecf8/neutron/api/v2/attributes.py#L121
  [2] https://github.com/openstack/neutron/blob/b8d281a303ca12440aebb55895ebb192d25cecf8/neutron/api/v2/attributes.py#L350
  [3] https://github.com/openstack/neutron/blob/b8d281a303ca12440aebb55895ebb192d25cecf8/neutron/api/v2/attributes.py#L411

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522677] [NEW] AZAwareWeightScheduler is not totally based on weight

2015-12-03 Thread Hong Hui Xiao
Public bug reported:

The AZ (availability zone) support for networks has been enabled with the
merging of [1]. I tried it in a local devstack with the latest code.

1) I deploy 3 dhcp-agents in 3 AZs (nova1, nova2, nova3).

2) Set dhcp_agent_per_network=1, do not set default_availability_zones,
and set network_scheduler_driver =
neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler

3) Create 10 networks without specifying availability_zone_hints.

All 10 networks go to nova1, which is not a reasonable result.

[1] https://review.openstack.org/#/c/204436/

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Description changed:

  The AZ(available zone) for network has been enable with the merging of
  [1]. I try in local devstack with latest code.
  
- 1) I deploy 3 dhcp-agent in 3 AZs (nova1, nova2, nova3). 
- 2) set the dhcp_agent_per_network=1, don't set default_availability_zones
+ 1) I deploy 3 dhcp-agent in 3 AZs (nova1, nova2, nova3).
+ 
+ 2) set the dhcp_agent_per_network=1, don't set
+ default_availability_zones, and set network_scheduler_driver =
+ neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler
+ 
  3) create 10 networks without specifying the availability_zone_hints
  
  10 networks all go to nova1. It is not a reasonable result.
  
  [1] https://review.openstack.org/#/c/204436/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522677

Title:
  AZAwareWeightScheduler is not totally based on weight

Status in neutron:
  New

Bug description:
  The AZ (availability zone) support for networks has been enabled with the
  merging of [1]. I tried it in a local devstack with the latest code.

  1) I deploy 3 dhcp-agents in 3 AZs (nova1, nova2, nova3).

  2) Set dhcp_agent_per_network=1, do not set default_availability_zones,
  and set network_scheduler_driver =
  neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler

  3) Create 10 networks without specifying availability_zone_hints.

  All 10 networks go to nova1, which is not a reasonable result.

  [1] https://review.openstack.org/#/c/204436/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520775] [NEW] Update the gateway of external net won't affect router

2015-11-28 Thread Hong Hui Xiao
Public bug reported:

I found this one while setting up a test environment.

Steps to reproduce:
1) I create an external network, an internal network and a router. The
external network has a gateway IP in its subnet.
2) Connect the external network, the internal network and the router, using
router-gateway-set and router-interface-add.
3) Then I realize my physical network doesn't have a gateway, so I update
the subnet of the external network with --no-gateway.
4) The default route is not deleted in the router namespace, even if I
restart the l3-agent.

Then I try another way:
a) The same as 1), except that the subnet of the external network has no
gateway IP at creation.
b) The same as 2).
c) I update the subnet of the external network with --gateway-ip AA:BB:CC:DD .
d) The default route is not added to the router namespace until I restart
the l3-agent.

I tried this with both a legacy router and DVR; both have this problem, and
I believe an HA router will have it as well.

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520775

Title:
  Update the gateway of external net won't affect router

Status in neutron:
  New

Bug description:
  I found this one while setting up a test environment.

  Steps to reproduce:
  1) I create an external network, an internal network and a router. The
  external network has a gateway IP in its subnet.
  2) Connect the external network, the internal network and the router,
  using router-gateway-set and router-interface-add.
  3) Then I realize my physical network doesn't have a gateway, so I update
  the subnet of the external network with --no-gateway.
  4) The default route is not deleted in the router namespace, even if I
  restart the l3-agent.

  Then I try another way:
  a) The same as 1), except that the subnet of the external network has no
  gateway IP at creation.
  b) The same as 2).
  c) I update the subnet of the external network with --gateway-ip AA:BB:CC:DD .
  d) The default route is not added to the router namespace until I restart
  the l3-agent.

  I tried this with both a legacy router and DVR; both have this problem,
  and I believe an HA router will have it as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1520775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518819] [NEW] Can't specify gateway when creating subnet with subnetpool

2015-11-22 Thread Hong Hui Xiao
Public bug reported:

When creating a subnet using a subnetpool (without specifying a CIDR), I
can't specify the gateway. Whether I use --no-gateway or --gateway, the
newly created subnet will have the first IP in the pool as the gateway IP.
It looks like the code doesn't consider the gateway IP at [1], and always
sets the first IP in the pool as the gateway IP in [2].

I found this when creating an external network, which indeed has no
gateway. The unnecessary gateway IP creates a meaningless default route in
the router.

[1] https://github.com/openstack/neutron/blob/master/neutron/ipam/requests.py#L290
[2] https://github.com/openstack/neutron/blob/master/neutron/ipam/subnet_alloc.py#L126

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518819

Title:
  Can't specify gateway when creating subnet with subnetpool

Status in neutron:
  New

Bug description:
  When creating a subnet using a subnetpool (without specifying a CIDR), I
  can't specify the gateway. Whether I use --no-gateway or --gateway, the
  newly created subnet will have the first IP in the pool as the gateway IP.
  It looks like the code doesn't consider the gateway IP at [1], and always
  sets the first IP in the pool as the gateway IP in [2].

  I found this when creating an external network, which indeed has no
  gateway. The unnecessary gateway IP creates a meaningless default route
  in the router.

  [1] https://github.com/openstack/neutron/blob/master/neutron/ipam/requests.py#L290
  [2] https://github.com/openstack/neutron/blob/master/neutron/ipam/subnet_alloc.py#L126

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1318528] Re: DHCP agent creates new instance of driver for each action

2015-11-15 Thread Hong Hui Xiao
I would set the status to opinion based on comment #15.

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1318528

Title:
  DHCP agent creates new instance of driver for each action

Status in neutron:
  Opinion

Bug description:
  Working on the rootwrap daemon [0] I've found out that the DHCP agent asks
  for root_helper too often. [1] shows a traceback for each place where
  get_root_helper is being called.

  It appeared that in [2] the DHCP agent creates an instance of the driver
  class for every single action it needs to run. That involves both lots of
  initialization code and the very expensive dynamic import_object routine
  being run.

  [2] shows that the only thing that changes between driver instances is
  the network. I suggest we make the network an argument for every action
  instead, to avoid expensive dynamic driver instantiation.

  Links:

  [0] https://review.openstack.org/84667
  [1] 
http://logs.openstack.org/67/84667/20/check/check-tempest-dsvm-neutron/3a7768e/logs/screen-q-dhcp.txt.gz?level=INFO
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp_agent.py#L122

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1318528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515209] [NEW] HA router can't associate an external network without gateway ip

2015-11-11 Thread Hong Hui Xiao
Public bug reported:

When associating an HA router with an external network without a gateway
IP, I get the following error:

2015-11-11 03:23:44.599 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '8114b0c3-85e6-4b71-ab74-a0c1437882cd'
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 503, in 
_process_router_update
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 444, in 
_process_router_if_compatible
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._process_added_router(router)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 452, in 
_process_added_router
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent ri.process(self)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 387, in process
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/common/utils.py", line 366, in call
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent self.logger(e)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 197, in __exit__
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/common/utils.py", line 363, in call
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent return func(*args, 
**kwargs)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 694, in process
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self.process_external(agent)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 660, in 
process_external
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._process_external_gateway(ex_gw_port, agent.pd)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 569, in 
_process_external_gateway
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self.external_gateway_added(ex_gw_port, interface_name)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 356, in 
external_gateway_added
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._add_gateway_vip(ex_gw_port, interface_name)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 249, in 
_add_gateway_vip
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._add_default_gw_virtual_route(ex_gw_port, interface_name)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 205, in 
_add_default_gw_virtual_route
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
instance.virtual_routes.gateway_routes = default_gw_rts
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent UnboundLocalError: local 
variable 'instance' referenced before assignment


For an HA router, the default gateway route should only be added when a
gateway IP is present.

https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L248
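
A hedged sketch of that guard (the method name follows the traceback; the
body and the helper are illustrative, not the actual fix):

  def _add_default_gw_virtual_route(self, ex_gw_port, interface_name):
      gateway_ips = self._get_gateway_ips(ex_gw_port)  # hypothetical helper
      if not gateway_ips:
          # No gateway IP: add no default route and never touch the
          # 'instance' variable the traceback shows as unbound.
          return
      # ... otherwise build default_gw_rts and assign them as before ...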

** Affects: neutron
 Importance: Undecided
     Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: l3-ha

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: l3-ha

** Description changed:

  When associate a HA router with an external network without gateway ip,
  I will get the following error:
  
- 2015-11-11 03:07:36.198 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '8114b0c3-85e6-4b71-ab74-a0c1437882cd'
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 503, in 
_process_router_update
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 446, in 
_process_router_if_compatible
- 2015-11-11 03:07:36.198 TRACE neutron.age

[Yahoo-eng-team] [Bug 1514144] [NEW] rpc should return empty list when no l3_plugin is present

2015-11-07 Thread Hong Hui Xiao
Public bug reported:

It is a trivial bug; it may be left over from a historical code change.

In [1], it should be [] instead of {}, according to the context.

[1] https://github.com/openstack/neutron/blob/master/neutron/api/rpc/handlers/l3_rpc.py#L77

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514144

Title:
  rpc should return empty list when no l3_plugin is present

Status in neutron:
  In Progress

Bug description:
  It is a trivial bug; it may be left over from a historical code change.

  In [1], it should be [] instead of {}, according to the context.

  [1] https://github.com/openstack/neutron/blob/master/neutron/api/rpc/handlers/l3_rpc.py#L77

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296953] Re: --disable-snat on tenant router raises 404

2015-10-28 Thread Hong Hui Xiao
*** This bug is a duplicate of bug 1352907 ***
https://bugs.launchpad.net/bugs/1352907

Thinking more about this bug: the inconsistency might be by design, judging
from tempest. A user is allowed to create a basic connection between a
router and an external network, while other operations are left to the
admin.

The original bug in the description is resolved by bug #1352907, so it is
marked as a duplicate.

** This bug has been marked a duplicate of bug 1352907
   response of normal user update the "shared" property of network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296953

Title:
  --disable-snat on tenant router raises 404

Status in neutron:
  In Progress

Bug description:
  arosen@arosen-desktop:~/devstack$ neutron router-create aaa
  nCreated a new router:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | distributed   | False|
  | external_gateway_info |  |
  | id| add4d46b-5036-4a96-af7e-8ceb44f9ab3d |
  | name  | aaa  |
  | routes|  |
  | status| ACTIVE   |
  | tenant_id | 4ec9de7eae7445719e8f67f2f9d78aae |
  +---+--+
  arosen@arosen-desktop:~/devstack$ neutron router-gateway-set --disable-snat  
aaa public 
  The resource could not be found.

  
  2014-03-24 14:06:12.444 DEBUG neutron.policy 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] Failed policy check for 'update_router' from 
(pid=7068) enforce /opt/stack/neutron/neutron/policy.py:381
  2014-03-24 14:06:12.444 ERROR neutron.api.v2.resource 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] update failed
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 494, in update
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPNotFound(msg)
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource HTTPNotFound: The 
resource could not be found.
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource 
  2014-03-24 14:06:12.445 INFO neutron.wsgi 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] 10.24.114.91 - - [24/Mar/2014 14:06:12] "PUT 
/v2.0/routers/add4d46b-5036-4a96-af7e-8ceb44f9ab3d.json HTTP/1.1" 404 248 
0.039626


  In the code we do:

  try:
      policy.enforce(request.context,
                     action,
                     orig_obj)
  except exceptions.PolicyNotAuthorized:
      # To avoid giving away information, pretend that it
      # doesn't exist
      msg = _('The resource could not be found.')
      raise webob.exc.HTTPNotFound(msg)


  it would be nice if we were smarter about this and raised "not
  authorized" instead of "not found".
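
  A hedged sketch of the suggested behavior (illustrative only, not a
  merged fix): keep hiding other tenants' resources, but answer the owner
  honestly.

  try:
      policy.enforce(request.context, action, orig_obj)
  except exceptions.PolicyNotAuthorized:
      if request.context.tenant_id != orig_obj.get('tenant_id'):
          # Another tenant's resource: keep pretending it doesn't exist.
          msg = _('The resource could not be found.')
          raise webob.exc.HTTPNotFound(msg)
      # The caller owns the resource but lacks the right: say so.
      raise webob.exc.HTTPForbidden(_('Not authorized to perform this '
                                      'action.'))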

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1296953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507950] [NEW] The metadata_proxy for a network will never be deleted even if it is not needed.

2015-10-20 Thread Hong Hui Xiao
Public bug reported:

Found this issue in a large-scale test. Steps to reproduce:
(1) I have about 1000 networks and initially set enable_isolated_metadata=True.
(2) But then I find there are too many metadata_proxy processes and I want
to disable them, so I set enable_isolated_metadata=False.
(3) Restart the dhcp-agent.
(4) The metadata_proxy processes are still there.

To eliminate the metadata_proxy processes for the networks, it looks like I
can only delete the networks or kill the metadata_proxy processes manually
(or just restart the host). And, obviously, I want to keep the networks.

neutron-dhcp-agent should try to kill the useless metadata_proxy processes
to keep the neutron host clean and with less burden.
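
A hedged, generic sketch of that cleanup (stdlib only; the pid-file layout
is illustrative, not the agent's actual state directory):

  import os
  import signal

  def kill_stale_metadata_proxies(pid_dir, networks_needing_proxy):
      # Terminate metadata proxies whose network no longer needs one.
      for fname in os.listdir(pid_dir):
          if not fname.endswith('.pid'):
              continue
          network_id = fname[:-len('.pid')]
          if network_id in networks_needing_proxy:
              continue
          with open(os.path.join(pid_dir, fname)) as f:
              pid = int(f.read().strip())
          os.kill(pid, signal.SIGTERM)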

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Description changed:

  Find this issue in a large scale test. Steps to reproduce:
- (1) I have about 1000 networks and set enable_isolated_metadata=True firstly. 
+ (1) I have about 1000 networks and set enable_isolated_metadata=True firstly.
  (2) But then I find the metadata_proxy process is too many, and I want to  
disable it. So I set the enable_isolated_metadata=False.
- (3) restart dhcp-agent 
+ (3) restart dhcp-agent
  (4) The metdata_proxy are still there.
  
- To eliminate the metadata_proxy process for networks, It looks like that
+ To eliminate the metadata_proxy process for networks, it looks like that
  I can delete the networks or kill the metadata_proxy process manually(or
- just restart the host.  And, obviously, I want to keep the networks.
+ just restart the host).  And, obviously, I want to keep the networks.
  
  neutron-dhcp-agent should try to kill the useless metadata_proxy to keep
  the neutron host clean and with less burden.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507950

Title:
  The metadata_proxy for a network will never be deleted even if it is
  not needed.

Status in neutron:
  New

Bug description:
  Found this issue in a large-scale test. Steps to reproduce:
  (1) I have about 1000 networks and initially set enable_isolated_metadata=True.
  (2) But then I find there are too many metadata_proxy processes and I want
  to disable them, so I set enable_isolated_metadata=False.
  (3) Restart the dhcp-agent.
  (4) The metadata_proxy processes are still there.

  To eliminate the metadata_proxy processes for the networks, it looks like
  I can only delete the networks or kill the metadata_proxy processes
  manually (or just restart the host). And, obviously, I want to keep the
  networks.

  neutron-dhcp-agent should try to kill the useless metadata_proxy
  processes to keep the neutron host clean and with less burden.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506934] [NEW] The exception type is wrong and makes the except block not work

2015-10-16 Thread Hong Hui Xiao
Public bug reported:

With many HA routers, I restarted the l3-agent and found this error in the
log:

2015-10-14 22:24:19.640 31246 INFO eventlet.wsgi.server [-] Traceback (most 
recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 442, in 
handle_one_response
result = self.application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 59, in 
__call__
self.enqueue(router_id, state)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 65, in 
enqueue
self.agent.enqueue_state_change(router_id, state)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 119, in 
enqueue_state_change
ri = self.router_info[router_id]
KeyError: 'aec00e20-ebe0-4979-858b-cb411dcd1bb6'

Checking the code, I find:

    def enqueue_state_change(self, router_id, state):
        LOG.info(_LI('Router %(router_id)s transitioned to %(state)s'),
                 {'router_id': router_id,
                  'state': state})

        try:
            ri = self.router_info[router_id]
        except AttributeError:
            LOG.info(_LI('Router %s is not managed by this agent. It was '
                         'possibly deleted concurrently.'), router_id)
            return

KeyError should be expected here, according to the context.
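
The fix the report implies is simply to catch the exception that a dict
lookup actually raises:

    try:
        ri = self.router_info[router_id]
    except KeyError:  # a dict lookup raises KeyError, not AttributeError
        LOG.info(_LI('Router %s is not managed by this agent. It was '
                     'possibly deleted concurrently.'), router_id)
        return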

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506934

Title:
  The exception type is wrong and makes the except block not work

Status in neutron:
  New

Bug description:
  With many HA routers, I restarted the l3-agent and found this error in
  the log:

  2015-10-14 22:24:19.640 31246 INFO eventlet.wsgi.server [-] Traceback (most 
recent call last):
File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 442, in 
handle_one_response
  result = self.application(self.environ, start_response)
File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  return self.func(req, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 59, in 
__call__
  self.enqueue(router_id, state)
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 65, in 
enqueue
  self.agent.enqueue_state_change(router_id, state)
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 119, 
in enqueue_state_change
  ri = self.router_info[router_id]
  KeyError: 'aec00e20-ebe0-4979-858b-cb411dcd1bb6'

  Checking the code, I find:

      def enqueue_state_change(self, router_id, state):
          LOG.info(_LI('Router %(router_id)s transitioned to %(state)s'),
                   {'router_id': router_id,
                    'state': state})

          try:
              ri = self.router_info[router_id]
          except AttributeError:
              LOG.info(_LI('Router %s is not managed by this agent. It was '
                           'possibly deleted concurrently.'), router_id)
              return

  KeyError should be expected here, according to the context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506289] [NEW] The first letter of error message should be capitalized

2015-10-14 Thread Hong Hui Xiao
Public bug reported:

While checking another problem I found this [1]; the first letter should be
capitalized for readability and consistency.


def _respawn_action(self, service_id):
    LOG.error(_LE("respawning %(service)s for uuid %(uuid)s"),
              {'service': service_id.service,
               'uuid': service_id.uuid})
    self._monitored_processes[service_id].enable()


[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L250
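
The corrected message would simply capitalize the first word, e.g.:

  LOG.error(_LE("Respawning %(service)s for uuid %(uuid)s"),
            {'service': service_id.service,
             'uuid': service_id.uuid})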

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506289

Title:
  The first letter of error message should be capitalized

Status in neutron:
  New

Bug description:
  While checking another problem I found this [1]; the first letter should
  be capitalized for readability and consistency.

  def _respawn_action(self, service_id):
      LOG.error(_LE("respawning %(service)s for uuid %(uuid)s"),
                {'service': service_id.service,
                 'uuid': service_id.uuid})
      self._monitored_processes[service_id].enable()

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L250

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498534] Re: The vms cant ping each other in different tenants(connected each other by the vpnaas) but the same openstack environment

2015-09-23 Thread Hong Hui Xiao
It looks like you have a problem using VPNaaS, not a bug.
You should look for a guide or other resources, for example [1].

[1] https://wiki.openstack.org/wiki/Neutron/VPNaaS

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498534

Title:
  The vms cant ping each other in different tenants(connected each other
  by the vpnaas) but the same openstack environment

Status in neutron:
  Invalid

Bug description:

  setup:

  OS: ubuntu 14.04 based Juno

  1 controller + 1 network node + 2 nova computer node + 1 docker node

   vm1 --- Router1 (tenant1) --- Router2 (tenant2) --+-- vm2
   10.4/24  10.1/24    42.4/26   42.5/26   20.1/24   |   20.4/24
                                                     |__ vm3
                                                         20.5/24
 
  Bring up one tunnel between two tenants in the same openstack environment
  based on Juno.
  The vm1 (10.1/24) could ping the router2 private network gw (20.1/24),
  but can't ping the vm2 (20.4/24).
  These two vms are located on different compute nodes.

  I tried to capture the packets and found that the ICMP requests can reach
  20.1/24, but when I capture packets on vm2, it gets nothing. No packets
  come into vm2.

  I also created another instance, vm3, in tenant2 with the same subnet as
  vm2. And vm2 could ping vm3.

  So the issue is that vm2 can receive the packets coming from vm3 but
  can't receive the packets from vm1 after the vpn tunnel is brought up.

  At last I tried to bring up a small OS, cirros, but the result was the
  same.

  debug:
  root@network2:/var/log/neutron# ip netns
  qdhcp-30c3e9f5-afde-4723-b396-7aa6f754be52
  qdhcp-afcf5acb-2e26-4353-9cbe-0ab81a2354be
  qrouter-7ec6eb64-3ff8-4242-a2dd-a2076a1cdcf9
  qrouter-0f9e22b4-30f4-4f7d-8cd1-595f116a0e2e


  
  root@network2:/var/log/neutron# ip netns exec 
qrouter-7ec6eb64-3ff8-4242-a2dd-a2076a1cdcf9 ifconfig
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  qg-b33c0f49-01 Link encap:Ethernet  HWaddr fa:16:3e:0a:c1:4d  
inet addr:10.130.42.5  Bcast:10.130.42.63  Mask:255.255.255.192
inet6 addr: fe80::f816:3eff:fe0a:c14d/64 Scope:Link
UP BROADCAST RUNNING  MTU:1500  Metric:1
RX packets:1798 errors:0 dropped:0 overruns:0 frame:0
TX packets:487 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:155100 (155.1 KB)  TX bytes:53918 (53.9 KB)

  qr-01274858-78 Link encap:Ethernet  HWaddr fa:16:3e:72:7b:38  
inet addr:20.20.1.1  Bcast:20.20.1.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe72:7b38/64 Scope:Link
UP BROADCAST RUNNING  MTU:1500  Metric:1
RX packets:120 errors:0 dropped:0 overruns:0 frame:0
TX packets:218 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:11004 (11.0 KB)  TX bytes:20152 (20.1 KB)

  
  root@network2:/var/log/neutron# 

  root@network2:/var/log/neutron# ip netns exec 
qrouter-7ec6eb64-3ff8-4242-a2dd-a2076a1cdcf9 tcpdump -i qr-01274858-78
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on qr-01274858-78, link-type EN10MB (Ethernet), capture size 65535 
bytes
  ^C09:13:34.748825 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1665, length 64
  09:13:35.748875 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1666, length 64
  09:13:36.748796 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1667, length 64
  09:13:37.748839 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1668, length 64
  09:13:38.748762 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1669, length 64
  09:13:39.748789 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1670, length 64 >> the traffic could go to the private network gw in router.
  
  
  root@network2:/var/log/neutron# ip netns exec 
qdhcp-afcf5acb-2e26-4353-9cbe-0ab81a2354be ifconfig
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX 

[Yahoo-eng-team] [Bug 1493270] Re: The dependency of pbr in ryu does not match neutron

2015-09-16 Thread Hong Hui Xiao
Verified that ryu has updated its pbr dependency with this patch:
https://github.com/osrg/ryu/commit/992bf7318d06090cc96a21521dbeba62f148d079

Closing this bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493270

Title:
  The dependency of pbr in ryu does not match neutron

Status in neutron:
  Fix Released

Bug description:
  I want to use neutron with the latest code. In [1], ryu was added as a
  dependency for neutron. However, when I tried to install ryu, I got
  this error.

  [root@test]# pip install ryu
  Downloading/unpacking ryu
Downloading ryu-3.25.tar.gz (1.3MB): 1.3MB downloaded
Running setup.py egg_info for package ryu
  Traceback (most recent call last):
File "", line 16, in 
File "/tmp/pip_build_root/ryu/setup.py", line 30, in 
  pbr=True)
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
  _setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, 
in __init__
  self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, 
in fetch_build_eggs
  parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 630, in 
resolve
  raise VersionConflict(dist,req) # XXX put more info here
  pkg_resources.VersionConflict: (pbr 1.6.0 
(/usr/lib/python2.7/site-packages), Requirement.parse('pbr<1.0'))

  And I can see from [2] that ryu requires pbr < 1.0.
  But in my env, a newer version of pbr was installed, according to [3].


  [1] https://review.openstack.org/#/c/153946/136/requirements.txt
  [2] https://github.com/osrg/ryu/blob/master/setup.py#L29
  [3] https://github.com/openstack/neutron/blob/master/requirements.txt#L4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495742] Re: [Neutron][Improvement]Neutron can ask user to see the help file in case the user passes wrong arguments in CLI

2015-09-14 Thread Hong Hui Xiao
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495742

Title:
  [Neutron][Improvement]Neutron can ask user to see the help file in
  case the user passes wrong arguments in CLI

Status in python-neutronclient:
  New

Bug description:
  OpenStack CLIs support displaying a help file for all components,
  including neutron.
  However, for a new user/developer, knowing how to find that help is
  important.

  When we pass incorrect attributes to a nova client, we get the following 
output:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ nova agent-delete
  usage: nova agent-delete 
  error: too few arguments
  Try 'nova help agent-delete' for more information.
  ##

  The last line gives useful information to the new user/developer about
  how to find out more.
  Something like this could be added to the neutron client as well.

  Current output of neutron client:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
  usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
 [-c COLUMN] [--max-width ]
 [--prefix PREFIX] [--request-format {json,xml}]
 [--tenant-id TENANT_ID] [--name NAME]
 [--description DESCRIPTION] [--shared]
 [--admin-state-down] [--router ROUTER]
 POLICY
  neutron firewall-create: error: too few arguments
  ##

  Expected output:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
  usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
 [-c COLUMN] [--max-width ]
 [--prefix PREFIX] [--request-format {json,xml}]
 [--tenant-id TENANT_ID] [--name NAME]
 [--description DESCRIPTION] [--shared]
 [--admin-state-down] [--router ROUTER]
 POLICY
  neutron firewall-create: error: too few arguments
  Try 'neutron help firewall-create' for more information.
  ##

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1495742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495430] [NEW] delete lbaasv2 can't delete lbaas namespace automatically.

2015-09-14 Thread Hong Hui Xiao
Public bug reported:

Tried lbaas v2 in my env and found lots of orphan lbaas namespaces. Looking
back at the code, I find that the lbaas instance is undeployed when a
listener is deleted; everything is deleted except the namespace.
However, in the method that deletes a loadbalancer, the namespace is
deleted automatically.
The behavior is not consistent; the namespace should be deleted when
deleting a listener too.
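
Until that is fixed, the leftover namespaces can be removed by hand; the
qlbaas- prefix below is illustrative of how the agent names them:

  $ ip netns list | grep qlbaas
  $ ip netns delete qlbaas-<loadbalancer-id>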

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Description changed:

  Try the lbaas v2 in my env and found lots of orphan lbaas namespace. Look 
back to the code and find that lbaas instance will be undelployed, when delete 
listener. All things are deleted except the namespace.
  However, from the method of deleting loadbalancer, the namespace will be 
deleted automatically.
+ The behavior is not consistent, namespace should be deleted from deleting 
listener too.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495430

Title:
  delete lbaasv2 can't delete lbaas namespace automatically.

Status in neutron:
  New

Bug description:
  Tried lbaas v2 in my env and found lots of orphan lbaas namespaces.
  Looking back at the code, I find that the lbaas instance is undeployed
  when a listener is deleted; everything is deleted except the namespace.
  However, in the method that deletes a loadbalancer, the namespace is
  deleted automatically.
  The behavior is not consistent; the namespace should be deleted when
  deleting a listener too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494647] Re: attaching a duplicated network to a server succeed,but creating a server with duplicated networks fail

2015-09-12 Thread Hong Hui Xiao
I have verified that the Kilo code can't create a server with a duplicated
network, but I also verified that this problem is not present in Liberty's
latest code.
Also, this is a nova constraint, not a neutron one, so I will close this
as invalid.

Besides, I am wondering what the actual use case is for one server having
2 (or more) NICs in one network.

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494647

Title:
  attaching a duplicated network to a server succeed,but creating a
  server with duplicated networks fail

Status in neutron:
  Invalid

Bug description:
  kilo
  1. Creating a server with duplicated networks fails:

  [root@host]# neutron net-list
  | 6185c860-0a91-4139-a0a0-5a5bb94aebd8 | net_physnet1_2 | 649f0110-a6f9-4cca-a874-6933cabac779 102.1.1.0/24 |

  [root@host]# nova boot --flavor m1.tiny --image 49a15651-f664-4603-8548-772a25cd5c8b --nic net-id=6185c860-0a91-4139-a0a0-5a5bb94aebd8 --nic net-id=6185c860-0a91-4139-a0a0-5a5bb94aebd8 server1
  ERROR (BadRequest): Network 6185c860-0a91-4139-a0a0-5a5bb94aebd8 is duplicated. (HTTP 400) (Request-ID: req-6a161abf-e996-4f60-8304-ec566d803516)

  2. Creating a server with a single network succeeds:

  [root@host]# nova boot --flavor m1.tiny --image 49a15651-f664-4603-8548-772a25cd5c8b --nic net-id=6185c860-0a91-4139-a0a0-5a5bb94aebd8 server1
  +--------------------------------------+------------------------------------------------------------------+
  | Property                             | Value                                                            |
  +--------------------------------------+------------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                           |
  | OS-EXT-AZ:availability_zone          | nova                                                             |
  | OS-EXT-SRV-ATTR:host                 | -                                                                |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                |
  | OS-EXT-SRV-ATTR:instance_name        | instance-003c                                                    |
  | OS-EXT-STS:power_state               | 0                                                                |
  | OS-EXT-STS:task_state                | scheduling                                                       |
  | OS-EXT-STS:vm_state                  | building                                                         |
  | OS-SRV-USG:launched_at               | -                                                                |
  | OS-SRV-USG:terminated_at             | -                                                                |
  | accessIPv4                           |                                                                  |
  | accessIPv6                           |                                                                  |
  | adminPass                            | b93q9Yu2dwtC                                                     |
  | config_drive                         |                                                                  |
  | created                              | 2015-09-11T07:59:01Z                                             |
  | flavor                               | m1.tiny (1)                                                      |
  | hostId                               |                                                                  |
  | id                                   | 2bf1c83f-382c-4d37-a3a2-538128436f17                             |
  | image                                | cirros-0.3.0-x86_64-disk (49a15651-f664-4603-8548-772a25cd5c8b)  |
  | key_name                             | -                                                                |
  | metadata                             | {}                                                               |
  | name                                 | server1                                                          |
  | os-extended-volumes:volumes_attached | []                                                               |
  | progress                             | 0                                                                |
  | security_groups                      | default                                                          |
  | status                               | BUILD                                                            |
  | tenant_id                            | ef2eeb52ef0b472f823b622737874523                                 |
  | updated                              | 2015-09-11T07:59:01Z                                             |
  | u

[Yahoo-eng-team] [Bug 1493270] [NEW] The dependency of pbr in ryu does not match neutron

2015-09-08 Thread Hong Hui Xiao
Public bug reported:

I want to use neutron with the latest code. In [1], ryu was added as a
dependency for neutron. However, when I tried to install ryu, I got this
error.

[root@test]# pip install ryu
Downloading/unpacking ryu
  Downloading ryu-3.25.tar.gz (1.3MB): 1.3MB downloaded
  Running setup.py egg_info for package ryu
Traceback (most recent call last):
  File "", line 16, in 
  File "/tmp/pip_build_root/ryu/setup.py", line 30, in 
pbr=True)
  File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
  File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in 
__init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in 
fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 630, in 
resolve
raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (pbr 1.6.0 
(/usr/lib/python2.7/site-packages), Requirement.parse('pbr<1.0'))

And I can see from [2] that ryu requires pbr < 1.0.
But in my env, a newer version of pbr was installed, according to [3].


[1] https://review.openstack.org/#/c/153946/136/requirements.txt
[2] https://github.com/osrg/ryu/blob/master/setup.py#L29
[3] https://github.com/openstack/neutron/blob/master/requirements.txt#L4

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493270

Title:
  The dependency of pbr in ryu does not match neutron

Status in neutron:
  New

Bug description:
  I want to use neutron with the latest code. In [1], ryu was added as a
  dependency for neutron. However, when I tried to install ryu, I got
  this error.

  [root@test]# pip install ryu
  Downloading/unpacking ryu
Downloading ryu-3.25.tar.gz (1.3MB): 1.3MB downloaded
Running setup.py egg_info for package ryu
  Traceback (most recent call last):
File "", line 16, in 
File "/tmp/pip_build_root/ryu/setup.py", line 30, in 
  pbr=True)
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
  _setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, 
in __init__
  self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, 
in fetch_build_eggs
  parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 630, in 
resolve
  raise VersionConflict(dist,req) # XXX put more info here
  pkg_resources.VersionConflict: (pbr 1.6.0 
(/usr/lib/python2.7/site-packages), Requirement.parse('pbr<1.0'))

  And I can see from [2] that ryu requires pbr < 1.0.
  But in my env, a newer version of pbr was installed, according to [3].


  [1] https://review.openstack.org/#/c/153946/136/requirements.txt
  [2] https://github.com/osrg/ryu/blob/master/setup.py#L29
  [3] https://github.com/openstack/neutron/blob/master/requirements.txt#L4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444753] Re: Tenant_Id needs valiation for Neutron-LBaas-API

2015-04-15 Thread Hong Hui Xiao
I happen to have done some investigation into this kind of issue. Judging
from the bug history, this issue will be closed as invalid: tenant_id can
only be passed in by an admin, and neutron assumes the admin knows what he
is doing, allowing the admin to do anything.

Pls see the following bugs as reference.

https://bugs.launchpad.net/neutron/+bug/1200585
https://bugs.launchpad.net/neutron/+bug/1185206
https://bugs.launchpad.net/neutron/+bug/1067620
https://bugs.launchpad.net/neutron/+bug/1290408
https://bugs.launchpad.net/neutron/+bug/1338885
https://bugs.launchpad.net/neutron/+bug/1398992
https://bugs.launchpad.net/neutron/+bug/1440705
https://bugs.launchpad.net/neutron/+bug/1441373
https://bugs.launchpad.net/neutron/+bug/1440700

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444753

Title:
  Tenant_Id needs valiation for Neutron-LBaas-API

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Based on the description in the following link
  https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Create_a_Health_Monitor

  tenant_id: only required if the caller has an admin role and wants to
  create a Health Monitor for another tenant.

  While running the tempest admin API tests, we found that when we pass
  an empty tenant id or an invalid tenant id, we are still able to
  create services like health monitors, etc. We believe that this is a
  bug: since tenant_id is a UUID, it should not be invalid or empty.
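
  A minimal sketch of the kind of check being asked for (hypothetical
  code, not actual Neutron validation logic):

    import uuid

    def is_valid_tenant_id(tenant_id):
        # Reject empty values and anything that does not parse as a UUID.
        if not tenant_id:
            return False
        try:
            uuid.UUID(tenant_id)
        except (AttributeError, TypeError, ValueError):
            return False
        return True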

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404471] [NEW] Can't associate a health monitor for a neutron load balance pool

2014-12-20 Thread Hong Hui Xiao
Public bug reported:

I create a load balance pool and a health monitor from horizon, then I
try to associate the health monitor with the pool. After clicking the
Associate Monitor button, I can see the pop-up dialog, but in the
Select control of the dialog I can't select the newly created health
monitor.

If I do the same operation from the CLI, everything works fine.
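
For comparison, the association that succeeds from the CLI can also be
driven with python-neutronclient, roughly as below (endpoint, token and
IDs are placeholders; this is only a sketch of the LBaaS v1 call):

  from neutronclient.v2_0 import client

  # Placeholder credentials; use real auth in practice.
  neutron = client.Client(endpoint_url='http://controller:9696',
                          token='ADMIN_TOKEN')

  # Associate an existing health monitor with a pool.
  neutron.associate_health_monitor(
      'POOL_ID', {'health_monitor': {'id': 'MONITOR_ID'}})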

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1404471

Title:
  Can't associate a health monitor for a neutron load balance pool

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I create a load balance pool and a health monitor from horizon, then I
  try to associate the health monitor with the pool. After clicking the
  Associate Monitor button, I can see the pop-up dialog, but in the
  Select control of the dialog I can't select the newly created health
  monitor.

  If I do the same operation from the CLI, everything works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1404471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389246] [NEW] port/subnet tenant id and network tenant id could be inconsistent

2014-11-04 Thread Hong Hui Xiao
Public bug reported:

Steps to reproduce:
1) Create a network with tenant id A.

2)
2.1) Create a port/subnet in that network, specifying tenant B as the
--tenant-id.
OR
2.2) Authenticate as tenant B, then create a port/subnet in that
network without specifying a tenant id.

Problem:
Now we have a network with a port/subnet in it, but they do not belong
to the same tenant.

I checked the neutron code and found that there is no tenant
restriction when creating a port/subnet. If a network has ports/subnets
in it, they should all belong to one tenant (see the sketch below).
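
A rough sketch of the reproduction with python-neutronclient (endpoint,
token and tenant IDs are placeholders):

  from neutronclient.v2_0 import client

  # Placeholder admin credentials; use real auth in practice.
  neutron = client.Client(endpoint_url='http://controller:9696',
                          token='ADMIN_TOKEN')

  # 1) Network owned by tenant A.
  net = neutron.create_network(
      {'network': {'name': 'demo-net', 'tenant_id': 'TENANT_A'}})

  # 2.1) Subnet in that network, but owned by tenant B -- accepted today.
  neutron.create_subnet(
      {'subnet': {'network_id': net['network']['id'],
                  'cidr': '10.0.0.0/24',
                  'ip_version': 4,
                  'tenant_id': 'TENANT_B'}})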

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389246

Title:
  port/subnet tenant id and network tenant id could be inconsistent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Steps to reproduce:
  1) Create a network with tenant id A.

  2)
  2.1) Create a port/subnet in that network, specifying tenant B as the
  --tenant-id.
  OR
  2.2) Authenticate as tenant B, then create a port/subnet in that
  network without specifying a tenant id.

  Problem:
  Now we have a network with a port/subnet in it, but they do not belong
  to the same tenant.

  I checked the neutron code and found that there is no tenant
  restriction when creating a port/subnet. If a network has
  ports/subnets in it, they should all belong to one tenant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp