[Yahoo-eng-team] [Bug 1738337] [NEW] [neutron-db] neutron-server report error "UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 were matched'

2017-12-14 Thread Zachary Ma
Public bug reported:

2017-09-19 17:10:15.211 10600 ERROR oslo_db.api [req-3a38738f-efbf-45b0-ae65-3af84f6aae2c - - - - -] DB exceeded retry limit.: StaleDataError: UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 were matched.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 128, in wrapped
    LOG.debug("Retry wrapper got retriable exception: %s", e)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in wrapped
    return f(*dup_args, **dup_kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1702, in update_port_statuses
    context, port_dbs_by_id[port_id], status, host)
  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1714, in _safe_update_individual_port_db_status
    ectx.reraise = bool(db.get_port(context, port_id))
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1710, in _safe_update_individual_port_db_status
    context, port, status, host)
  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1769, in _update_individual_port_db_status
    levels = db.get_binding_levels(context, port_id, host)
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 979, in wrapper
    return fn(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/db.py", line 100, in get_binding_levels
    order_by(models.PortBindingLevel.level).
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2703, in all
    return list(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2854, in __iter__
    self.session._autoflush()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1397, in _autoflush
    self.flush()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2171, in flush
    self._flush(objects)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2291, in _flush
    transaction.rollback(_capture_exception=True)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value,
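The failure is the optimistic-locking pattern used for revision numbers: an UPDATE guarded by the previously read revision matches zero rows when another worker has already bumped it. A minimal sketch of that pattern (sqlite3 stand-in, not Neutron's actual code; `bump_revision` and the RuntimeError are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE standardattributes "
             "(id INTEGER PRIMARY KEY, revision_number INTEGER)")
conn.execute("INSERT INTO standardattributes VALUES (1, 5)")

def bump_revision(conn, attr_id, expected_revision):
    """UPDATE guarded by the revision read earlier; 0 matched rows == stale."""
    cur = conn.execute(
        "UPDATE standardattributes SET revision_number = revision_number + 1 "
        "WHERE id = ? AND revision_number = ?",
        (attr_id, expected_revision))
    if cur.rowcount == 0:
        # SQLAlchemy raises StaleDataError in this situation
        raise RuntimeError("expected to update 1 row(s); 0 were matched")
    return cur.rowcount

print(bump_revision(conn, 1, 5))  # -> 1, revision is now 6
try:
    bump_revision(conn, 1, 5)     # stale expectation: revision is already 6
except RuntimeError as exc:
    print(exc)
```

A concurrent writer produces exactly the second case, which is retriable; the log shows the retry wrapper giving up only after the retry limit.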

[Yahoo-eng-team] [Bug 1728479] Re: some security-group rules will be covered.

2017-11-22 Thread Zachary Ma
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728479

Title:
  some security-group rules will be covered.

Status in neutron:
  Fix Released

Bug description:
  1. Create security groups anquanzu01 and anquanzu02.
  2. Create vm1 with anquanzu01 and anquanzu02; create vm2 with anquanzu02.
  3. vm1 can ping vm2, but vm2 cannot ping vm1.

  anquanzu01, anquanzu02 are as follows:
   
  [root@172e18e211e96 ~]# neutron security-group-show anquanzu01
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

  created_at           : 2017-10-19T04:14:01Z
  description          :
  id                   : b089348a-f939-43f8-bdd2-d7b54376f640
  name                 : anquanzu01
  project_id           : 2acab64182334292a9bf5f3cdd5b3428
  revision_number      : 6
  security_group_rules :
    {"remote_group_id": null, "direction": "ingress", "protocol": "icmp",
     "description": "", "tags": [], "ethertype": "IPv4",
     "remote_ip_prefix": "0.0.0.0/0", "port_range_min": null,
     "port_range_max": null, "revision_number": 0,
     "created_at": "2017-10-19T04:26:01Z", "updated_at": "2017-10-19T04:26:01Z",
     "security_group_id": "b089348a-f939-43f8-bdd2-d7b54376f640",
     "tenant_id": "2acab64182334292a9bf5f3cdd5b3428",
     "project_id": "2acab64182334292a9bf5f3cdd5b3428",
     "id": "1b7a4a06-e762-487a-9776-0d9d781f537c"}
    {"remote_group_id": null, "direction": "egress", "protocol": null,
     "description": null, "tags": [], "ethertype": "IPv6",
     "remote_ip_prefix": null, "port_range_min": null,
     "port_range_max": null, "revision_number": 0,
     "created_at": "2017-10-19T04:14:01Z", "updated_at": "2017-10-19T04:14:01Z",
     "security_group_id": "b089348a-f939-43f8-bdd2-d7b54376f640",
     "tenant_id": "2acab64182334292a9bf5f3cdd5b3428",
     "project_id": "2acab64182334292a9bf5f3cdd5b3428",
     "id": "2e605e9b-9be1-4dd3-a86b-af7b95c476fb"}

[Yahoo-eng-team] [Bug 1730896] [NEW] [qos] some wrong in config qos doc

2017-11-07 Thread Zachary Ma
Public bug reported:

1. Inaccurate info is displayed when a QoS policy, rule, etc. is created.
2. There is a wrong command to update the security-group rules:
$ openstack network qos rule set --max-kbps 2000 --max-burst-kbps 200 \
--ingress bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730896

Title:
  [qos] some wrong in config qos doc

Status in neutron:
  New

Bug description:
  1. Inaccurate info is displayed when a QoS policy, rule, etc. is created.
  2. There is a wrong command to update the security-group rules:
  $ openstack network qos rule set --max-kbps 2000 --max-burst-kbps 200 \
  --ingress bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730605] [NEW] neutron qos bindlimit by ovs is not accurate

2017-11-07 Thread Zachary Ma
Public bug reported:

1. openstack version: pike
2. neutron --version 6.5.0
3. ovs-vsctl (Open vSwitch) 2.7.2
4. iperf3-3.1.3-1.fc24.x86_64.rpm

egress bw-limiter is never accurate

ingress bw-limiter is also not accurate,
but with OVS 2.5, ingress is accurate!
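The report gives no numbers, but one structural reason a bandwidth limiter reads above its nominal rate over a short run is the burst allowance, which is delivered on top of the steady rate. A toy model (figures illustrative, not taken from this report; it explains only small overshoots, not grossly inaccurate limiting):

```python
def mean_rate_kbps(rate_kbps, burst_kbits, seconds):
    # steady-state tokens plus the one-off burst allowance, averaged over the run
    return (rate_kbps * seconds + burst_kbits) / seconds

# a 5-second iperf3 run against a 2000 kbps limiter with a 200 kbit burst
print(mean_rate_kbps(2000, 200, 5))   # -> 2040.0, i.e. 2% over nominal
# the longer the run, the closer the measurement gets to nominal
print(mean_rate_kbps(2000, 200, 50))  # -> 2004.0
```

Deviations much larger than this, or differing between OVS 2.5 and 2.7 as reported here, point at a real limiter bug rather than measurement error.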

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730605

Title:
  neutron qos bindlimit by ovs is not accurate

Status in neutron:
  New

Bug description:
  1. openstack version: pike
  2. neutron --version 6.5.0
  3. ovs-vsctl (Open vSwitch) 2.7.2
  4. iperf3-3.1.3-1.fc24.x86_64.rpm

  egress bw-limiter is never accurate

  ingress bw-limiter is also not accurate,
  but with OVS 2.5, ingress is accurate!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730605/+subscriptions



[Yahoo-eng-team] [Bug 1728495] [NEW] When the remote security group and the remote IP are configured at the same time, the remote IP rule is ignored.

2017-10-29 Thread Zachary Ma
Public bug reported:

1. With only the remote IP configured, the flow table is as follows:
table=82, priority=70,ct_state=+est-rel-rpl,ip,reg5=0x38 actions=output:56
table=82, priority=70,ct_state=+new-est,ip,reg5=0x38 actions=ct(commit,zone=NXM_NX_REG6[0..15]),output:56

2. With both the remote security group and the remote IP configured, the flow table is as follows:
table=82, priority=70,ct_state=+new-est,ip,reg6=0xd,nw_src=10.10.10.13 actions=conjunction(7,1/2),conjunction(11,1/2)
table=82, priority=70,ct_state=+est-rel-rpl,ip,reg6=0xd,nw_src=10.10.10.13 actions=conjunction(6,1/2),conjunction(10,1/2)
table=82, priority=70,ct_state=+est-rel-rpl,ip,reg5=0x38 actions=conjunction(10,2/2)
table=82, priority=70,ct_state=+new-est,ip,reg5=0x38 actions=conjunction(11,2/2)
table=82, priority=70,conj_id=10,ct_state=+est-rel-rpl,ip,reg5=0x38 actions=output:56
table=82, priority=70,conj_id=11,ct_state=+new-est,ip,reg5=0x38 actions=ct(commit,zone=NXM_NX_REG6[0..15]),output:56

The rules for remote IP are ignored!
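A plausible mechanism (our reading, not a confirmed root cause): in OpenFlow, adding a flow with the same match and priority as an existing one replaces its actions, so the standalone remote-IP accept flow is overwritten by the conjunction half. A dict-based toy model of that replacement:

```python
flow_table = {}

def add_flow(match, priority, actions):
    # OpenFlow semantics: same match + priority -> new actions replace the old
    flow_table[(match, priority)] = actions

# the remote-IP-only rule installs a plain accept flow
add_flow("ct_state=+est-rel-rpl,ip,reg5=0x38", 70, "output:56")
# the remote-group rule later installs a conjunction half with the same match
add_flow("ct_state=+est-rel-rpl,ip,reg5=0x38", 70, "conjunction(10,2/2)")

print(flow_table[("ct_state=+est-rel-rpl,ip,reg5=0x38", 70)])
# -> conjunction(10,2/2): the plain accept action is gone, so traffic that
#    matches only the remote-IP rule no longer reaches output:56
```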

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728495

Title:
  When the remote security group and the remote IP are configured at the
  same time, the remote IP rule is ignored.

Status in neutron:
  New

Bug description:
  1. With only the remote IP configured, the flow table is as follows:
  table=82, priority=70,ct_state=+est-rel-rpl,ip,reg5=0x38 actions=output:56
  table=82, priority=70,ct_state=+new-est,ip,reg5=0x38 actions=ct(commit,zone=NXM_NX_REG6[0..15]),output:56

  2. With both the remote security group and the remote IP configured, the flow table is as follows:
  table=82, priority=70,ct_state=+new-est,ip,reg6=0xd,nw_src=10.10.10.13 actions=conjunction(7,1/2),conjunction(11,1/2)
  table=82, priority=70,ct_state=+est-rel-rpl,ip,reg6=0xd,nw_src=10.10.10.13 actions=conjunction(6,1/2),conjunction(10,1/2)
  table=82, priority=70,ct_state=+est-rel-rpl,ip,reg5=0x38 actions=conjunction(10,2/2)
  table=82, priority=70,ct_state=+new-est,ip,reg5=0x38 actions=conjunction(11,2/2)
  table=82, priority=70,conj_id=10,ct_state=+est-rel-rpl,ip,reg5=0x38 actions=output:56
  table=82, priority=70,conj_id=11,ct_state=+new-est,ip,reg5=0x38 actions=ct(commit,zone=NXM_NX_REG6[0..15]),output:56

  The rules for remote IP are ignored!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1728495/+subscriptions



[Yahoo-eng-team] [Bug 1728479] [NEW] some security-group rules will be covered.

2017-10-29 Thread Zachary Ma
Public bug reported:

1. Create security groups anquanzu01 and anquanzu02.
2. Create vm1 with anquanzu01 and anquanzu02; create vm2 with anquanzu02.
3. vm1 can ping vm2, but vm2 cannot ping vm1.

anquanzu01, anquanzu02 are as follows:
 
[root@172e18e211e96 ~]# neutron security-group-show anquanzu01
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

created_at           : 2017-10-19T04:14:01Z
description          :
id                   : b089348a-f939-43f8-bdd2-d7b54376f640
name                 : anquanzu01
project_id           : 2acab64182334292a9bf5f3cdd5b3428
revision_number      : 6
security_group_rules :
  {"remote_group_id": null, "direction": "ingress", "protocol": "icmp",
   "description": "", "tags": [], "ethertype": "IPv4",
   "remote_ip_prefix": "0.0.0.0/0", "port_range_min": null,
   "port_range_max": null, "revision_number": 0,
   "created_at": "2017-10-19T04:26:01Z", "updated_at": "2017-10-19T04:26:01Z",
   "security_group_id": "b089348a-f939-43f8-bdd2-d7b54376f640",
   "tenant_id": "2acab64182334292a9bf5f3cdd5b3428",
   "project_id": "2acab64182334292a9bf5f3cdd5b3428",
   "id": "1b7a4a06-e762-487a-9776-0d9d781f537c"}
  {"remote_group_id": null, "direction": "egress", "protocol": null,
   "description": null, "tags": [], "ethertype": "IPv6",
   "remote_ip_prefix": null, "port_range_min": null,
   "port_range_max": null, "revision_number": 0,
   "created_at": "2017-10-19T04:14:01Z", "updated_at": "2017-10-19T04:14:01Z",
   "security_group_id": "b089348a-f939-43f8-bdd2-d7b54376f640",
   "tenant_id": "2acab64182334292a9bf5f3cdd5b3428",
   "project_id": "2acab64182334292a9bf5f3cdd5b3428",
   "id": "2e605e9b-9be1-4dd3-a86b-af7b95c476fb"}
  {"remote_group_id": "b089348a-f939-43f8-bdd2-d7b54376f640",
   "direction": "ingress", "protocol": null,

[Yahoo-eng-team] [Bug 1727132] [NEW] the example of "detach a port from the QoS policy" is wrong in config qos doc

2017-10-24 Thread Zachary Ma
Public bug reported:

In order to detach a port from the QoS policy, simply update the port
configuration again.
$ openstack port unset --no-qos-policy 88101e57-76fa-4d12-b0e0-4fc7634b874a
Updated port: 88101e57-76fa-4d12-b0e0-4fc7634b874a

This should be modified to:
$ openstack port unset --qos-policy 88101e57-76fa-4d12-b0e0-4fc7634b874a

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727132

Title:
  the example of "detach a port from the QoS policy" is wrong in config
  qos doc

Status in neutron:
  New

Bug description:
  In order to detach a port from the QoS policy, simply update the port
  configuration again.
  $ openstack port unset --no-qos-policy 88101e57-76fa-4d12-b0e0-4fc7634b874a
  Updated port: 88101e57-76fa-4d12-b0e0-4fc7634b874a

  This should be modified to:
  $ openstack port unset --qos-policy 88101e57-76fa-4d12-b0e0-4fc7634b874a

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1727132/+subscriptions



[Yahoo-eng-team] [Bug 1726732] [NEW] [qos]set running vms ingress-bandwidth-limit, then delete running vm, but ovs queue is still residual

2017-10-24 Thread Zachary Ma
Public bug reported:

1. Create a VM:
[root@172e18e211e96 ~]# nova interface-list mzx4
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+
| ACTIVE     | f4ae0545-ca86-40c5-b2f7-064e521f13db | 1464fb8c-3879-4e2a-9833-3aa0882285d5 | 5.5.5.11     | fa:16:3e:57:cb:ba |
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+

2. Set the policy on the port:
[root@172e18e211e96 ~]# openstack port set --qos-policy bw-limiter f4ae0545-ca86-40c5-b2f7-064e521f13db

and check OVS with:
[root@172e18e211e9 ~]# ovs-vsctl list queue
_uuid   : 1e4f2c2d-4116-4484-8eb6-5b13d9de649f
dscp: []
external_ids: {id="tapf4ae0545-ca", queue_type="0"}
other_config: {burst="1000", max-rate="1"}

3. Then delete the running VM, and check OVS again:
[root@172e18e211e9 ~]# ovs-vsctl list queue
_uuid   : 1e4f2c2d-4116-4484-8eb6-5b13d9de649f
dscp: []
external_ids: {id="tapf4ae0545-ca", queue_type="0"}
other_config: {burst="1000", max-rate="1"}

The tap device tapf4ae0545-ca has been removed from OVS, but the OVS
queue is still left behind!
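The fix this report asks for amounts to garbage-collecting queues whose tap device no longer exists. A sketch of that cleanup logic (plain dicts standing in for `ovs-vsctl list queue` / interface output; the UUIDs and the live port name are illustrative):

```python
# interfaces still attached to the bridge
existing_ports = {"tap12345678-ab"}

# queues keyed by _uuid, with their external_ids, as ovs-vsctl reports them
queues = {
    "1e4f2c2d-4116-4484-8eb6-5b13d9de649f": {"id": "tapf4ae0545-ca"},  # leftover
    "9f00aaaa-0000-4444-8888-5b13d9de0000": {"id": "tap12345678-ab"},  # live
}

stale = [uuid for uuid, ext_ids in queues.items()
         if ext_ids["id"] not in existing_ports]
for uuid in stale:
    del queues[uuid]  # a real cleanup would run: ovs-vsctl destroy queue <uuid>

print(len(queues))  # -> 1: only the queue for the live port remains
```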

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

** Summary changed:

- [qos]configure running vms ingress-bandwidth-limit, then delete running vm, but ovs queue is still residual
+ [qos]set running vms ingress-bandwidth-limit, then delete running vm, but ovs queue is still residual

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1726732

Title:
  [qos]set running vms ingress-bandwidth-limit, then delete running
  vm,but ovs queue is still residual

Status in neutron:
  New

Bug description:
  1. Create a VM:
  [root@172e18e211e96 ~]# nova interface-list mzx4
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+
  | ACTIVE     | f4ae0545-ca86-40c5-b2f7-064e521f13db | 1464fb8c-3879-4e2a-9833-3aa0882285d5 | 5.5.5.11     | fa:16:3e:57:cb:ba |
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+

  2. Set the policy on the port:
  [root@172e18e211e96 ~]# openstack port set --qos-policy bw-limiter f4ae0545-ca86-40c5-b2f7-064e521f13db

  and check OVS with:
  [root@172e18e211e9 ~]# ovs-vsctl list queue
  _uuid   : 1e4f2c2d-4116-4484-8eb6-5b13d9de649f
  dscp: []
  external_ids: {id="tapf4ae0545-ca", queue_type="0"}
  other_config: {burst="1000", max-rate="1"}

  3. Then delete the running VM, and check OVS again:
  [root@172e18e211e9 ~]# ovs-vsctl list queue
  _uuid   : 1e4f2c2d-4116-4484-8eb6-5b13d9de649f
  dscp: []
  external_ids: {id="tapf4ae0545-ca", queue_type="0"}
  other_config: {burst="1000", max-rate="1"}

  The tap device tapf4ae0545-ca has been removed from OVS, but the OVS
  queue is still left behind!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1726732/+subscriptions



[Yahoo-eng-team] [Bug 1720088] Re: delete vms, but ovs flow table is still residual

2017-09-29 Thread Zachary Ma
Fix proposed to branch: stable/pike
Review: https://review.openstack.org/505356

** Changed in: nova
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1720088

Title:
  delete vms, but ovs flow table is still residual

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the Pike version, if I delete running VMs, the OVS flow entries are
  still left behind.

  For example:

  First, create a VM named pc1:
  [root@bogon ~]# nova list
  +--------------------------------------+------+--------+------------+-------------+--------------+
  | ID                                   | Name | Status | Task State | Power State | Networks     |
  +--------------------------------------+------+--------+------------+-------------+--------------+
  | 2f91523c-6a4f-434a-a228-0d07ca735e6a | pc1  | ACTIVE | -          | Running     | net=5.5.5.13 |
  +--------------------------------------+------+--------+------------+-------------+--------------+

  Second, directly delete the running virtual machine:

  [root@bogon ~]# nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  Then the relevant flow entries are left behind:

  [root@bogon ~]# ovs-ofctl dump-flows br-int | grep 5.5.5.13
   cookie=0x8231b3d9ff6eecde, duration=189.590s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+est-rel-rpl,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(2,1/2)
   cookie=0x8231b3d9ff6eecde, duration=189.589s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+new-est,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(3,1/2)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1720088/+subscriptions



[Yahoo-eng-team] [Bug 1720091] [NEW] delete running vms, but ovs flow table is still residual

2017-09-28 Thread Zachary Ma
Public bug reported:

In the Pike version, if I delete running VMs, the OVS flow entries are
still left behind.

For example:

First, create a VM named pc1:
[root@bogon ~]# nova list
+--------------------------------------+------+--------+------------+-------------+--------------+
| ID                                   | Name | Status | Task State | Power State | Networks     |
+--------------------------------------+------+--------+------------+-------------+--------------+
| 2f91523c-6a4f-434a-a228-0d07ca735e6a | pc1  | ACTIVE | -          | Running     | net=5.5.5.13 |
+--------------------------------------+------+--------+------------+-------------+--------------+

Second, directly delete the running virtual machine:

[root@bogon ~]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

Then the relevant flow entries are left behind:

[root@bogon ~]# ovs-ofctl dump-flows br-int | grep 5.5.5.13
 cookie=0x8231b3d9ff6eecde, duration=189.590s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+est-rel-rpl,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(2,1/2)
 cookie=0x8231b3d9ff6eecde, duration=189.589s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+new-est,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(3,1/2)
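Until the agent cleans these up itself, the leftover entries can be selected by the deleted VM's fixed IP (the manual equivalent would be something like `ovs-ofctl del-flows br-int "table=82,ip,nw_src=5.5.5.13"`). A sketch of that selection logic; the third flow is an illustrative unrelated entry:

```python
flows = [
    "table=82, priority=70,ct_state=+est-rel-rpl,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(2,1/2)",
    "table=82, priority=70,ct_state=+new-est,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(3,1/2)",
    "table=82, priority=70,ct_state=+est-rel-rpl,ip,reg5=0x38 actions=output:56",  # unrelated
]

deleted_ip = "5.5.5.13"
# drop every residual flow that still matches the deleted VM's source IP
flows = [f for f in flows if "nw_src=%s" % deleted_ip not in f]

print(len(flows))  # -> 1: only the flow unrelated to the deleted VM survives
```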

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1720091

Title:
  delete running vms, but ovs flow table is still residual

Status in neutron:
  New

Bug description:
  In the Pike version, if I delete running VMs, the OVS flow entries are
  still left behind.

  For example:

  First, create a VM named pc1:
  [root@bogon ~]# nova list
  +--------------------------------------+------+--------+------------+-------------+--------------+
  | ID                                   | Name | Status | Task State | Power State | Networks     |
  +--------------------------------------+------+--------+------------+-------------+--------------+
  | 2f91523c-6a4f-434a-a228-0d07ca735e6a | pc1  | ACTIVE | -          | Running     | net=5.5.5.13 |
  +--------------------------------------+------+--------+------------+-------------+--------------+

  Second, directly delete the running virtual machine:

  [root@bogon ~]# nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  Then the relevant flow entries are left behind:

  [root@bogon ~]# ovs-ofctl dump-flows br-int | grep 5.5.5.13
   cookie=0x8231b3d9ff6eecde, duration=189.590s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+est-rel-rpl,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(2,1/2)
   cookie=0x8231b3d9ff6eecde, duration=189.589s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+new-est,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(3,1/2)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1720091/+subscriptions



[Yahoo-eng-team] [Bug 1718369] [NEW] DBDeadlock occurs when delete router_gateway

2017-09-20 Thread Zachary Ma
Public bug reported:

I have 3 controllers, each running neutron-server. When I delete a
router_gateway, some of the controllers occasionally crash with
DBDeadlock exceptions:

2017-09-20 10:58:10.606 2114 ERROR neutron_lib.callbacks.manager [req-31af8910-b348-49c4-9082-8647f1ef94ca 0c806b3af06b4025ad180a6d6213d02c 9d120e49c3e2484b827597bdde57f850 - default default] Error during notification for neutron.services.l3_router.l3_router_plugin.L3RouterPlugin._delete_dvr_internal_ports--9223372036852560901 router_gateway, after_delete: DBDeadlock: (pymysql.err.InternalError) (1213, u'Deadlock found when trying to get lock; try restarting transaction') [SQL: u'DELETE FROM portsecuritybindings WHERE portsecuritybindings.port_id = %(port_id)s'] [parameters: {'port_id': u'f7dea1fa-5436-4a43-b0ec-0bef99371375'}]
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py", line 171, in _notify_loop
    callback(resource, event, trigger, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py", line 272, in _delete_dvr_internal_ports
    context.elevated(), None, network_id)
  File "/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py", line 288, in delete_floatingip_agent_gateway_port
    self._core_plugin.ipam.delete_port(context, p['id'])
  File "/usr/lib/python2.7/site-packages/neutron/db/ipam_pluggable_backend.py", line 429, in delete_port
    port['fixed_ips'])
  File "/usr/lib/python2.7/site-packages/neutron/db/ipam_pluggable_backend.py", line 95, in _ipam_deallocate_ips
    "external system for %s", addresses)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/neutron/db/ipam_pluggable_backend.py", line 71, in _ipam_deallocate_ips
    ipam_subnet = ipam_driver.get_subnet(ip['subnet_id'])
  File "/usr/lib/python2.7/site-packages/neutron/ipam/drivers/neutrondb_ipam/driver.py", line 267, in get_subnet
    return NeutronDbSubnet.load(subnet_id, self._context)
  File "/usr/lib/python2.7/site-packages/neutron/ipam/drivers/neutrondb_ipam/driver.py", line 91, in load
    ctx, neutron_subnet_id)
  File "/usr/lib/python2.7/site-packages/neutron/ipam/drivers/neutrondb_ipam/db_api.py", line 30, in load_by_neutron_subnet_id
    context, neutron_subnet_id=neutron_subnet_id)
  File "/usr/lib/python2.7/site-packages/neutron/objects/base.py", line 463, in get_objects
    with context.session.begin(subtransactions=True):
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 824, in begin
    self, nested=nested)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 218, in __init__
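MySQL error 1213 is retryable by design ("try restarting transaction"), and oslo.db wraps DB API methods in exactly that kind of retry loop. A minimal sketch of the pattern (the decorator and exception class here are illustrative, not the oslo.db implementation):

```python
import functools

class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock."""

def retry_on_deadlock(max_retries=3):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise  # retry budget exhausted, surface the error
                    # a real implementation sleeps with jittered backoff here
        return wrapper
    return decorator

calls = {"n": 0}

@retry_on_deadlock()
def delete_gateway_port():
    calls["n"] += 1
    if calls["n"] < 3:  # deadlock twice, then succeed
        raise DBDeadlock("(1213, 'Deadlock found when trying to get lock')")
    return "deleted"

print(delete_gateway_port())  # -> deleted (after two deadlock retries)
```

The failure reported here happens in an after_delete notification callback, outside the retried transaction, which is why the deadlock escapes instead of being retried.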