[Yahoo-eng-team] [Bug 2025202] [NEW] Execute neutron-ovn-db-sync-util report TypeError

2023-06-27 Thread ZhouHeng
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     next(self.gen)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/api.py", line 110, in transaction
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     del self._nested_txns_map[cur_thread_id]
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/api.py", line 61, in __exit__
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     self.result = self.commit()
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 64, in commit
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     raise result.ex
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 131, in run
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     txn.results.put(txn.do_commit())
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 92, in do_commit
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     command.run_idl(txn)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/root/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/commands.py", line 216, in run_idl
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     port = self.api.lookup('Logical_Switch_Port', port_id)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py", line 181, in lookup
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     return self._lookup(table, record)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py", line 203, in _lookup
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     uuid_ = uuid.UUID(record)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/usr/lib64/python3.6/uuid.py", line 134, in __init__
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     raise TypeError('one of the hex, bytes, bytes_le, fields, '
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util TypeError: one of the hex, bytes, bytes_le, fields, or int arguments must be given
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util
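The last frame is easy to reproduce in isolation. A minimal sketch (the helper name is illustrative): ovsdbapp's _lookup() calls uuid.UUID(record), and when record — the port_id — is None, the constructor raises exactly the TypeError in the traceback.

```python
import uuid

def lookup_error(record):
    """Mimic ovsdbapp's _lookup() call to uuid.UUID(record) and return
    the TypeError message it raises when record is None (i.e. when the
    chained AddLSwitchPortCommand never set its result)."""
    try:
        uuid.UUID(record)
        return None
    except TypeError as exc:
        return str(exc)
```

With a valid UUID string, lookup_error returns None; with None, it returns the "one of the hex, bytes, bytes_le, fields, or int arguments must be given" message seen above.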

** Affects: neutron
 Importance: Undecided
 Assignee: ZhouHeng (zhouhenglc)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZhouHeng (zhouhenglc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025202

Title:
  Execute neutron-ovn-db-sync-util  report TypeError

Status in neutron:
  New

Bug description:
  A TypeError was thrown while executing the synchronization command
(neutron-ovn-db-sync-util). The call stack shows the error occurred while
creating QoS. After analysis: a port exists in the Neutron database but not in
the ovn-nb database. At that point, when UpdateLSwitchQosOptionsCommand runs to
create the logical port and update the QoS, the port_id turns out to be None.
Tracing the variable, port_id comes from the result of the preceding
AddLSwitchPortCommand, so that command must not have set its result correctly.
Analyzing AddLSwitchPortCommand, it was found that if the port already exists,
no result is set.
  This seems a bit contradictory: earlier it was determined that the port does
not exist, but later it does. This situation can probably occur when the
synchronization command runs concurrently with an API call that creates the
port; that combination is not very reasonable.

  But I think the AddLSwitchPortCommand command should return consistent
  results either way. This issue should be fixed.
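  The consistency fix suggested above can be sketched roughly as follows (class and attribute names are simplified stand-ins, not neutron's actual command implementation): the command should record its result even on the may-exist path, so downstream commands chaining on it never see None.

```python
class FakeAPI:
    """Stand-in for the OVN NB API: tracks existing logical switch ports."""
    def __init__(self, existing):
        self.ports = set(existing)

class AddLSwitchPortCommand:
    """Simplified model of the command: run_idl() must set self.result
    on every path.  The reported bug is the may-exist path returning
    without setting it, leaving result as None."""
    def __init__(self, api, port, may_exist=True):
        self.api, self.port, self.may_exist = api, port, may_exist
        self.result = None

    def run_idl(self):
        if self.port in self.api.ports and self.may_exist:
            # Fixed behaviour: record the port even though it already
            # exists, instead of returning with result still None.
            self.result = self.port
            return
        self.api.ports.add(self.port)
        self.result = self.port

# Demo: chaining on an already-existing port no longer yields None.
existing_api = FakeAPI({"p1"})
existing_cmd = AddLSwitchPortCommand(existing_api, "p1")
existing_cmd.run_idl()
new_cmd = AddLSwitchPortCommand(existing_api, "p2")
new_cmd.run_idl()
```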

  
  ERROR Message:

  2023-06-26 11:06:24.385 345 WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync [None req-01d7864c-e3a6-409a-a852-2f6ea869fdae - - - - -] Port found in Neutron but not in OVN DB, port_id=ae5a8d95-e59f-465a-833d-28b3d0fabb2d
  2023-06-26 11:06:24.440 345 ERROR ovsdbapp.backend.ovs_idl.transaction [None req-01d7864c-e3a6-409a-a852-2f6ea869fdae - - - - -] Traceback (most recent call last):
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 131, in run
      txn.results.put(txn.do_commit())
    File "/var/lib/kol

[Yahoo-eng-team] [Bug 2023130] [NEW] OVN DB sync acls Timeout

2023-06-06 Thread ZhouHeng
Public bug reported:

There are over 20 security group rules in the neutron database. When the
neutron server starts synchronization, it runs for a long period of time
and then fails with an ovsdb.exceptions.TimeoutException.
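If the sync genuinely needs more time for a large ACL set, one possible mitigation is raising neutron's OVSDB command timeout. This is a sketch only — the option below belongs to neutron's ML2/OVN [ovn] section and its name and default should be verified against the deployed release:

```ini
[ovn]
# Timeout in seconds for OVSDB commands (default 180).  Raising it
# gives a large ACL synchronization more time before ovsdbapp raises
# TimeoutException.
ovsdb_connection_timeout = 600
```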

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2023130

Title:
  OVN DB sync acls Timeout

Status in neutron:
  New

Bug description:
  There are over 20 security group rules in the neutron database. When
  the neutron server starts synchronization, it runs for a long period
  of time and then fails with an ovsdb.exceptions.TimeoutException.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2023130/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2021457] [NEW] [fwaas]The firewall group without any port in active status

2023-05-29 Thread ZhouHeng
Public bug reported:

After removing all the ports associated with the firewall group, the
firewall group sometimes does not change to the INACTIVE state.
https://e569d88c3a178e9f3b04-ff932221a90eca4b99ad37ac55c7d61c.ssl.cf1.rackcdn.com/884333/6/check/neutron-tempest-plugin-fwaas/8b67ccf/testr_results.html

** Affects: neutron
 Importance: Undecided
 Assignee: ZhouHeng (zhouhenglc)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZhouHeng (zhouhenglc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2021457

Title:
  [fwaas]The firewall group without any port in active status

Status in neutron:
  New

Bug description:
  After removing all the ports associated with the firewall group, the
firewall group sometimes does not change to the INACTIVE state.
  
https://e569d88c3a178e9f3b04-ff932221a90eca4b99ad37ac55c7d61c.ssl.cf1.rackcdn.com/884333/6/check/neutron-tempest-plugin-fwaas/8b67ccf/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2021457/+subscriptions




[Yahoo-eng-team] [Bug 2016960] [NEW] neutron fwaas support l2 firewall for ovn driver

2023-04-19 Thread ZhouHeng
Public bug reported:

ML2/OVN is the default driver for neutron, and its security groups only
support a whitelist. FWaaS supports both blacklists and whitelists, but only
with ML2/OVS. The OVN ACL supports reject rules, so neutron FWaaS could
support an L2 firewall for the OVN driver.
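As an illustration of the OVN capability mentioned above (the switch, port and match values here are hypothetical; acl-add is standard ovn-nbctl syntax): an OVN ACL can use the "reject" action, which is what an FWaaS blacklist rule would map to.

```
# Add a to-lport ACL with the "reject" action on logical switch sw0
# (OVN ACL actions include allow, allow-related, drop and reject).
ovn-nbctl acl-add sw0 to-lport 1001 \
    'outport == "lsp1" && ip4 && tcp.dst == 80' reject

# Inspect the resulting ACLs
ovn-nbctl acl-list sw0
```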

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2016960

Title:
  neutron fwaas support l2 firewall for ovn driver

Status in neutron:
  New

Bug description:
  ML2/OVN is the default driver for neutron, and its security groups only
support a whitelist. FWaaS supports both blacklists and whitelists, but only
with ML2/OVS. The OVN ACL supports reject rules, so neutron FWaaS could
support an L2 firewall for the OVN driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2016960/+subscriptions




[Yahoo-eng-team] [Bug 2009705] Re: [FWaaS ]Openstack Zed - firewall group status doesn't change to ACTIVE.

2023-03-26 Thread ZhouHeng
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2009705

Title:
  [FWaaS ]Openstack Zed - firewall group status doesn't change to
  ACTIVE.

Status in neutron:
  Invalid

Bug description:
  Firewall group status doesn't change to ACTIVE. The same behavior
  occurs with the default firewall group.

  $ openstack firewall group show 3e25ff35-65fc-4438-8684-806904186b8e
  +---+--+
  | Field | Value|
  +---+--+
  | Description   |  |
  | Egress Policy ID  | c17c818a-d6aa-4100-89f5-76e2d6cbb790 |
  | ID| 3e25ff35-65fc-4438-8684-806904186b8e |
  | Ingress Policy ID | 17d9d11c-ad69-4773-b853-db686da86994 |
  | Name  |  |
  | Ports | ['f890e2c4-019e-494d-bd77-04fcdd683b4c'] |
  | Project   | 1b0ab3547b42494096ac06400d65671a |
  | Shared| False|
  | State | UP   |
  | Status| INACTIVE |
  | project_id| 1b0ab3547b42494096ac06400d65671a |
  +---+--+

  $ openstack firewall group policy show c17c818a-d6aa-4100-89f5-76e2d6cbb790
  ++--+
  | Field  | Value|
  ++--+
  | Audited| False|
  | Description|  |
  | Firewall Rules | ['0cffb2ac-ab27-4b05-a853-b7f3f9472b3e'] |
  | ID | c17c818a-d6aa-4100-89f5-76e2d6cbb790 |
  | Name   | block80  |
  | Project| 1b0ab3547b42494096ac06400d65671a |
  | Shared | False|
  | project_id | 1b0ab3547b42494096ac06400d65671a |
  ++--+

  
  $ openstack firewall group policy show 17d9d11c-ad69-4773-b853-db686da86994
  ++--+
  | Field  | Value|
  ++--+
  | Audited| False|
  | Description|  |
  | Firewall Rules | ['c9c0c1b6-2400-41e2-9c29-b3c1212f2470'] |
  | ID | 17d9d11c-ad69-4773-b853-db686da86994 |
  | Name   | allowAll |
  | Project| 1b0ab3547b42494096ac06400d65671a |
  | Shared | False|
  | project_id | 1b0ab3547b42494096ac06400d65671a |
  ++--+

  
  $ openstack firewall group rule show 0cffb2ac-ab27-4b05-a853-b7f3f9472b3e
  ++--+
  | Field  | Value|
  ++--+
  | Action | deny |
  | Description|  |
  | Destination IP Address | 192.168.2.0/24   |
  | Destination Port   | 80   |
  | Enabled| True |
  | ID | 0cffb2ac-ab27-4b05-a853-b7f3f9472b3e |
  | IP Version | 4|
  | Name   |  |
  | Project| 1b0ab3547b42494096ac06400d65671a |
  | Protocol   | tcp  |
  | Shared | False|
  | Source IP Address  | None |
  | Source Port| None |
  | firewall_policy_id | ['c17c818a-d6aa-4100-89f5-76e2d6cbb790'] |
  | project_id | 1b0ab3547b42494096ac06400d65671a |
  ++--+

  
  $ openstack firewall group rule show c9c0c1b6-2400-41e2-9c29-b3c1212f2470
  ++--+
  | Field  | Value|
  ++--+
  | Action | 

[Yahoo-eng-team] [Bug 2008858] [NEW] Call the api and do not return for a long time

2023-02-28 Thread ZhouHeng
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 948, in execute
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     return meth(self, multiparams, params)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     return connection._execute_clauseelement(self, multiparams, params)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     compiled_sql, distilled_params
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     context)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     util.raise_from_cause(newraise, exc_info)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     reraise(type(exception), exception, tb=exc_tb, cause=cause)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 248, in reraise
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     raise value.with_traceback(tb)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     context)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     cursor.execute(statement, parameters)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     result = self._query(query)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     conn.query(q)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 516, in query
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 727, in _read_query_result
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     result.read()
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 1066, in read
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     first_packet = self.connection._read_packet()
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 656, in _read_packet
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     packet_header = self._read_bytes(4)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines   File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 702, in _read_bytes
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines     CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: 'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8)
2023-02-18 03:40:01.845 14 ERROR oslo_db.sqlalchemy.engines

** Affects: neutron
 Importance: Undecided
 Assignee: ZhouHeng (zhouhenglc)
 Status: In Progress

-- 
You received this bug notifi

[Yahoo-eng-team] [Bug 2007327] [NEW] Don't send rpc messages about security groups

2023-02-14 Thread ZhouHeng
Public bug reported:

When we use the ovn driver, security groups are implemented by OVN ACLs,
so there is no need to send RPC messages about them.

If we do not send RPC messages, port create and port update operations
will not be affected by the state of the message queue.
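A rough sketch of the idea (class, attribute and method names here are hypothetical, not neutron's actual RPC wiring): gate the security group fanout on whether any loaded mechanism driver actually needs agent-side SG enforcement.

```python
class _StubDriver:
    """Stand-in for an ML2 mechanism driver."""
    def __init__(self, requires_sg_rpc):
        self.requires_sg_rpc = requires_sg_rpc

class _StubRpcClient:
    """Records fanout casts instead of touching a real message queue."""
    def __init__(self):
        self.calls = []
    def fanout_cast(self, topic, payload):
        self.calls.append((topic, payload))

class SecurityGroupNotifier:
    """Only push SG updates over RPC when some registered driver relies
    on agent-side enforcement; ML2/OVN does not, since it realizes
    security groups as OVN ACLs."""
    def __init__(self, drivers, rpc_client):
        self.drivers = drivers
        self.rpc_client = rpc_client

    def security_groups_rule_updated(self, sg_ids):
        if not any(d.requires_sg_rpc for d in self.drivers):
            return False  # skip the message queue entirely
        self.rpc_client.fanout_cast("security_groups_rule_updated", sg_ids)
        return True

# Demo: an OVN-only deployment sends nothing; a mixed one still does.
rpc = _StubRpcClient()
ovn_only = SecurityGroupNotifier([_StubDriver(False)], rpc)
mixed = SecurityGroupNotifier([_StubDriver(False), _StubDriver(True)], rpc)
skipped = ovn_only.security_groups_rule_updated(["sg1"])
sent = mixed.security_groups_rule_updated(["sg1"])
```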

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2007327

Title:
  Don't send rpc messages about security groups

Status in neutron:
  New

Bug description:
  When we use the ovn driver, security groups are implemented by OVN ACLs,
so there is no need to send RPC messages about them.

  If we do not send RPC messages, port create and port update operations
  will not be affected by the state of the message queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2007327/+subscriptions




[Yahoo-eng-team] [Bug 1989391] [NEW] Allowed Address Pairs which is netmask doesn't work

2022-09-12 Thread ZhouHeng
Public bug reported:

I have an environment with ovn==21.03.

I set one port's allowed address pairs to 192.168.1.12/24 and found that
only traffic for ip=192.168.1.12 can pass through; other IPs,
e.g. 192.168.1.11, cannot. If I set the allowed address pairs to
192.168.1.0/24, ip=192.168.1.11 traffic can pass through.

I looked at the code of ovn:
https://github.com/ovn-org/ovn/blob/98bac97c656c720780fae9b1e4c700eb13c36c29/northd/ovn-northd.c#L4333

I think we should convert the address pair to its network CIDR before
sending it to ovn.
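The conversion the report suggests can be sketched with the standard library (an illustrative helper, not neutron's actual code): normalize a host-style CIDR like 192.168.1.12/24 to its network address 192.168.1.0/24 before handing it to OVN.

```python
import ipaddress

def normalize_address_pair(cidr):
    """Convert an allowed-address-pair CIDR to its network address.
    OVN matches 192.168.1.12/24 only as the single host 192.168.1.12,
    while 192.168.1.0/24 matches the whole subnet, so the network form
    is what should be sent."""
    return str(ipaddress.ip_network(cidr, strict=False))
```

For example, normalize_address_pair("192.168.1.12/24") yields "192.168.1.0/24", while a /32 host entry is left unchanged.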

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989391

Title:
  Allowed Address Pairs which is netmask doesn't work

Status in neutron:
  In Progress

Bug description:
  I have an environment with ovn==21.03.

  I set one port's allowed address pairs to 192.168.1.12/24 and found
  that only traffic for ip=192.168.1.12 can pass through; other IPs,
  e.g. 192.168.1.11, cannot. If I set the allowed address pairs to
  192.168.1.0/24, ip=192.168.1.11 traffic can pass through.

  I looked at the code of ovn:
  https://github.com/ovn-org/ovn/blob/98bac97c656c720780fae9b1e4c700eb13c36c29/northd/ovn-northd.c#L4333

  I think we should convert the address pair to its network CIDR before
  sending it to ovn.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989391/+subscriptions




[Yahoo-eng-team] [Bug 1983530] [NEW] [ovn]router_interface port probability cannot be up

2022-08-03 Thread ZhouHeng
Public bug reported:

I have an environment with 5 neutron-servers (W version) and 5 ovn (21.03.1).

When adding an interface to the router, the state of the interface is sometimes
down, and it does not become up even after a long time. Restarting the neutron
server makes it become up.

I checked the logical_switch_port with "ovn-nbctl list logical_switch_port";
the status of the lsp is up, and traffic is forwarded normally.

So this should only be a problem with the state neutron saves.

In the past, ports created for VMs have likewise been unable to become up, and
the reason was never found. The phenomenon is the same as with the router
interface.

While adding a routing interface, I wrote a script to listen for
logical_switch_port changes. I found that before the lsp becomes up=[true],
the lsp's up column is [], not up=[false], so LogicalSwitchPortUpdateUpEvent
does not match. I don't know why ovn's notification events are sometimes
different, but I think this should be repaired on the neutron side.
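A hedged sketch of the event-matching fix implied above (function names are illustrative, not neutron's actual event code): OVSDB optional booleans arrive as a list, so treating the unset value [] the same as [false] lets the up-transition match either way.

```python
def lsp_is_up(up_column):
    """OVSDB optional booleans arrive as a list: [] (unset), [False]
    or [True].  Treat unset the same as False so an up transition is
    still detected when OVN reports [] instead of [False] beforehand."""
    return bool(up_column and up_column[0])

def is_up_transition(old_up, new_up):
    # Matches both [] -> [True] and [False] -> [True]
    return not lsp_is_up(old_up) and lsp_is_up(new_up)
```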

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  I have en environment, with 5 neutron-server(W version), 5 ovn(21.03.1)
  
- When adding an interface to the router, the state of the interface is 
sometimes down, 
+ When adding an interface to the router, the state of the interface is 
sometimes down,
  and it will not become up after a long time. Restarting the neutron server 
can become up.
  
- I check logical_switch_port by "ovn-nbctl list logical_switch_port 
",  the status 
+ I check logical_switch_port by "ovn-nbctl list logical_switch_port 
",  the status
  of lsp is up, and the traffic can be forwarded normally.
  
  This should only be a neutron save state problem.
  
- In the past, when creating vm, the ports have been unable to be up, and the 
reason has not 
+ In the past, when creating vm, the ports have been unable to be up, and the 
reason has not
  been found. The phenomenon is the same as that of the router interface.
  
- In the process of adding a routing interface, I wrote a script to listen to 
logical_switch_port 
- change, I found: before lsp becomes up=true, lsp's up=[], which is not 
up=[false], not matchd
- LogicalSwitchPortUpdateUpEvent. I don't know why ovn's notification events 
are sometimes different, 
- but I think it should be repaired here in neutron
+ In the process of adding a routing interface, I wrote a script to listen to 
logical_switch_port
+ change, I found: before lsp becomes up=true, lsp's up=[], which is not 
up=[false], not 
+ matchd LogicalSwitchPortUpdateUpEvent. I don't know why ovn's notification 
events are 
+ sometimes different, but I think it should be repaired here in neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1983530

Title:
  [ovn]router_interface port probability cannot be up

Status in neutron:
  New

Bug description:
  I have an environment with 5 neutron-servers (W version) and 5
  ovn (21.03.1).

  When adding an interface to the router, the state of the interface is
  sometimes down, and it does not become up even after a long time.
  Restarting the neutron server makes it become up.

  I checked the logical_switch_port with "ovn-nbctl list
  logical_switch_port"; the status of the lsp is up, and traffic is
  forwarded normally.

  So this should only be a problem with the state neutron saves.

  In the past, ports created for VMs have likewise been unable to become
  up, and the reason was never found. The phenomenon is the same as with
  the router interface.

  While adding a routing interface, I wrote a script to listen for
  logical_switch_port changes. I found that before the lsp becomes
  up=[true], the lsp's up column is [], not up=[false], so
  LogicalSwitchPortUpdateUpEvent does not match. I don't know why ovn's
  notification events are sometimes different, but I think this should
  be repaired on the neutron side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1983530/+subscriptions




[Yahoo-eng-team] [Bug 1978039] [NEW] [ovn]Floating IP adds distributed attributes

2022-06-08 Thread ZhouHeng
Public bug reported:

By setting the config option ovn.enable_distributed_floating_ip, we can choose
whether floating IPs for the entire cluster are distributed or centralized.
There is no way to set a floating IP individually, so floating IP traffic
cannot be finely controlled.

If the backend is ovn, we can set dnat_and_snat's external_mac to
determine whether a floating IP is distributed or centralized, so each
floating IP can easily be set individually.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978039

Title:
  [ovn]Floating IP adds distributed attributes

Status in neutron:
  New

Bug description:
  By setting the config option ovn.enable_distributed_floating_ip, we can
choose whether floating IPs for the entire cluster are distributed or
centralized. There is no way to set a floating IP individually, so floating IP
traffic cannot be finely controlled.

  If the backend is ovn, we can set dnat_and_snat's external_mac to
  determine whether a floating IP is distributed or centralized, so each
  floating IP can easily be set individually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1978039/+subscriptions




[Yahoo-eng-team] [Bug 1978035] [NEW] remove unused updated_at parameter for AgentCache.update

2022-06-08 Thread ZhouHeng
Public bug reported:

The AgentCache method update[1] has a parameter "updated_at". I didn't find
this parameter passed in anywhere except some unit tests; we currently use
nb_cfg_timestamp[2] as the agent update time, and there are no other scenarios
for this parameter.
Can we remove this parameter?

[1] 
https://opendev.org/openstack/neutron/src/commit/e44dbe98e82fddac72723caa9357daae0f0ab76f/neutron/plugins/ml2/drivers/ovn/agent/neutron_agent.py#L241
[2] https://review.opendev.org/c/openstack/neutron/+/802834

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- AgentCache method update[1] has a parameter "updated_at", I didn't find this 
parameter passed in anywhere except for some unit tests. currently we have used 
nb_cfg_timestamp[2] as agent updated time. 
+ AgentCache method update[1] has a parameter "updated_at", I didn't find this 
parameter passed in anywhere except for some unit tests. currently we have used 
nb_cfg_timestamp[2] as agent updated time. there are no other scenarios for 
this parameter.
  can we remove this parameter?
- 
  
  [1] 
https://opendev.org/openstack/neutron/src/commit/e44dbe98e82fddac72723caa9357daae0f0ab76f/neutron/plugins/ml2/drivers/ovn/agent/neutron_agent.py#L241
  [2] https://review.opendev.org/c/openstack/neutron/+/802834

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978035

Title:
  remove unused updated_at parameter for AgentCache.update

Status in neutron:
  New

Bug description:
  The AgentCache method update[1] has a parameter "updated_at". I didn't find
this parameter passed in anywhere except some unit tests; we currently use
nb_cfg_timestamp[2] as the agent update time, and there are no other scenarios
for this parameter.
  Can we remove this parameter?

  [1] 
https://opendev.org/openstack/neutron/src/commit/e44dbe98e82fddac72723caa9357daae0f0ab76f/neutron/plugins/ml2/drivers/ovn/agent/neutron_agent.py#L241
  [2] https://review.opendev.org/c/openstack/neutron/+/802834

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1978035/+subscriptions




[Yahoo-eng-team] [Bug 1977629] [NEW] [ovn]agent's heartbeat time is error

2022-06-03 Thread ZhouHeng

** Affects: neutron
 Importance: Undecided
 Assignee: ZhouHeng (zhouhenglc)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZhouHeng (zhouhenglc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1977629

Title:
  [ovn]agent's heartbeat time is error

Status in neutron:
  New

Bug description:
  when we show an agent, we can check the agent's alive status and heartbeat
time. But even when the agent's alive is False, its last_heartbeat_at is close
to the current time, and each show refreshes last_heartbeat_at.

  The code[1] shows that the returned heartbeat time is the current
  time; we should return the real update time.

  
  [1] 
https://opendev.org/openstack/neutron/src/commit/d253b9fd08b969ebab09a5624df127a4bf1bfb1f/neutron/plugins/ml2/drivers/ovn/agent/neutron_agent.py#L85
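  The shape of the fix can be sketched as follows (a simplified illustration, not neutron's actual AgentCache code): store the timestamp when a real heartbeat is observed and return that stored value, instead of computing the current time on every show, which makes a dead agent look freshly alive.

```python
import datetime

class AgentHeartbeat:
    """Store the timestamp of the last observed heartbeat and report
    exactly that, rather than datetime.now() at read time."""
    def __init__(self):
        self._updated_at = None

    def touch(self, when=None):
        # Called when a real heartbeat (e.g. an nb_cfg bump) is seen.
        self._updated_at = when or datetime.datetime.utcnow()

    @property
    def last_heartbeat_at(self):
        return self._updated_at  # the stored time, not the current time

# Demo: the reported time stays fixed until the next real heartbeat.
hb = AgentHeartbeat()
hb.touch(datetime.datetime(2022, 6, 4, 1, 59, 27))
```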

  stack@openstack:~$ openstack network agent show d02fc630-4ac8-466f-9795-38e8177cfc55
  +-------------------+---------------------------------------------------------------------------------------------+
  | Field             | Value                                                                                       |
  +-------------------+---------------------------------------------------------------------------------------------+
  | admin_state_up    | UP                                                                                          |
  | agent_type        | OVN Controller Gateway agent                                                                |
  | alive             | XXX                                                                                         |
  | availability_zone |                                                                                             |
  | binary            | ovn-controller                                                                              |
  | configuration     | {'chassis_name': 'd02fc630-4ac8-466f-9795-38e8177cfc55', 'bridge-mappings': 'public:br-ex'} |
  | created_at        | None                                                                                        |
  | description       |                                                                                             |
  | ha_state          | None                                                                                        |
  | host              | openstack                                                                                   |
  | id                | d02fc630-4ac8-466f-9795-38e8177cfc55                                                        |
  | last_heartbeat_at | 2022-06-04 01:59:27.926511                                                                  |
  | location          | Munch({'cloud': '', 'region_name': 'RegionOne', 'zone': '', 'project': Munch({'id': '40e85036922248828c47cededb23036b', 'name': 'admin', 'domain_id': 'default', 'domain_name': None})}) |
  | name              | None                                                                                        |
  | resources_synced  | None                                                                                        |
  | started_at        | N
[Yahoo-eng-team] [Bug 1971646] [NEW] [api]add port_forwarding_id when list floatingip

2022-05-04 Thread ZhouHeng
Public bug reported:

when we enable the "port_forwarding" plugin and list floating IPs, we get a
response like:
{
    "floatingips": [
        {
            "router_id": "0303bf18-2c52-479c-bd68-e0ad712a1639",
            "description": "for test with port forwarding",
            "dns_domain": "my-domain.org.",
            "dns_name": "myfip3",
            "created_at": "2018-06-15T02:12:48Z",
            "updated_at": "2018-06-15T02:12:57Z",
            "revision_number": 1,
            "project_id": "4969c491a3c74ee4af974e6d800c62de",
            "tenant_id": "4969c491a3c74ee4af974e6d800c62de",
            "floating_network_id": "376da547-b977-4cfe-9cba-275c80debf57",
            "fixed_ip_address": null,
            "floating_ip_address": "172.24.4.42",
            "port_id": null,
            "id": "898b198e-49f7-47d6-a7e1-53f626a548e6",
            "status": "ACTIVE",
            "tags": [],
            "port_forwardings": [
                {
                    "protocol": "tcp",
                    "internal_ip_address": "10.0.0.19",
                    "internal_port": 25,
                    "external_port": 2225
                },
                {
                    "protocol": "tcp",
                    "internal_ip_address": "10.0.0.18",
                    "internal_port": 1,
                    "external_port": 8786
                }
            ],
            "qos_network_policy_id": "174dd0c1-a4eb-49d4-a807-ae80246d82f4",
            "qos_policy_id": "29d5e02e-d5ab-4929-bee4-4a9fc12e22ae"
        }
    ]
}

if we list floating IPs and want to operate on a port forwarding, we still need
to call "/v2.0/floatingips/{floatingip_id}/port_forwardings" to get the
port forwarding "id".
this is inefficient.
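The round-trip cost described above can be sketched as follows (get_port_forwardings is a hypothetical stand-in for the per-FIP API call, not a real client method):

```python
def collect_pf_ids(floatingips, get_port_forwardings):
    """Count the extra API round-trips forced by today's response: the
    embedded "port_forwardings" entries carry no "id", so a client must
    call /v2.0/floatingips/{fip_id}/port_forwardings once per floating
    IP just to learn each forwarding's id."""
    pf_ids, extra_calls = {}, 0
    for fip in floatingips:
        if fip.get("port_forwardings"):
            extra_calls += 1  # one extra GET per floating IP
            pf_ids[fip["id"]] = [pf["id"]
                                 for pf in get_port_forwardings(fip["id"])]
    return pf_ids, extra_calls

# Demo with two listed FIPs, only one of which has forwardings.
fips = [{"id": "fip-1", "port_forwardings": [{"external_port": 2225}]},
        {"id": "fip-2", "port_forwardings": []}]
pf_ids, extra_calls = collect_pf_ids(fips, lambda fip_id: [{"id": "pf-1"}])
```

Embedding the id directly in the listing response would make all of these extra calls unnecessary.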

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1971646

Title:
  [api]add port_forwarding_id when list floatingip

Status in neutron:
  New

Bug description:
  when we enable "port_forwarding" plugin and list floating ip, we can get like:
  {
  "floatingips": [
  {
  "router_id": "0303bf18-2c52-479c-bd68-e0ad712a1639",
  "description": "for test with port forwarding",
  "dns_domain": "my-domain.org.",
  "dns_name": "myfip3",
  "created_at": "2018-06-15T02:12:48Z",
  "updated_at": "2018-06-15T02:12:57Z",
  "revision_number": 1,
  "project_id": "4969c491a3c74ee4af974e6d800c62de",
  "tenant_id": "4969c491a3c74ee4af974e6d800c62de",
  "floating_network_id": "376da547-b977-4cfe-9cba-275c80debf57",
  "fixed_ip_address": null,
  "floating_ip_address": "172.24.4.42",
  "port_id": null,
  "id": "898b198e-49f7-47d6-a7e1-53f626a548e6",
  "status": "ACTIVE",
  "tags": [],
  "port_forwardings": [
  {
  "protocol": "tcp",
  "internal_ip_address": "10.0.0.19",
  "internal_port": 25,
  "external_port": 2225
  },
  {
  "protocol": "tcp",
  "internal_ip_address": "10.0.0.18",
  "internal_port": 1,
  "external_port": 8786
  }
  ],
  "qos_network_policy_id": "174dd0c1-a4eb-49d4-a807-ae80246d82f4",
  "qos_policy_id": "29d5e02e-d5ab-4929-bee4-4a9fc12e22ae"
  }
  ]
  }

  If we list floating IPs and want to operate on a port forwarding, we still
need to call "/v2.0/floatingips/{floatingip_id}/port_forwardings" to get the
port forwarding "id".
  This is inefficient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1971646/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1966354] [NEW] [test][unit] The method of creating resources does not support setting project_id

2022-03-24 Thread ZhouHeng
Public bug reported:

Some basic unit test classes in neutron contain helper methods for creating
resources, which are often reused by other unit tests.
eg:
MeteringPluginDbTestCaseMixin.metering_label
MeteringPluginDbTestCaseMixin.metering_label_rule
NeutronDbPluginV2TestCase.network
NeutronDbPluginV2TestCase.subnet
NeutronDbPluginV2TestCase.subnetpool
NeutronDbPluginV2TestCase.port
AddressScopeTestCase.address_scope
L3NatExtensionTestCase.router

These methods only recognize the "tenant_id" parameter; if "project_id" is
passed in, they either raise an error or silently ignore it. When writing a
new unit test we need to set "project_id", so having to fall back to the
deprecated "tenant_id" is not appropriate.
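A minimal sketch of the fix the report asks for: the resource-creation helpers could accept "project_id" and treat it as an alias for the legacy "tenant_id". The helper name is hypothetical, not the actual neutron test API.

```python
# Hypothetical helper: accept either project_id or tenant_id in test
# resource-creation methods and keep the two keys consistent.

def normalize_project_kwargs(**kwargs):
    """Alias project_id to the legacy tenant_id key (and vice versa)."""
    project_id = kwargs.pop("project_id", None) or kwargs.get("tenant_id")
    if project_id:
        kwargs["tenant_id"] = project_id   # legacy key still used internally
        kwargs["project_id"] = project_id  # modern key for new tests
    return kwargs

args = normalize_project_kwargs(project_id="4969c491a3c74ee4af974e6d800c62de")
```

With this pattern, new tests can pass "project_id" while the underlying plumbing that still expects "tenant_id" keeps working.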

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1966354

Title:
  [test][unit] The method of creating resources does not support
  setting project_id

Status in neutron:
  In Progress

Bug description:
  Some basic unit test classes in neutron contain helper methods for creating
resources, which are often reused by other unit tests.
  eg:
  MeteringPluginDbTestCaseMixin.metering_label
  MeteringPluginDbTestCaseMixin.metering_label_rule
  NeutronDbPluginV2TestCase.network
  NeutronDbPluginV2TestCase.subnet
  NeutronDbPluginV2TestCase.subnetpool
  NeutronDbPluginV2TestCase.port
  AddressScopeTestCase.address_scope
  L3NatExtensionTestCase.router

  These methods only recognize the "tenant_id" parameter; if "project_id" is
passed in, they either raise an error or silently ignore it. When writing a
new unit test we need to set "project_id", so having to fall back to the
deprecated "tenant_id" is not appropriate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1966354/+subscriptions




[Yahoo-eng-team] [Bug 1958593] [NEW] [ovn]neutron-server allow more efficient reconnections when connecting to clustered OVSDB servers.

2022-01-20 Thread ZhouHeng
Public bug reported:

When our cluster has a lot of data, once the NB/SB needs to be
reconnected (the NB/SB is disconnected or the leader switches), neutron
needs to resynchronize and process all the data from the NB/SB, which
occupies resources. Moreover, creating a virtual machine during
reconnection may cause a timeout waiting for network-vif-plugged.

python-ovs supports monitor_cond_since / update3, which allows more
efficient reconnections when connecting to clustered OVSDB servers [1].

neutron-server only uses python-ovs; it should not have to care whether a
fast or a full synchronization happens. Simply updating python-ovs should
be enough to get fast synchronization.

But neutron-server monitors database schema changes [2]: each
reconnection triggers a database-created event, calls update_tables, and
clears the cache. This means fast synchronization cannot be used.

Can we stop supporting schema upgrades or downgrades (currently this only
applies to table 'Chassis_Private') while neutron-server is running?


[1] 
https://github.com/openvswitch/ovs/commit/46d44cf3be0dbf4a44cebea3b279b3d16a326796
[2] https://review.opendev.org/c/openstack/neutron/+/760967
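The interaction described above can be illustrated with a toy model (this is a conceptual sketch, not the real python-ovs or ovsdbapp API): a client that keeps its last transaction cursor can ask the server only for changes since that point, while clearing the cache on every reconnect, as neutron-server effectively does today, forces a full download each time.

```python
# Toy model of monitor_cond_since-style fast resync. All names here are
# illustrative; the real protocol lives in python-ovs.

class Server:
    def __init__(self):
        self.rows, self.log, self.txn_id = {}, [], 0

    def write(self, key, value):
        self.txn_id += 1
        self.rows[key] = value
        self.log.append((self.txn_id, key, value))

    def full_dump(self):
        return dict(self.rows)

    def changes_since(self, txn_id):
        # Only the rows changed after the client's cursor.
        return {k: v for t, k, v in self.log if t > txn_id}

class IdlClient:
    def __init__(self):
        self.cache = {}
        self.last_txn_id = None   # analogue of the monitor_cond_since cursor

    def reconnect(self, server, reload_schema=False):
        if reload_schema:
            # What the schema-change handler effectively does: the cache
            # is cleared, so the cursor is useless and a full sync follows.
            self.cache.clear()
            self.last_txn_id = None
        if self.last_txn_id is None:
            rows = server.full_dump()                      # full sync
        else:
            rows = server.changes_since(self.last_txn_id)  # fast sync
        self.cache.update(rows)
        self.last_txn_id = server.txn_id

server = Server()
server.write("a", 1)
client = IdlClient()
client.reconnect(server)        # initial full sync
server.write("b", 2)
client.reconnect(server)        # fast sync: only "b" is transferred
```

The point of the bug report is that reload_schema=True is effectively forced on every reconnect, so the cheap path is never taken.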

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958593

Title:
  [ovn]neutron-server allow more efficient reconnections when connecting
  to clustered OVSDB servers.

Status in neutron:
  New

Bug description:
  When our cluster has a lot of data, once the NB/SB needs to be
  reconnected (the NB/SB is disconnected or the leader switches),
  neutron needs to resynchronize and process all the data from the
  NB/SB, which occupies resources. Moreover, creating a virtual machine
  during reconnection may cause a timeout waiting for
  network-vif-plugged.

  python-ovs supports monitor_cond_since / update3, which allows more
  efficient reconnections when connecting to clustered OVSDB servers [1].

  neutron-server only uses python-ovs; it should not have to care whether
  a fast or a full synchronization happens. Simply updating python-ovs
  should be enough to get fast synchronization.

  But neutron-server monitors database schema changes [2]: each
  reconnection triggers a database-created event, calls update_tables,
  and clears the cache. This means fast synchronization cannot be used.

  Can we stop supporting schema upgrades or downgrades (currently this
  only applies to table 'Chassis_Private') while neutron-server is running?

  
  [1] 
https://github.com/openvswitch/ovs/commit/46d44cf3be0dbf4a44cebea3b279b3d16a326796
  [2] https://review.opendev.org/c/openstack/neutron/+/760967

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958593/+subscriptions




[Yahoo-eng-team] [Bug 1958513] [NEW] [test]The unit test file ovn_l3.test_plugin cannot be run alone

2022-01-20 Thread ZhouHeng
Public bug reported:

python3 -m testtools.run 
neutron.tests.unit.services.ovn_l3.test_plugin.TestOVNL3RouterPlugin.test_schedule_unhosted_gateways
/usr/lib64/python3.6/runpy.py:125: RuntimeWarning: 'testtools.run' found in 
sys.modules after import of package 'testtools', but prior to execution of 
'testtools.run'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
Tests running...
==
ERROR: 
neutron.tests.unit.services.ovn_l3.test_plugin.TestOVNL3RouterPlugin.test_schedule_unhosted_gateways
--
Traceback (most recent call last):
  File "/root/neutron/neutron/tests/unit/services/ovn_l3/test_plugin.py", line 
61, in setUp
cfg.CONF.set_override('max_header_size', 38, group='ml2_type_geneve')
  File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 2077, 
in __inner
result = f(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 2460, 
in set_override
opt_info = self._get_opt_info(name, group)
  File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 2869, 
in _get_opt_info
group = self._get_group(group)
  File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 2838, 
in _get_group
raise NoSuchGroupError(group_name)
oslo_config.cfg.NoSuchGroupError: no such group [ml2_type_geneve]

Ran 1 test in 0.074s
FAILED (failures=1)

Fix: the test module should add "from neutron.plugins.ml2.drivers import
type_geneve" so that the [ml2_type_geneve] option group is registered.
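Why the missing import matters can be sketched with a toy registry (this is not the oslo.config API, just an illustration of the import-time registration pattern): option groups like [ml2_type_geneve] only exist after the module that registers them has been imported, so a test file run in isolation must trigger that import itself.

```python
# Toy model of oslo.config-style group registration. Names below are
# illustrative, not the real oslo.config interface.

REGISTERED_GROUPS = set()

def register_opts(group):
    """What a type driver module does at import time."""
    REGISTERED_GROUPS.add(group)

def set_override(name, value, group):
    """Fails, like cfg.CONF.set_override, if the group was never registered."""
    if group not in REGISTERED_GROUPS:
        raise KeyError("no such group [%s]" % group)
    return (name, value, group)

# Running the test file alone: nothing has imported type_geneve yet.
try:
    set_override("max_header_size", 38, group="ml2_type_geneve")
except KeyError as exc:
    failure = str(exc)

register_opts("ml2_type_geneve")   # the effect of the suggested import
ok = set_override("max_header_size", 38, group="ml2_type_geneve")
```

When the whole suite runs, some other module happens to import the driver first, which is why the failure only shows up when the file is run alone.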

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958513

Title:
  [test]The unit test file ovn_l3.test_plugin cannot be run alone

Status in neutron:
  In Progress

Bug description:
  python3 -m testtools.run 
neutron.tests.unit.services.ovn_l3.test_plugin.TestOVNL3RouterPlugin.test_schedule_unhosted_gateways
  /usr/lib64/python3.6/runpy.py:125: RuntimeWarning: 'testtools.run' found in 
sys.modules after import of package 'testtools', but prior to execution of 
'testtools.run'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
  Tests running...
  ==
  ERROR: 
neutron.tests.unit.services.ovn_l3.test_plugin.TestOVNL3RouterPlugin.test_schedule_unhosted_gateways
  --
  Traceback (most recent call last):
File "/root/neutron/neutron/tests/unit/services/ovn_l3/test_plugin.py", 
line 61, in setUp
  cfg.CONF.set_override('max_header_size', 38, group='ml2_type_geneve')
File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 
2077, in __inner
  result = f(self, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 
2460, in set_override
  opt_info = self._get_opt_info(name, group)
File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 
2869, in _get_opt_info
  group = self._get_group(group)
File "/usr/local/lib/python3.6/site-packages/oslo_config/cfg.py", line 
2838, in _get_group
  raise NoSuchGroupError(group_name)
  oslo_config.cfg.NoSuchGroupError: no such group [ml2_type_geneve]

  Ran 1 test in 0.074s
  FAILED (failures=1)

  Fix: the test module should add "from neutron.plugins.ml2.drivers import
type_geneve" so that the [ml2_type_geneve] option group is registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958513/+subscriptions




[Yahoo-eng-team] [Bug 1958501] [NEW] [ovn]Refusing to bind port to dead agent

2022-01-20 Thread ZhouHeng
Public bug reported:

When the ML2 driver is configured as ovs, binding a port is refused if
the agent status is not alive.

However, when ovn is used as the ML2 driver, the port binds normally and
becomes active, so it is easy to create a virtual machine that then times
out waiting for network-vif-plugged.

We should refuse to bind ports to a dead agent.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958501

Title:
  [ovn]Refusing to bind port to dead agent

Status in neutron:
  New

Bug description:
  When the ML2 driver is configured as ovs, binding a port is refused if
  the agent status is not alive.

  However, when ovn is used as the ML2 driver, the port binds normally
  and becomes active, so it is easy to create a virtual machine that then
  times out waiting for network-vif-plugged.

  We should refuse to bind ports to a dead agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958501/+subscriptions




[Yahoo-eng-team] [Bug 1958364] [NEW] [ovn]Set NB/SB connection inactivity_probe does not work for cluster

2022-01-19 Thread ZhouHeng
Public bug reported:

If OVN is a single node, we set the config like:
[ovn]
ovn_nb_connection = tcp:100.2.223.2:6641
ovn_sb_connection = tcp:100.2.223.2:6642
The NB/SB connection inactivity_probe can be set correctly.

When OVN is a cluster deployment, the config is like:

[ovn]
ovn_nb_connection = 
tcp:100.2.223.2:6641,tcp:100.2.223.12:6641,tcp:100.2.223.30:6641
ovn_sb_connection = 
tcp:100.2.223.2:6642,tcp:100.2.223.12:6642,tcp:100.2.223.30:6642

Setting the NB/SB connection inactivity_probe does not work correctly.
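Part of what the clustered case requires can be sketched as follows (an illustrative snippet, not neutron code, and the function name is hypothetical): the ovn_nb_connection / ovn_sb_connection options hold a comma-separated list, and the inactivity_probe has to be applied per endpoint rather than to the list as a whole.

```python
# Hypothetical helper: split a clustered OVN connection option into the
# individual targets that each need their own inactivity_probe setting.

def split_connections(conn_opt):
    """Split 'tcp:ip1:port,tcp:ip2:port,...' into individual targets."""
    return [target.strip() for target in conn_opt.split(",") if target.strip()]

targets = split_connections(
    "tcp:100.2.223.2:6641,tcp:100.2.223.12:6641,tcp:100.2.223.30:6641")
# Each target would get its own probe configuration, instead of treating
# the whole comma-separated string as a single connection.
```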

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958364

Title:
  [ovn]Set NB/SB connection inactivity_probe does not work for cluster

Status in neutron:
  In Progress

Bug description:
  If OVN is a single node, we set the config like:
  [ovn]
  ovn_nb_connection = tcp:100.2.223.2:6641
  ovn_sb_connection = tcp:100.2.223.2:6642
  The NB/SB connection inactivity_probe can be set correctly.

  When OVN is a cluster deployment, the config is like:

  [ovn]
  ovn_nb_connection = 
tcp:100.2.223.2:6641,tcp:100.2.223.12:6641,tcp:100.2.223.30:6641
  ovn_sb_connection = 
tcp:100.2.223.2:6642,tcp:100.2.223.12:6642,tcp:100.2.223.30:6642

  Setting the NB/SB connection inactivity_probe does not work correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958364/+subscriptions




[Yahoo-eng-team] [Bug 1958353] [NEW] [ovn]Gateway port is down after gateway chassis changed

2022-01-19 Thread ZhouHeng
Public bug reported:

ml2 driver: ovn
gateway chassis: nodeA, nodeB, nodeC

Step 1: create a router and set its gateway; the gateway port status is
active and the port is bound to nodeA.
Step 2: bring down nodeA; the gateway port status changes to down and the
port is bound to nodeB.
Step 3: bring down nodeB; the gateway port status changes to active and
the port is bound to nodeC.

The gateway port status should always be active!

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958353

Title:
  [ovn]Gateway port is down after gateway chassis changed

Status in neutron:
  In Progress

Bug description:
  ml2 driver: ovn
  gateway chassis: nodeA, nodeB, nodeC

  Step 1: create a router and set its gateway; the gateway port status is
  active and the port is bound to nodeA.
  Step 2: bring down nodeA; the gateway port status changes to down and
  the port is bound to nodeB.
  Step 3: bring down nodeB; the gateway port status changes to active and
  the port is bound to nodeC.

  The gateway port status should always be active!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958353/+subscriptions




[Yahoo-eng-team] [Bug 1958225] [NEW] [ovn]chassis available zone changed, not reschedule router gateway_chassis

2022-01-18 Thread ZhouHeng
Public bug reported:

If neutron uses ovn-router as the router plugin, rescheduling is performed
when the physical network or gateway attributes of a chassis change, to
ensure that the gateway chassis bound to a router still meet the
requirements.
But a change to a chassis's availability zone does not trigger rescheduling.

** Affects: neutron
 Importance: Medium
 Assignee: ZhouHeng (zhouhenglc)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958225

Title:
  [ovn]chassis available zone changed,  not reschedule router
  gateway_chassis

Status in neutron:
  In Progress

Bug description:
  If neutron uses ovn-router as the router plugin, rescheduling is
  performed when the physical network or gateway attributes of a chassis
  change, to ensure that the gateway chassis bound to a router still meet
  the requirements.
  But a change to a chassis's availability zone does not trigger
  rescheduling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958225/+subscriptions




[Yahoo-eng-team] [Bug 1938478] [NEW] [ovn]agents alive status error after restarting neutron server

2021-07-29 Thread ZhouHeng
Public bug reported:

I have 3 ovn-controller nodes, all running normally.
Running 'openstack network agent list' shows 3 agents alive.

Then simulate a node failure by stopping 1 ovn-controller. A minute
later, listing agents shows one agent down. This seems normal.

Restart neutron-server at this point and list agents: all 3 agents show
as alive. This seems to be a problem.
A minute later, listing agents shows one agent down again, which is
correct once more.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938478

Title:
  [ovn]agents alive status error after restarting neutron server

Status in neutron:
  New

Bug description:
  I have 3 ovn-controller nodes, all running normally.
  Running 'openstack network agent list' shows 3 agents alive.

  Then simulate a node failure by stopping 1 ovn-controller. A minute
  later, listing agents shows one agent down. This seems normal.

  Restart neutron-server at this point and list agents: all 3 agents
  show as alive. This seems to be a problem.
  A minute later, listing agents shows one agent down again, which is
  correct once more.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938478/+subscriptions




[Yahoo-eng-team] [Bug 1938261] [NEW] [ovn]Router scheduler failing for config "default_availability_zones"

2021-07-28 Thread ZhouHeng
Public bug reported:

I have 3 gateway chassis whose availability zones are all nova, with 1
chassis each.
default_availability_zones = zone1 is configured in neutron.conf.

I create a router without setting availability_zone_hints. The router is
created successfully with availability_zones=zone1, and through the
ovn-nbctl command I can see that the router's gateway_chassis includes
all chassis (4 nodes).

I think this case should fail, indicating that the availability zone
does not exist.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938261

Title:
  [ovn]Router scheduler failing for config  "default_availability_zones"

Status in neutron:
  New

Bug description:
  I have 3 gateway chassis whose availability zones are all nova, with 1
  chassis each.
  default_availability_zones = zone1 is configured in neutron.conf.

  I create a router without setting availability_zone_hints. The router
  is created successfully with availability_zones=zone1, and through the
  ovn-nbctl command I can see that the router's gateway_chassis includes
  all chassis (4 nodes).

  I think this case should fail, indicating that the availability zone
  does not exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938261/+subscriptions




[Yahoo-eng-team] [Bug 1935959] [NEW] [OVN]Unable to access port forwarding

2021-07-13 Thread ZhouHeng
Public bug reported:

ovn version is 21.03

NetA(geneve) 192.168.10.0/24
external NetB(flat) 100.7.50.0/24

VM-A ip is 192.168.10.10 on host HostA
VM-B ip is 192.168.10.20 on host HostA

RouterA gateway network is NetB, NetA is internal interface.

Allocate a floating IP 100.7.50.236 and configure port forwarding
100.7.50.236:22 -> 192.168.10.20:22.

When RouterA's gateway chassis is HostA, VM-A cannot ssh to VM-B via
100.7.50.236:22.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1935959

Title:
  [OVN]Unable to access port forwarding

Status in neutron:
  New

Bug description:
  ovn version is 21.03

  NetA(geneve) 192.168.10.0/24
  external NetB(flat) 100.7.50.0/24

  VM-A ip is 192.168.10.10 on host HostA
  VM-B ip is 192.168.10.20 on host HostA

  RouterA gateway network is NetB, NetA is internal interface.

  Allocate a floating IP 100.7.50.236 and configure port forwarding
  100.7.50.236:22 -> 192.168.10.20:22.

  When RouterA's gateway chassis is HostA, VM-A cannot ssh to VM-B via
  100.7.50.236:22.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1935959/+subscriptions




[Yahoo-eng-team] [Bug 1934420] [NEW] [OVN]ControllerAgent cannot be changed to ControllerGatewayAgent dynamically

2021-07-01 Thread ZhouHeng
Public bug reported:

I have 3 nodes. NodeA, NodeB, NodeC.

I execute the cmd 'ovs-vsctl set open_vswitch . external-ids:ovn-cms-options=enable-chassis-as-gw'
on NodeA and NodeB to mark them as gateway nodes.

show agent list by executing the command 'openstack network agent list':

NodeA is 'OVN Controller Gateway agent'
NodeB is 'OVN Controller Gateway agent'
NodeC is 'OVN Controller agent'

the result is the same as I expected.

Next I execute the cmd 'ovs-vsctl set open_vswitch . external-ids:ovn-cms-options=enable-chassis-as-gw'
on NodeC,

show agent list by executing the command 'openstack network agent list':

NodeA is 'OVN Controller Gateway agent'
NodeB is 'OVN Controller Gateway agent'
NodeC is 'OVN Controller agent'

NodeC is still 'OVN Controller agent', not 'OVN Controller Gateway
agent'. The command has been executed many times and the result is still
unchanged.

But as soon as I restart the neutron service, the result becomes
correct: NodeC is 'OVN Controller Gateway agent'.

Similarly, an 'OVN Controller Gateway agent' cannot be turned into an
'OVN Controller agent' without restarting the neutron service.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934420

Title:
  [OVN]ControllerAgent cannot be changed to ControllerGatewayAgent
  dynamically

Status in neutron:
  New

Bug description:
  I have 3 nodes. NodeA, NodeB, NodeC.

  I execute the cmd 'ovs-vsctl set open_vswitch . external-ids:ovn-cms-
  options=enable-chassis-as-gw' on NodeA and NodeB to mark them as
  gateway nodes.

  show agent list by executing the command 'openstack network agent
  list':

  NodeA is 'OVN Controller Gateway agent'
  NodeB is 'OVN Controller Gateway agent'
  NodeC is 'OVN Controller agent'

  the result is the same as I expected.

  Next I execute the cmd 'ovs-vsctl set open_vswitch . external-ids:ovn-
  cms-options=enable-chassis-as-gw' on NodeC,

  show agent list by executing the command 'openstack network agent
  list':

  NodeA is 'OVN Controller Gateway agent'
  NodeB is 'OVN Controller Gateway agent'
  NodeC is 'OVN Controller agent'

  NodeC is still 'OVN Controller agent', not 'OVN Controller Gateway
  agent'. The command has been executed many times and the result is
  still unchanged.

  But as soon as I restart the neutron service, the result becomes
  correct: NodeC is 'OVN Controller Gateway agent'.

  Similarly, an 'OVN Controller Gateway agent' cannot be turned into an
  'OVN Controller agent' without restarting the neutron service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1934420/+subscriptions



[Yahoo-eng-team] [Bug 1933401] [NEW] [OVN]The type of ovn controller is not recognized as a gateway agent

2021-06-23 Thread ZhouHeng
Public bug reported:

neutron use ovn as mechanism_driver
ovn version is 21.03

I have a gateway node, and I set it as a gateway node using "ovs-vsctl
set open_vswitch . external_ids:ovn-cms-options=enable-chassis-as-gw",
then restart ovn-controller.

But when I use "openstack network agent list" to see the agents, the
agent type is still "OVN Controller agent", not "OVN Controller Gateway
agent".

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1933401

Title:
  [OVN]The type of ovn controller is not recognized as a gateway agent

Status in neutron:
  In Progress

Bug description:
  neutron use ovn as mechanism_driver
  ovn version is 21.03

  I have a gateway node, and I set it as a gateway node using "ovs-vsctl
  set open_vswitch . external_ids:ovn-cms-options=enable-chassis-as-gw",
  then restart ovn-controller.

  But when I use "openstack network agent list" to see the agents, the
  agent type is still "OVN Controller agent", not "OVN Controller
  Gateway agent".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1933401/+subscriptions



[Yahoo-eng-team] [Bug 1910623] [NEW] neutron api worker process can not be described like "neutron-server: api worker"

2021-01-07 Thread ZhouHeng
Public bug reported:

According to the description of the
patch (https://review.opendev.org/c/openstack/neutron/+/637019), every
neutron-server worker process should be named according to its role, like:

25355 ?Ss 0:26 /usr/bin/python /usr/local/bin/neutron-server \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
25368 ?S  0:00 neutron-server: api worker
25369 ?S  0:00 neutron-server: api worker
25370 ?S  0:00 neutron-server: api worker
25371 ?S  0:00 neutron-server: api worker
25372 ?S  0:02 neutron-server: rpc worker
25373 ?S  0:02 neutron-server: rpc worker
25374 ?S  0:02 neutron-server: services worker

but my devstack environment is:
root@ubuntu:~# ps aux|grep neutron-server
stack 615922  0.0  0.3 160508 71640 ?Ss2020   4:32 
/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
stack 615940  0.2  0.6 268616 126284 ?   Sl2020  53:14 
/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
stack 615941  0.2  0.5 262176 119792 ?   Sl2020  45:56 
/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
stack 615942  0.5  0.5 408828 114948 ?   Sl2020 124:27 
neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini)
stack 615943  0.2  0.5 262628 106540 ?   Sl2020  63:14 
neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini)

The rpc worker processes display their names normally, but the api
worker processes do not.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1910623

Title:
  neutron api worker process can not be described like "neutron-server:
  api worker"

Status in neutron:
  New

Bug description:
  According to the description of the
  patch (https://review.opendev.org/c/openstack/neutron/+/637019), every
  neutron-server worker process should be named according to its role, like:

  25355 ?Ss 0:26 /usr/bin/python /usr/local/bin/neutron-server \
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  25368 ?S  0:00 neutron-server: api worker
  25369 ?S  0:00 neutron-server: api worker
  25370 ?S  0:00 neutron-server: api worker
  25371 ?S  0:00 neutron-server: api worker
  25372 ?S  0:02 neutron-server: rpc worker
  25373 ?S  0:02 neutron-server: rpc worker
  25374 ?S  0:02 neutron-server: services worker

  but my devstack environment is:
  root@ubuntu:~# ps aux|grep neutron-server
  stack 615922  0.0  0.3 160508 71640 ?Ss2020   4:32 
/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  stack 615940  0.2  0.6 268616 126284 ?   Sl2020  53:14 
/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  stack 615941  0.2  0.5 262176 119792 ?   Sl2020  45:56 
/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  stack 615942  0.5  0.5 408828 114948 ?   Sl2020 124:27 
neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini)
  stack 615943  0.2  0.5 262628 106540 ?   Sl2020  63:14 
neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini)

  The rpc worker processes display their names normally, but the api
  worker processes do not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1910623/+subscriptions



[Yahoo-eng-team] [Bug 1910334] [NEW] [OVN]after create port forwarding, floating ip status is not running

2021-01-05 Thread ZhouHeng
Public bug reported:

env
mechanism_drivers = ovn
service_plugins = ovn-router

After creating a port forwarding, the floating IP status is still DOWN.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1910334

Title:
  [OVN]after create port forwarding, floating ip status is not running

Status in neutron:
  New

Bug description:
  env
  mechanism_drivers = ovn
  service_plugins = ovn-router

  After creating a port forwarding, the floating IP status is still DOWN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1910334/+subscriptions



[Yahoo-eng-team] [Bug 1893314] [NEW] setting a neutron quota does not take effect

2020-08-28 Thread ZhouHeng
Public bug reported:

After creating a new project, set the project's quota concurrently, e.g. port=100.
Looking in the neutron DB, multiple quota records can be found for this
project with resource=port.

After that, we set the quota for this project, the quota value returned
each time remains unchanged.
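The duplicate rows are consistent with a check-then-insert that has no uniqueness guarantee. Below is a minimal, self-contained sketch of that race (table, columns, and locking are illustrative, not neutron's actual schema or code); a barrier forces two workers past the existence check before either writes, which is what concurrent requests can do in practice:

```python
import sqlite3
import threading

# Stand-in for the quota table: note there is NO unique constraint
# on (project_id, resource), mirroring the suspected schema gap.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE quotas (project_id TEXT, resource TEXT, lim INTEGER)")
db_lock = threading.Lock()
barrier = threading.Barrier(2)

def set_quota(project, resource, limit):
    # Step 1: check for an existing row (both workers see none).
    with db_lock:
        row = conn.execute(
            "SELECT rowid FROM quotas WHERE project_id=? AND resource=?",
            (project, resource)).fetchone()
    barrier.wait()  # force both workers past the check before either writes
    # Step 2: insert or update based on the now-stale check result.
    with db_lock:
        if row is None:
            conn.execute("INSERT INTO quotas VALUES (?, ?, ?)",
                         (project, resource, limit))
        else:
            conn.execute("UPDATE quotas SET lim=? WHERE rowid=?",
                         (limit, row[0]))
        conn.commit()

threads = [threading.Thread(target=set_quota, args=("proj-a", "port", 100))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

count = conn.execute(
    "SELECT COUNT(*) FROM quotas WHERE project_id='proj-a' AND resource='port'"
).fetchone()[0]
print(count)  # 2 duplicate rows; later updates then hit an arbitrary one
```

With two rows present, a subsequent update can change one row while reads keep returning the other, matching the "value returned each time remains unchanged" symptom.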

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1893314

Title:
  set neutron quota not take effect

Status in neutron:
  New

Bug description:
  After creating a new project, set the project's quota concurrently, e.g.
  port=100. Looking in the neutron DB, multiple quota records can then be
  found for this project with resource=port.

  After that, when we set the quota for this project, the value returned
  each time remains unchanged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1893314/+subscriptions



[Yahoo-eng-team] [Bug 1875849] [NEW] not ensure default security group exists when filter by project_id

2020-04-29 Thread ZhouHeng
Public bug reported:

When we filter security groups by 'tenant_id', neutron ensures that the
project has a default security group.
When we filter by 'project_id', neutron does not ensure that the project
has a default security group.
For a new project, filtering by 'tenant_id' therefore returns one security
group, while filtering by 'project_id' returns an empty list.
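The asymmetry can be shown with a minimal sketch (the function and its storage are illustrative, not neutron's real implementation): only the legacy 'tenant_id' filter key triggers the default-group guarantee.

```python
# Illustrative reproduction of the asymmetry, not neutron's actual code.
def list_security_groups(groups_by_project, filters):
    tenant = filters.get("tenant_id")
    if tenant is not None:
        # Only the legacy 'tenant_id' filter guarantees that the
        # project's default security group exists.
        groups_by_project.setdefault(tenant, ["default"])
    project = tenant or filters.get("project_id")
    return groups_by_project.get(project, [])

db = {}
print(list_security_groups(db, {"tenant_id": "new-project"}))   # ['default']
db = {}
print(list_security_groups(db, {"project_id": "new-project"}))  # []
```

Both filters name the same project, yet only the 'tenant_id' path returns the default group for a brand-new project.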

** Affects: neutron
 Importance: Medium
 Assignee: ZhouHeng (zhouhenglc)
 Status: In Progress


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1875849

Title:
  not ensure default security group exists when filter by project_id

Status in neutron:
  In Progress

Bug description:
  When we filter security groups by 'tenant_id', neutron ensures that the
  project has a default security group.
  When we filter by 'project_id', neutron does not ensure that the project
  has a default security group.
  For a new project, filtering by 'tenant_id' therefore returns one security
  group, while filtering by 'project_id' returns an empty list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1875849/+subscriptions



[Yahoo-eng-team] [Bug 1873708] [NEW] Floating IP was not removed from the router when the port forwarding deletion was completed

2020-04-19 Thread ZhouHeng
Public bug reported:

1. Create a floating IP A
2. Create port forwarding on floating IP A
3. Delete port forwarding
4. Add the same port forwarding again

After step 3, the floating IP can still be found in the router network
namespace.
After step 4, the floating IP's status is still DOWN.
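The expected cleanup can be sketched as toy bookkeeping (names are illustrative, not neutron's real code): deleting the last port forwarding should remove the floating IP address from the router namespace, so that re-adding the forwarding can configure it cleanly and bring the FIP back up.

```python
# Toy model of the expected namespace bookkeeping (illustrative only).
class Router:
    def __init__(self):
        self.namespace_addresses = set()

class FloatingIP:
    def __init__(self, address, router):
        self.address = address
        self.router = router
        self.status = "DOWN"
        self.port_forwardings = []

    def add_port_forwarding(self, pf):
        self.router.namespace_addresses.add(self.address)
        self.port_forwardings.append(pf)
        self.status = "ACTIVE"

    def delete_port_forwarding(self, pf):
        self.port_forwardings.remove(pf)
        if not self.port_forwardings:
            # The reported bug: this removal did not happen, so the address
            # lingered in the namespace and step 4 left the FIP DOWN.
            self.router.namespace_addresses.discard(self.address)
            self.status = "DOWN"

router = Router()
fip = FloatingIP("203.0.113.10", router)
pf = {"protocol": "tcp", "external_port": 2222}
fip.add_port_forwarding(pf)        # step 2
fip.delete_port_forwarding(pf)     # step 3: address should leave the namespace
print(router.namespace_addresses)  # set()
fip.add_port_forwarding(pf)        # step 4: FIP should come back up
print(fip.status)                  # ACTIVE
```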

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1873708

Title:
  Floating IP was not removed from the router when the port forwarding
  deletion was completed

Status in neutron:
  New

Bug description:
  1. Create a floating IP A
  2. Create port forwarding on floating IP A
  3. Delete port forwarding
  4. Add the same port forwarding again

  After step 3, the floating IP can still be found in the router network
  namespace.
  After step 4, the floating IP's status is still DOWN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1873708/+subscriptions



[Yahoo-eng-team] [Bug 1870723] [NEW] vm cannot be created after a large number of vms are removed from the same node

2020-04-03 Thread ZhouHeng
Public bug reported:

vm cannot be created after a large number of virtual machines are
removed from the same node

1. Create a security group that has a remote security group rule.
2. Create 50 VMs on the same node (e.g. node01), all using the security
group created in step 1.
3. Delete all VMs created in step 2.
4. Create some VMs on node01.
5. All VM creation fails with the error: Build instance  aborted: Failed
to allocate the network(s), ...

By observing the database while the VMs are being created, it is found
that the port's L2 block has not been removed.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

** Summary changed:

- vm cannot be created after a large number of virtual machines are removed 
from the same node
+ vm cannot be created after a large number of vms are removed from the same 
node

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870723

Title:
  vm cannot be created after a large number of vms are removed from the
  same node

Status in neutron:
  New

Bug description:
  vm cannot be created after a large number of virtual machines are
  removed from the same node

  1. Create a security group that has a remote security group rule.
  2. Create 50 VMs on the same node (e.g. node01), all using the security
  group created in step 1.
  3. Delete all VMs created in step 2.
  4. Create some VMs on node01.
  5. All VM creation fails with the error: Build instance  aborted: Failed
  to allocate the network(s), ...

  By observing the database while the VMs are being created, it is found
  that the port's L2 block has not been removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1870723/+subscriptions



[Yahoo-eng-team] [Bug 1845900] [NEW] ha router appear double vip

2019-09-29 Thread ZhouHeng
Public bug reported:

Environment:
Deployed with kolla, with three controllers.

Action:
Select one router whose VIP is on control01.
Execute docker stop neutron_l3_agent on control01.

Found:
control01 and control02 both have the VIP.

I suspect that the neutron account does not have permission to send a
signal to keepalived, which causes keepalived to be terminated forcibly.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845900

Title:
  ha router appear double vip

Status in neutron:
  New

Bug description:
  Environment:
  Deployed with kolla, with three controllers.

  Action:
  Select one router whose VIP is on control01.
  Execute docker stop neutron_l3_agent on control01.

  Found:
  control01 and control02 both have the VIP.

  I suspect that the neutron account does not have permission to send a
  signal to keepalived, which causes keepalived to be terminated forcibly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1845900/+subscriptions



[Yahoo-eng-team] [Bug 1838587] [NEW] request neutron with Incorrect body key return 500

2019-07-31 Thread ZhouHeng
eutron/neutron/pecan_wsgi/controllers/utils.py", line 76, in 
wrapped
Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation return f(*args, **kwargs)
Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/opt/stack/neutron/neutron/pecan_wsgi/controllers/resource.py", line 62, in put
Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation resources = 
request.context['resources']
Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation KeyError: 'resources'

** Affects: neutron
 Importance: Undecided
 Assignee: ZhouHeng (zhouhenglc)
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838587

Title:
  request neutron with Incorrect body key return 500

Status in neutron:
  New

Bug description:
  In current neutron, when I update resource with incorrect body,
  neutron server return 500 NeutronError. It should be fixed
  400(BadRequest)

  example:
  PUT /v2.0/networks/
  body:
  {
  "subnet": {
  ...
   }
  }
  the neutron server returns:
  {"NeutronError": {"message": "Request Failed: internal server error while 
processing your request.", "type": "HTTPInternalServerError", "detail": ""}}
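One way such requests can be rejected early is to validate that the top-level body key matches the resource before dispatching; the helper below is a hypothetical sketch, not neutron's actual pecan hook, showing the 400 response the report argues for instead of the KeyError-driven 500.

```python
# Hypothetical validation helper (not neutron's real code): reject a body
# whose top-level key does not match the resource with 400, instead of
# letting a later KeyError ('resources') surface as a 500.
def validate_update_body(resource, body):
    if not isinstance(body, dict) or resource not in body:
        return 400, {"NeutronError": {
            "type": "HTTPBadRequest",
            "message": "Request body must contain the key '%s'" % resource,
            "detail": ""}}
    return 200, body[resource]

status, payload = validate_update_body("network", {"subnet": {}})
print(status)  # 400 -- the mismatched 'subnet' key is caught up front
```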

  
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/pecan/core.py", line 683, in __call__
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation self.invoke_controller(controller, 
args, kwargs, state)
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/pecan/core.py", line 574, in invoke_controller
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation result = controller(*args, **kwargs)
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/neutron_lib/db/api.py", line 139, in wrapped
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation setattr(e, '_RETRY_EXCEEDED', True)
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation self.force_reraise()
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation six.reraise(self.type_, self.value, 
self.tb)
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/neutron_lib/db/api.py", line 135, in wrapped
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation return f(*args, **kwargs)
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 154, in wrapper
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation ectxt.value = e.inner_exc
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation self.force_reraise()
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation six.reraise(self.type_, self.value, 
self.tb)
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 142, in wrapper
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation return f(*args, **kwargs)
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/site-packages/neutron_lib/db/api.py", line 183, in wrapped
  Jul 31 20:56:08 nfs neutron-server[11250]: ERROR 
neutron.pecan_wsgi.hooks.translation LOG.debug("Retry wrapper got retriable 
exception: %s", e)
  Jul 

[Yahoo-eng-team] [Bug 1838487] [NEW] network-ip-availabilities api return data is incorrect

2019-07-30 Thread ZhouHeng
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2019-07-29 13:09:43
SHA: aaae83acaac5f58918ff53613b4de20bbfe30d81
Source: 
https://opendev.org/openstack/neutron-lib/src/api-ref/source/v2/index.rst
URL: https://docs.openstack.org/api-ref/network/v2/index.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838487

Title:
  network-ip-availabilities api return data is incorrect

Status in neutron:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-07-29 13:09:43
  SHA: aaae83acaac5f58918ff53613b4de20bbfe30d81
  Source: 
https://opendev.org/openstack/neutron-lib/src/api-ref/source/v2/index.rst
  URL: https://docs.openstack.org/api-ref/network/v2/index.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1838487/+subscriptions



[Yahoo-eng-team] [Bug 1838396] [NEW] update port receive 500

2019-07-30 Thread ZhouHeng
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2019-07-29 13:09:43
SHA: aaae83acaac5f58918ff53613b4de20bbfe30d81
Source: 
https://opendev.org/openstack/neutron-lib/src/api-ref/source/v2/index.rst
URL: https://docs.openstack.org/api-ref/network/v2/index.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838396

Title:
  update port receive 500

Status in neutron:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-07-29 13:09:43
  SHA: aaae83acaac5f58918ff53613b4de20bbfe30d81
  Source: 
https://opendev.org/openstack/neutron-lib/src/api-ref/source/v2/index.rst
  URL: https://docs.openstack.org/api-ref/network/v2/index.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1838396/+subscriptions
