Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2023-11-16 Thread Terry Wilson
On Tue, Jan 24, 2023 at 8:00 AM Ilya Maximets  wrote:
>
> On 1/24/23 14:12, Vladislav Odintsov wrote:
> > Hi Ilya,
> >
> > could you please take a look on this?
> > Maybe you can advice any direction how to investigate this issue?
> >
> > Thanks in advance.
> >
> > Regards,
> > Vladislav Odintsov
> >
> >> On 24 Nov 2022, at 21:10, Anton Vazhnetsov  wrote:
> >>
> >> Hi, Terry!
> >>
> >> In continuation to our yesterday’s conversation [0], we were able to 
> >> reproduce the issue with KeyError. We found that the problem is not 
> >> connected with ovsdb-server load but it appears when the ovsdb-server 
> >> schema is converted online (it even doesn’t matter whether the real ovs 
> >> schema is changed) while the active connection persists.
> >> Please use next commands to reproduce it:
> >>
> >> # in terminal 1
> >>
> >> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
> >> ovsdb-server --remote punix://$(pwd)/ovs.sock  
> >> $(pwd)/ovs.db -vconsole:dbg
> >>
> >>
> >> # in terminal 2. run python shell
> >> python3
> >> # setup connection
> >> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
> >> from ovsdbapp.backend.ovs_idl import connection
> >>
> >> remote = "unix:///"
> >>
> >> def get_idl():
> >>"""Connection getter."""
> >>
> >>idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
> >>  leader_only=False)
> >>return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
> >>
> >> idl = get_idl()
> >>
> >>
> >> # in terminal 1
> >> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
> >>
> >> # in terminal 2 python shell:
> >> idl.ls_add("test").execute()
> >>
> >>
> >> We get following traceback:
> >>
> >> Traceback (most recent call last):
> >>  File 
> >> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", 
> >> line 131, in run
> >>txn.results.put(txn.do_commit())
> >>  File 
> >> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
> >>  line 143, in do_commit
> >>self.post_commit(txn)
> >>  File 
> >> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
> >>  line 73, in post_commit
> >>command.post_commit(txn)
> >>  File 
> >> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
> >> line 94, in post_commit
> >>row = self.api.tables[self.table_name].rows[real_uuid]
> >>  File "/usr/lib64/python3.6/collections/__init__.py", line 991, in 
> >> __getitem__
> >>raise KeyError(key)
> >> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331')
> >>
> >> In ovsdb-server debug logs we see that update2 or update3 messages are not 
> >> sent from server in response to client’s transaction, just reply with 
> >> result UUID:
> >> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
> >> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
> >> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
> >> method="transact", 
> >> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
> >>  id=5
> >> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
> >> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
> >>
> >> We checked same with ovn-nbctl running in daemon mode and found that the 
> >> problem is not reproduced (ovsdb-server after database conversion sends 
> >> out update3 message to ovn-nbctl daemon process in response to 
> >> transaction, for example ovs-appctl -t  run ls-add 
> >> test-ls):
>
> If the update3 is not sent, it means that the client doesn't monitor
> this table.  You need to look at monitor requests sent and replied
> to figure out the difference.
>
> Since the database conversion is involved, server will close all
> monitors on schema update and notify all clients that are db_change_aware.
> Clients should re-send monitor requests at this point.  If clients
> are not db_change_aware, server will just disconnect them, so they
> can re-connect and see the new schema and send new monitor requests.
>
> On a quick glance I don't see python idl handling 'monitor_canceled'
> notification.  That is most likely a root cause.  Python IDL claims
> to be db_change_aware, but it doesn't seem to be.
>

Reviving a very old thread, but yes, this is definitely the issue. I
just tested and confirmed that we receive the monitor_canceled
notification and do not do anything with it. Parsing it and calling
self.force_reconnect() seems to fix the issue. I can send a patch shortly.
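
A minimal sketch of the kind of check this implies (illustrative only; the
actual patch may look different and should probably also verify the monitor
id carried in the notification's params):

import ovs.jsonrpc

def maybe_handle_monitor_canceled(idl, msg):
    # "idl" is an ovs.db.idl.Idl, "msg" a parsed ovs.jsonrpc.Message seen by
    # the IDL's message loop.  After an online schema conversion, a
    # db_change_aware server cancels our monitor instead of dropping the
    # connection, so force a reconnect to pick up the new schema and
    # re-issue the monitor requests.
    if (msg.type == ovs.jsonrpc.Message.T_NOTIFY
            and msg.method == "monitor_canceled"):
        idl.force_reconnect()
        return True   # message handled
    return False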

Terry


> Best regards, Ilya Maximets.
>
> >> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
> >> method="transact", 
> >> params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
> >>  run ls-add test-ls","op":"comment"}], id=5
> >> 2022-11-24T17:54:51Z|00624|json

Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2023-01-24 Thread Vladislav Odintsov
Thanks, Ilya, for the quick and useful response!
We'll dig into the monitor/db_change_aware logic.

Regards,
Vladislav Odintsov

> On 24 Jan 2023, at 17:00, Ilya Maximets  wrote:
> 
> On 1/24/23 14:12, Vladislav Odintsov wrote:
>> Hi Ilya,
>> 
>> could you please take a look on this?
>> Maybe you can advice any direction how to investigate this issue?
>> 
>> Thanks in advance.
>> 
>> Regards,
>> Vladislav Odintsov
>> 
>>> On 24 Nov 2022, at 21:10, Anton Vazhnetsov  wrote:
>>> 
>>> Hi, Terry!
>>> 
>>> In continuation to our yesterday’s conversation [0], we were able to 
>>> reproduce the issue with KeyError. We found that the problem is not 
>>> connected with ovsdb-server load but it appears when the ovsdb-server 
>>> schema is converted online (it even doesn’t matter whether the real ovs 
>>> schema is changed) while the active connection persists.
>>> Please use next commands to reproduce it:
>>> 
>>> # in terminal 1
>>> 
>>> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
>>> ovsdb-server --remote punix://$(pwd)/ovs.sock  
>>> $(pwd)/ovs.db -vconsole:dbg
>>> 
>>> 
>>> # in terminal 2. run python shell
>>> python3
>>> # setup connection
>>> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
>>> from ovsdbapp.backend.ovs_idl import connection
>>> 
>>> remote = "unix:///"
>>> 
>>> def get_idl():
>>>"""Connection getter."""
>>> 
>>>idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
>>>  leader_only=False)
>>>return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
>>> 
>>> idl = get_idl()
>>> 
>>> 
>>> # in terminal 1
>>> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
>>> 
>>> # in terminal 2 python shell:
>>> idl.ls_add("test").execute()
>>> 
>>> 
>>> We get following traceback:
>>> 
>>> Traceback (most recent call last):
>>>  File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", 
>>> line 131, in run
>>>txn.results.put(txn.do_commit())
>>>  File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
>>> line 143, in do_commit
>>>self.post_commit(txn)
>>>  File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
>>> line 73, in post_commit
>>>command.post_commit(txn)
>>>  File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
>>> line 94, in post_commit
>>>row = self.api.tables[self.table_name].rows[real_uuid]
>>>  File "/usr/lib64/python3.6/collections/__init__.py", line 991, in 
>>> __getitem__
>>>raise KeyError(key)
>>> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331') 
>>> 
>>> In ovsdb-server debug logs we see that update2 or update3 messages are not 
>>> sent from server in response to client’s transaction, just reply with 
>>> result UUID:
>>> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
>>> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
>>> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
>>> method="transact", 
>>> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
>>>  id=5
>>> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
>>> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
>>> 
>>> We checked same with ovn-nbctl running in daemon mode and found that the 
>>> problem is not reproduced (ovsdb-server after database conversion sends out 
>>> update3 message to ovn-nbctl daemon process in response to transaction, for 
>>> example ovs-appctl -t  run ls-add test-ls):
> 
> If the update3 is not sent, it means that the client doesn't monitor
> this table.  You need to look at monitor requests sent and replied
> to figure out the difference.
> 
> Since the database conversion is involved, server will close all
> monitors on schema update and notify all clients that are db_change_aware.
> Clients should re-send monitor requests at this point.  If clients
> are not db_change_aware, server will just disconnect them, so they
> can re-connect and see the new schema and send new monitor requests.
> 
> On a quick glance I don't see python idl handling 'monitor_canceled'
> notification.  That is most likely a root cause.  Python IDL claims
> to be db_change_aware, but it doesn't seem to be.
> 
> Best regards, Ilya Maximets.
> 
>>> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
>>> method="transact", 
>>> params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
>>>  run ls-add test-ls","op":"comment"}], id=5
>>> 2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
>>> method="update3", 
>>> params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"tes

Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2023-01-24 Thread Ilya Maximets
On 1/24/23 14:12, Vladislav Odintsov wrote:
> Hi Ilya,
> 
> could you please take a look on this?
> Maybe you can advice any direction how to investigate this issue?
> 
> Thanks in advance.
> 
> Regards,
> Vladislav Odintsov
> 
>> On 24 Nov 2022, at 21:10, Anton Vazhnetsov  wrote:
>>
>> Hi, Terry!
>>
>> In continuation to our yesterday’s conversation [0], we were able to 
>> reproduce the issue with KeyError. We found that the problem is not 
>> connected with ovsdb-server load but it appears when the ovsdb-server schema 
>> is converted online (it even doesn’t matter whether the real ovs schema is 
>> changed) while the active connection persists.
>> Please use next commands to reproduce it:
>>
>> # in terminal 1
>>
>> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
>> ovsdb-server --remote punix://$(pwd)/ovs.sock  
>> $(pwd)/ovs.db -vconsole:dbg
>>
>>
>> # in terminal 2. run python shell
>> python3
>> # setup connection
>> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
>> from ovsdbapp.backend.ovs_idl import connection
>>
>> remote = "unix:///"
>>
>> def get_idl():
>>    """Connection getter."""
>>
>>    idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
>>  leader_only=False)
>>    return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
>>
>> idl = get_idl()
>>
>>
>> # in terminal 1
>> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
>>
>> # in terminal 2 python shell:
>> idl.ls_add("test").execute()
>>
>>
>> We get following traceback:
>>
>> Traceback (most recent call last):
>>  File 
>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", 
>> line 131, in run
>>    txn.results.put(txn.do_commit())
>>  File 
>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
>> line 143, in do_commit
>>    self.post_commit(txn)
>>  File 
>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
>> line 73, in post_commit
>>    command.post_commit(txn)
>>  File 
>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", line 
>> 94, in post_commit
>>    row = self.api.tables[self.table_name].rows[real_uuid]
>>  File "/usr/lib64/python3.6/collections/__init__.py", line 991, in 
>> __getitem__
>>    raise KeyError(key)
>> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331') 
>>
>> In ovsdb-server debug logs we see that update2 or update3 messages are not 
>> sent from server in response to client’s transaction, just reply with result 
>> UUID:
>> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
>> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
>> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
>> method="transact", 
>> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
>>  id=5
>> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
>> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
>>
>> We checked same with ovn-nbctl running in daemon mode and found that the 
>> problem is not reproduced (ovsdb-server after database conversion sends out 
>> update3 message to ovn-nbctl daemon process in response to transaction, for 
>> example ovs-appctl -t  run ls-add test-ls):

If the update3 is not sent, it means that the client doesn't monitor
this table.  You need to look at the monitor requests sent and the
replies to figure out the difference.

Since a database conversion is involved, the server will close all
monitors on the schema update and notify all clients that are
db_change_aware.  Clients should re-send their monitor requests at this
point.  If clients are not db_change_aware, the server will just
disconnect them, so they can re-connect, see the new schema and send
new monitor requests.

At a quick glance I don't see the Python IDL handling the
'monitor_canceled' notification.  That is most likely the root cause.
The Python IDL claims to be db_change_aware, but it doesn't seem to be.
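
For reference, roughly what this looks like on the wire (method names as
documented in ovsdb-server(7); the monitor id is the same
["monid","OVN_Northbound"] that appears in the update3 log below, while the
other values are illustrative, not captured from this reproduction):

    client -> server:  {"method":"set_db_change_aware","params":[true],"id":4}
    ... online schema conversion happens ...
    server -> client:  {"method":"monitor_canceled",
                        "params":[["monid","OVN_Northbound"]],"id":null}

A client that sent set_db_change_aware must react to monitor_canceled by
re-fetching the schema and re-sending its monitor requests (or simply
reconnecting); otherwise it keeps running with a dead monitor, which is
exactly what the KeyError above shows.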

Best regards, Ilya Maximets.

>> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
>> method="transact", 
>> params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
>>  run ls-add test-ls","op":"comment"}], id=5
>> 2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
>> method="update3", 
>> params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"test-ls"]
>> 2022-11-24T17:54:51Z|00625|jsonrpc|DBG|unix#7: send reply, 
>> result=[{"uuid":["uuid","0b147f2c-248d-496a-b718-a5328d3c2995"]},{}], id=5
>>
>> So it seems that the problem is in python-ovs, not in ovsdb-server.
>> We tested with ovsdb-server running version 2.17.3 and python-ovs 2.13.5 and 
>> also python-ovs 2.17

Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2023-01-24 Thread Vladislav Odintsov
Hi Ilya,

could you please take a look at this?
Maybe you can advise on a direction for investigating this issue?

Thanks in advance.

Regards,
Vladislav Odintsov

> On 24 Nov 2022, at 21:10, Anton Vazhnetsov  wrote:
> 
> Hi, Terry!
> 
> In continuation to our yesterday’s conversation [0], we were able to 
> reproduce the issue with KeyError. We found that the problem is not connected 
> with ovsdb-server load but it appears when the ovsdb-server schema is 
> converted online (it even doesn’t matter whether the real ovs schema is 
> changed) while the active connection persists. 
> Please use next commands to reproduce it:
> 
> # in terminal 1
> 
> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
> ovsdb-server --remote punix://$(pwd)/ovs.sock $(pwd)/ovs.db -vconsole:dbg
> 
> 
> # in terminal 2. run python shell
> python3
> # setup connection
> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
> from ovsdbapp.backend.ovs_idl import connection
> 
> remote = "unix:///"
> 
> def get_idl():
>"""Connection getter."""
> 
>idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
>  leader_only=False)
>return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
> 
> idl = get_idl()
> 
> 
> # in terminal 1
> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
> 
> # in terminal 2 python shell:
> idl.ls_add("test").execute()
> 
> 
> We get following traceback:
> 
> Traceback (most recent call last):
>  File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", 
> line 131, in run
>txn.results.put(txn.do_commit())
>  File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
> line 143, in do_commit
>self.post_commit(txn)
>  File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
> line 73, in post_commit
>command.post_commit(txn)
>  File "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
> line 94, in post_commit
>row = self.api.tables[self.table_name].rows[real_uuid]
>  File "/usr/lib64/python3.6/collections/__init__.py", line 991, in __getitem__
>raise KeyError(key)
> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331') 
> 
> In ovsdb-server debug logs we see that update2 or update3 messages are not 
> sent from server in response to client’s transaction, just reply with result 
> UUID:
> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
> method="transact", 
> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
>  id=5
> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
> 
> We checked same with ovn-nbctl running in daemon mode and found that the 
> problem is not reproduced (ovsdb-server after database conversion sends out 
> update3 message to ovn-nbctl daemon process in response to transaction, for 
> example ovs-appctl -t  run ls-add test-ls):
> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
> method="transact", 
> params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
>  run ls-add test-ls","op":"comment"}], id=5
> 2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
> method="update3", 
> params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"test-ls"]
> 2022-11-24T17:54:51Z|00625|jsonrpc|DBG|unix#7: send reply, 
> result=[{"uuid":["uuid","0b147f2c-248d-496a-b718-a5328d3c2995"]},{}], id=5
> 
> So it seems that the problem is in python-ovs, not in ovsdb-server.
> We tested with ovsdb-server running version 2.17.3 and python-ovs 2.13.5 and 
> also python-ovs 2.17.3, the behaviour is the same.
> 
> Do you have any ideas what can be a reason for such behaviour?
> 
> 0: 
> https://review.opendev.org/c/openstack/ovsdbapp/+/865454/comments/674c57e6_3849591b
> 
> Regards, Anton.



Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-12-02 Thread Vladislav Odintsov
Should I not have sent the patches as RFC? I thought the RFC tag is used when
some conversation/advice about the pending changes is needed.
Anyway, as you requested, I've sent a normal patch series:
https://patchwork.ozlabs.org/project/ovn/cover/20221202173147.3032702-1-odiv...@gmail.com/

Regards,
Vladislav Odintsov

> On 2 Dec 2022, at 19:05, Numan Siddique  wrote:
> 
> On Fri, Dec 2, 2022 at 8:27 AM Vladislav Odintsov  > wrote:
>> 
>> Hi Numan,
>> 
>> only a part of those patched supposed to be applied. Another part present in 
>> the RFC just to show some PoC/idea, should I repost all of the patches?
> 
> I'd say yes since you've marked the patches as RFC.
> 
> Numan
> 
>> 
>> Regards,
>> Vladislav Odintsov
>> 
>>> On 2 Dec 2022, at 00:20, Numan Siddique  wrote:
>>> 
>>> On Thu, Dec 1, 2022 at 3:58 PM Vladislav Odintsov >> > wrote:
 
 Hi,
 
 is it possible to consider any of the problems written below and here [0] 
 for the possible fixes to be included in upcoming OVN/OVS releases?
>>> 
>>> Hi,
>>> 
>>> I didn't get a chance to look at the patches.  But if some of them are
>>> fixing any issues, we can definitely backport them,
>>> 
>>> I'd suggest removing the RFC tag and reposting the patches.
>>> 
>>> Thanks
>>> Numan
>>> 
 
 Thanks.
 
 0: 
 https://patchwork.ozlabs.org/project/ovn/cover/20221118162050.3019353-1-odiv...@gmail.com/
 
 Regards,
 Vladislav Odintsov
 
> On 24 Nov 2022, at 20:57, Anton Vazhnetsov  wrote:
> 
> Hi, Terry!
> 
> In continuation to our yesterday’s conversation [0], we were able to 
> reproduce the issue with KeyError. We found that the problem is not 
> connected with ovsdb-server load but it appears when the ovsdb-server 
> schema is converted online (it even doesn’t matter whether the real ovs 
> schema is changed) while the active connection persists.
> Please use next commands to reproduce it:
> 
> # in terminal 1
> 
> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
> ovsdb-server --remote punix://$(pwd)/ovs.sock $(pwd)/ovs.db -vconsole:dbg
> 
> 
> # in terminal 2. run python shell
> python3
> # setup connection
> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
> from ovsdbapp.backend.ovs_idl import connection
> 
> remote = "unix:///"
> 
> def get_idl():
>  """Connection getter."""
> 
>  idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
>leader_only=False)
>  return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
> 
> idl = get_idl()
> 
> 
> # in terminal 1
> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
> 
> # in terminal 2 python shell:
> idl.ls_add("test").execute()
> 
> 
> We get following traceback:
> 
> Traceback (most recent call last):
> File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
>  line 131, in run
>  txn.results.put(txn.do_commit())
> File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
>  line 143, in do_commit
>  self.post_commit(txn)
> File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
>  line 73, in post_commit
>  command.post_commit(txn)
> File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
> line 94, in post_commit
>  row = self.api.tables[self.table_name].rows[real_uuid]
> File "/usr/lib64/python3.6/collections/__init__.py", line 991, in 
> __getitem__
>  raise KeyError(key)
> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331')
> 
> In ovsdb-server debug logs we see that update2 or update3 messages are 
> not sent from server in response to client’s transaction, just reply with 
> result UUID:
> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
> method="transact", 
> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
>  id=5
> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
> 
> We checked same with ovn-nbctl running in daemon mode and found that the 
> problem is not reproduced (ovsdb-server after database conversion sends 
> out update3 message to ovn-nbctl daemon process in response to 
> transaction, for example ovs-appctl -t  run ls-add 
> test-ls):
> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
> method="transa

Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-12-02 Thread Numan Siddique
On Fri, Dec 2, 2022 at 8:27 AM Vladislav Odintsov  wrote:
>
> Hi Numan,
>
> only a part of those patched supposed to be applied. Another part present in 
> the RFC just to show some PoC/idea, should I repost all of the patches?

I'd say yes since you've marked the patches as RFC.

Numan

>
> Regards,
> Vladislav Odintsov
>
> > On 2 Dec 2022, at 00:20, Numan Siddique  wrote:
> >
> > On Thu, Dec 1, 2022 at 3:58 PM Vladislav Odintsov  > > wrote:
> >>
> >> Hi,
> >>
> >> is it possible to consider any of the problems written below and here [0] 
> >> for the possible fixes to be included in upcoming OVN/OVS releases?
> >
> > Hi,
> >
> > I didn't get a chance to look at the patches.  But if some of them are
> > fixing any issues, we can definitely backport them,
> >
> > I'd suggest removing the RFC tag and reposting the patches.
> >
> > Thanks
> > Numan
> >
> >>
> >> Thanks.
> >>
> >> 0: 
> >> https://patchwork.ozlabs.org/project/ovn/cover/20221118162050.3019353-1-odiv...@gmail.com/
> >>
> >> Regards,
> >> Vladislav Odintsov
> >>
> >>> On 24 Nov 2022, at 20:57, Anton Vazhnetsov  wrote:
> >>>
> >>> Hi, Terry!
> >>>
> >>> In continuation to our yesterday’s conversation [0], we were able to 
> >>> reproduce the issue with KeyError. We found that the problem is not 
> >>> connected with ovsdb-server load but it appears when the ovsdb-server 
> >>> schema is converted online (it even doesn’t matter whether the real ovs 
> >>> schema is changed) while the active connection persists.
> >>> Please use next commands to reproduce it:
> >>>
> >>> # in terminal 1
> >>>
> >>> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
> >>> ovsdb-server --remote punix://$(pwd)/ovs.sock $(pwd)/ovs.db -vconsole:dbg
> >>>
> >>>
> >>> # in terminal 2. run python shell
> >>> python3
> >>> # setup connection
> >>> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
> >>> from ovsdbapp.backend.ovs_idl import connection
> >>>
> >>> remote = "unix:///"
> >>>
> >>> def get_idl():
> >>>   """Connection getter."""
> >>>
> >>>   idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
> >>> leader_only=False)
> >>>   return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
> >>>
> >>> idl = get_idl()
> >>>
> >>>
> >>> # in terminal 1
> >>> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
> >>>
> >>> # in terminal 2 python shell:
> >>> idl.ls_add("test").execute()
> >>>
> >>>
> >>> We get following traceback:
> >>>
> >>> Traceback (most recent call last):
> >>> File 
> >>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
> >>>  line 131, in run
> >>>   txn.results.put(txn.do_commit())
> >>> File 
> >>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
> >>>  line 143, in do_commit
> >>>   self.post_commit(txn)
> >>> File 
> >>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
> >>>  line 73, in post_commit
> >>>   command.post_commit(txn)
> >>> File 
> >>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
> >>> line 94, in post_commit
> >>>   row = self.api.tables[self.table_name].rows[real_uuid]
> >>> File "/usr/lib64/python3.6/collections/__init__.py", line 991, in 
> >>> __getitem__
> >>>   raise KeyError(key)
> >>> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331')
> >>>
> >>> In ovsdb-server debug logs we see that update2 or update3 messages are 
> >>> not sent from server in response to client’s transaction, just reply with 
> >>> result UUID:
> >>> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
> >>> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
> >>> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
> >>> method="transact", 
> >>> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
> >>>  id=5
> >>> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
> >>> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
> >>>
> >>> We checked same with ovn-nbctl running in daemon mode and found that the 
> >>> problem is not reproduced (ovsdb-server after database conversion sends 
> >>> out update3 message to ovn-nbctl daemon process in response to 
> >>> transaction, for example ovs-appctl -t  run ls-add 
> >>> test-ls):
> >>> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
> >>> method="transact", 
> >>> params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
> >>>  run ls-add test-ls","op":"comment"}], id=5
> >>> 2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
> >>> method="update3", 
> >>> params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328

Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-12-02 Thread Vladislav Odintsov
Hi Numan,

only a part of those patches is supposed to be applied. The other part is
present in the RFC just to show a PoC/idea. Should I repost all of the patches?

Regards,
Vladislav Odintsov

> On 2 Dec 2022, at 00:20, Numan Siddique  wrote:
> 
> On Thu, Dec 1, 2022 at 3:58 PM Vladislav Odintsov  > wrote:
>> 
>> Hi,
>> 
>> is it possible to consider any of the problems written below and here [0] 
>> for the possible fixes to be included in upcoming OVN/OVS releases?
> 
> Hi,
> 
> I didn't get a chance to look at the patches.  But if some of them are
> fixing any issues, we can definitely backport them,
> 
> I'd suggest removing the RFC tag and reposting the patches.
> 
> Thanks
> Numan
> 
>> 
>> Thanks.
>> 
>> 0: 
>> https://patchwork.ozlabs.org/project/ovn/cover/20221118162050.3019353-1-odiv...@gmail.com/
>> 
>> Regards,
>> Vladislav Odintsov
>> 
>>> On 24 Nov 2022, at 20:57, Anton Vazhnetsov  wrote:
>>> 
>>> Hi, Terry!
>>> 
>>> In continuation to our yesterday’s conversation [0], we were able to 
>>> reproduce the issue with KeyError. We found that the problem is not 
>>> connected with ovsdb-server load but it appears when the ovsdb-server 
>>> schema is converted online (it even doesn’t matter whether the real ovs 
>>> schema is changed) while the active connection persists.
>>> Please use next commands to reproduce it:
>>> 
>>> # in terminal 1
>>> 
>>> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
>>> ovsdb-server --remote punix://$(pwd)/ovs.sock $(pwd)/ovs.db -vconsole:dbg
>>> 
>>> 
>>> # in terminal 2. run python shell
>>> python3
>>> # setup connection
>>> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
>>> from ovsdbapp.backend.ovs_idl import connection
>>> 
>>> remote = "unix:///"
>>> 
>>> def get_idl():
>>>   """Connection getter."""
>>> 
>>>   idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
>>> leader_only=False)
>>>   return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
>>> 
>>> idl = get_idl()
>>> 
>>> 
>>> # in terminal 1
>>> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
>>> 
>>> # in terminal 2 python shell:
>>> idl.ls_add("test").execute()
>>> 
>>> 
>>> We get following traceback:
>>> 
>>> Traceback (most recent call last):
>>> File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", 
>>> line 131, in run
>>>   txn.results.put(txn.do_commit())
>>> File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
>>> line 143, in do_commit
>>>   self.post_commit(txn)
>>> File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
>>> line 73, in post_commit
>>>   command.post_commit(txn)
>>> File 
>>> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
>>> line 94, in post_commit
>>>   row = self.api.tables[self.table_name].rows[real_uuid]
>>> File "/usr/lib64/python3.6/collections/__init__.py", line 991, in 
>>> __getitem__
>>>   raise KeyError(key)
>>> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331')
>>> 
>>> In ovsdb-server debug logs we see that update2 or update3 messages are not 
>>> sent from server in response to client’s transaction, just reply with 
>>> result UUID:
>>> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
>>> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
>>> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
>>> method="transact", 
>>> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
>>>  id=5
>>> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
>>> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
>>> 
>>> We checked same with ovn-nbctl running in daemon mode and found that the 
>>> problem is not reproduced (ovsdb-server after database conversion sends out 
>>> update3 message to ovn-nbctl daemon process in response to transaction, for 
>>> example ovs-appctl -t  run ls-add test-ls):
>>> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
>>> method="transact", 
>>> params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
>>>  run ls-add test-ls","op":"comment"}], id=5
>>> 2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
>>> method="update3", 
>>> params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"test-ls"]
>>> 2022-11-24T17:54:51Z|00625|jsonrpc|DBG|unix#7: send reply, 
>>> result=[{"uuid":["uuid","0b147f2c-248d-496a-b718-a5328d3c2995"]},{}], id=5
>>> 
>>> So it seems that the problem is in python-ovs, not in ovsdb-server.
>>> 
>>> Do you have any ideas what can be a reason for such behaviour?
>>> 
>>> 0: 

Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-12-01 Thread Numan Siddique
On Thu, Dec 1, 2022 at 3:58 PM Vladislav Odintsov  wrote:
>
> Hi,
>
> is it possible to consider any of the problems written below and here [0] for 
> the possible fixes to be included in upcoming OVN/OVS releases?

Hi,

I didn't get a chance to look at the patches.  But if some of them fix
any issues, we can definitely backport them.

I'd suggest removing the RFC tag and reposting the patches.

Thanks
Numan

>
> Thanks.
>
> 0: 
> https://patchwork.ozlabs.org/project/ovn/cover/20221118162050.3019353-1-odiv...@gmail.com/
>
> Regards,
> Vladislav Odintsov
>
> > On 24 Nov 2022, at 20:57, Anton Vazhnetsov  wrote:
> >
> > Hi, Terry!
> >
> > In continuation to our yesterday’s conversation [0], we were able to 
> > reproduce the issue with KeyError. We found that the problem is not 
> > connected with ovsdb-server load but it appears when the ovsdb-server 
> > schema is converted online (it even doesn’t matter whether the real ovs 
> > schema is changed) while the active connection persists.
> > Please use next commands to reproduce it:
> >
> > # in terminal 1
> >
> > ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
> > ovsdb-server --remote punix://$(pwd)/ovs.sock $(pwd)/ovs.db -vconsole:dbg
> >
> >
> > # in terminal 2. run python shell
> > python3
> > # setup connection
> > import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
> > from ovsdbapp.backend.ovs_idl import connection
> >
> > remote = "unix:///"
> >
> > def get_idl():
> >"""Connection getter."""
> >
> >idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
> >  leader_only=False)
> >return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
> >
> > idl = get_idl()
> >
> >
> > # in terminal 1
> > ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
> >
> > # in terminal 2 python shell:
> > idl.ls_add("test").execute()
> >
> >
> > We get following traceback:
> >
> > Traceback (most recent call last):
> >  File 
> > "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", 
> > line 131, in run
> >txn.results.put(txn.do_commit())
> >  File 
> > "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
> > line 143, in do_commit
> >self.post_commit(txn)
> >  File 
> > "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
> > line 73, in post_commit
> >command.post_commit(txn)
> >  File 
> > "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
> > line 94, in post_commit
> >row = self.api.tables[self.table_name].rows[real_uuid]
> >  File "/usr/lib64/python3.6/collections/__init__.py", line 991, in 
> > __getitem__
> >raise KeyError(key)
> > KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331')
> >
> > In ovsdb-server debug logs we see that update2 or update3 messages are not 
> > sent from server in response to client’s transaction, just reply with 
> > result UUID:
> > 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
> > (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
> > 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
> > method="transact", 
> > params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
> >  id=5
> > 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
> > result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
> >
> > We checked same with ovn-nbctl running in daemon mode and found that the 
> > problem is not reproduced (ovsdb-server after database conversion sends out 
> > update3 message to ovn-nbctl daemon process in response to transaction, for 
> > example ovs-appctl -t  run ls-add test-ls):
> > 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
> > method="transact", 
> > params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
> >  run ls-add test-ls","op":"comment"}], id=5
> > 2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
> > method="update3", 
> > params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"test-ls"]
> > 2022-11-24T17:54:51Z|00625|jsonrpc|DBG|unix#7: send reply, 
> > result=[{"uuid":["uuid","0b147f2c-248d-496a-b718-a5328d3c2995"]},{}], id=5
> >
> > So it seems that the problem is in python-ovs, not in ovsdb-server.
> >
> > Do you have any ideas what can be a reason for such behaviour?
> >
> > 0: 
> > https://review.opendev.org/c/openstack/ovsdbapp/+/865454/comments/674c57e6_3849591b
> >
> > Regards, Anton.

Re: [ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-12-01 Thread Vladislav Odintsov
Hi,

is it possible to consider any of the problems described below and in [0],
so that the fixes can be included in the upcoming OVN/OVS releases?

Thanks.

0: 
https://patchwork.ozlabs.org/project/ovn/cover/20221118162050.3019353-1-odiv...@gmail.com/

Regards,
Vladislav Odintsov

> On 24 Nov 2022, at 20:57, Anton Vazhnetsov  wrote:
> 
> Hi, Terry!
> 
> In continuation to our yesterday’s conversation [0], we were able to 
> reproduce the issue with KeyError. We found that the problem is not connected 
> with ovsdb-server load but it appears when the ovsdb-server schema is 
> converted online (it even doesn’t matter whether the real ovs schema is 
> changed) while the active connection persists. 
> Please use next commands to reproduce it:
> 
> # in terminal 1
> 
> ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
> ovsdb-server --remote punix://$(pwd)/ovs.sock $(pwd)/ovs.db -vconsole:dbg
> 
> 
> # in terminal 2. run python shell
> python3
> # setup connection
> import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
> from ovsdbapp.backend.ovs_idl import connection
> 
> remote = "unix:///"
> 
> def get_idl():
>"""Connection getter."""
> 
>idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
>  leader_only=False)
>return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))
> 
> idl = get_idl()
> 
> 
> # in terminal 1
> ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema
> 
> # in terminal 2 python shell:
> idl.ls_add("test").execute()
> 
> 
> We get following traceback:
> 
> Traceback (most recent call last):
>  File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", 
> line 131, in run
>txn.results.put(txn.do_commit())
>  File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
> line 143, in do_commit
>self.post_commit(txn)
>  File 
> "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
> line 73, in post_commit
>command.post_commit(txn)
>  File "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
> line 94, in post_commit
>row = self.api.tables[self.table_name].rows[real_uuid]
>  File "/usr/lib64/python3.6/collections/__init__.py", line 991, in __getitem__
>raise KeyError(key)
> KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331') 
> 
> In ovsdb-server debug logs we see that update2 or update3 messages are not 
> sent from server in response to client’s transaction, just reply with result 
> UUID:
> 2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
> (///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
> 2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
> method="transact", 
> params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
>  id=5
> 2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
> result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5
> 
> We checked same with ovn-nbctl running in daemon mode and found that the 
> problem is not reproduced (ovsdb-server after database conversion sends out 
> update3 message to ovn-nbctl daemon process in response to transaction, for 
> example ovs-appctl -t  run ls-add test-ls):
> 2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
> method="transact", 
> params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
>  run ls-add test-ls","op":"comment"}], id=5
> 2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
> method="update3", 
> params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"test-ls"]
> 2022-11-24T17:54:51Z|00625|jsonrpc|DBG|unix#7: send reply, 
> result=[{"uuid":["uuid","0b147f2c-248d-496a-b718-a5328d3c2995"]},{}], id=5
> 
> So it seems that the problem is in python-ovs, not in ovsdb-server.
> 
> Do you have any ideas what can be a reason for such behaviour?
> 
> 0: 
> https://review.opendev.org/c/openstack/ovsdbapp/+/865454/comments/674c57e6_3849591b
> 
> Regards, Anton.


[ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-11-24 Thread Anton Vazhnetsov
Hi, Terry!

Following up on our conversation yesterday [0], we were able to reproduce the
KeyError issue. We found that the problem is not related to ovsdb-server load;
it appears when the ovsdb-server schema is converted online (it does not even
matter whether the schema actually changes) while an active connection
persists.
Please use the following commands to reproduce it:

# in terminal 1

ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
ovsdb-server --remote punix://$(pwd)/ovs.sock $(pwd)/ovs.db -vconsole:dbg


# in terminal 2. run python shell
python3
# setup connection
import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
from ovsdbapp.backend.ovs_idl import connection

remote = "unix:///"

def get_idl():
    """Connection getter."""

    idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
                                          leader_only=False)
    return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))

idl = get_idl()


# in terminal 1
ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema

# in terminal 2 python shell:
idl.ls_add("test").execute()


We get the following traceback:

Traceback (most recent call last):
  File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
131, in run
txn.results.put(txn.do_commit())
  File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
line 143, in do_commit
self.post_commit(txn)
  File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
line 73, in post_commit
command.post_commit(txn)
  File "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
line 94, in post_commit
row = self.api.tables[self.table_name].rows[real_uuid]
  File "/usr/lib64/python3.6/collections/__init__.py", line 991, in __getitem__
raise KeyError(key)
KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331') 

In the ovsdb-server debug logs we see that no update2 or update3 message is
sent from the server in response to the client's transaction, just a reply
with the result UUID:
2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
(///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
method="transact", 
params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
 id=5
2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5

We checked the same with ovn-nbctl running in daemon mode and found that the
problem is not reproduced there (after the database conversion, ovsdb-server
sends an update3 message to the ovn-nbctl daemon process in response to a
transaction, for example ovs-appctl -t  run ls-add test-ls; a sketch of one
way to run this check follows the log below):
2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
method="transact", 
params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
 run ls-add test-ls","op":"comment"}], id=5
2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
method="update3", 
params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"test-ls"]
2022-11-24T17:54:51Z|00625|jsonrpc|DBG|unix#7: send reply, 
result=[{"uuid":["uuid","0b147f2c-248d-496a-b718-a5328d3c2995"]},{}], id=5

So it seems that the problem is in python-ovs, not in ovsdb-server.

Do you have any idea what could be the reason for this behaviour?

0: 
https://review.opendev.org/c/openstack/ovsdbapp/+/865454/comments/674c57e6_3849591b

Regards, Anton.


[ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-11-24 Thread Anton Vazhnetsov
Hi, Terry!

Following up on our conversation yesterday [0], we were able to reproduce the
KeyError issue. We found that the problem is not related to ovsdb-server load;
it appears when the ovsdb-server schema is converted online (it does not even
matter whether the schema actually changes) while an active connection
persists.
Please use the following commands to reproduce it:

# in terminal 1

ovsdb-tool create ./ovs.db /usr/share/ovn/ovn-nb.ovsschema
ovsdb-server --remote punix://$(pwd)/ovs.sock  
$(pwd)/ovs.db -vconsole:dbg


# in terminal 2. run python shell
python3
# setup connection
import ovsdbapp.schema.ovn_northbound.impl_idl as nb_idl
from ovsdbapp.backend.ovs_idl import connection

remote = "unix:///"

def get_idl():
    """Connection getter."""

    idl = connection.OvsdbIdl.from_server(remote, "OVN_Northbound",
                                          leader_only=False)
    return nb_idl.OvnNbApiIdlImpl(connection.Connection(idl, 100))

idl = get_idl()


# in terminal 1
ovsdb-client convert unix:$(pwd)/ovs.sock /usr/share/ovn/ovn-nb.ovsschema

# in terminal 2 python shell:
idl.ls_add("test").execute()


We get the following traceback:

Traceback (most recent call last):
 File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
131, in run
   txn.results.put(txn.do_commit())
 File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
line 143, in do_commit
   self.post_commit(txn)
 File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
line 73, in post_commit
   command.post_commit(txn)
 File "/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/command.py", 
line 94, in post_commit
   row = self.api.tables[self.table_name].rows[real_uuid]
 File "/usr/lib64/python3.6/collections/__init__.py", line 991, in __getitem__
   raise KeyError(key)
KeyError: UUID('0256afa4-6dd0-4c2c-b6a2-686a360ab331') 

In the ovsdb-server debug logs we see that no update2 or update3 message is
sent from the server in response to the client's transaction, just a reply
with the result UUID:
2022-11-24T17:42:36Z|00306|poll_loop|DBG|wakeup due to [POLLIN] on fd 18 
(///root/ovsdb-problem/ovs.sock<->) at lib/stream-fd.c:157
2022-11-24T17:42:36Z|00307|jsonrpc|DBG|unix#5: received request, 
method="transact", 
params=["OVN_Northbound",{"uuid-name":"row03ef28d6_93f1_43bc_b07a_eae58d4bd1c5","table":"Logical_Switch","op":"insert","row":{"name”:"test"}}],
 id=5
2022-11-24T17:42:36Z|00308|jsonrpc|DBG|unix#5: send reply, 
result=[{"uuid":["uuid","4eb7c407-beec-46ca-b816-19f942e57721"]}], id=5

We checked the same with ovn-nbctl running in daemon mode and found that the
problem is not reproduced there (after the database conversion, ovsdb-server
sends an update3 message to the ovn-nbctl daemon process in response to a
transaction, for example ovs-appctl -t  run ls-add test-ls):
2022-11-24T17:54:51Z|00623|jsonrpc|DBG|unix#7: received request, 
method="transact", 
params=["OVN_Northbound",{"uuid-name":"rowcdb152ce_a9af_4761_b965_708ad300fcb7","table":"Logical_Switch","op":"insert","row":{"name":"test-ls"}},{"comment":"ovn-nbctl:
 run ls-add test-ls","op":"comment"}], id=5
2022-11-24T17:54:51Z|00624|jsonrpc|DBG|unix#7: send notification, 
method="update3", 
params=[["monid","OVN_Northbound"],"----",{"Logical_Switch":{"0b147f2c-248d-496a-b718-a5328d3c2995":{"insert":{"name":"test-ls"]
2022-11-24T17:54:51Z|00625|jsonrpc|DBG|unix#7: send reply, 
result=[{"uuid":["uuid","0b147f2c-248d-496a-b718-a5328d3c2995"]},{}], id=5

So it seems that the problem is in python-ovs, not in ovsdb-server.
We tested with ovsdb-server 2.17.3 against both python-ovs 2.13.5 and
python-ovs 2.17.3; the behaviour is the same.

Do you have any idea what could be the reason for this behaviour?

0: 
https://review.opendev.org/c/openstack/ovsdbapp/+/865454/comments/674c57e6_3849591b
 


Regards, Anton.


[ovs-dev] [OVN RFC 0/7] OVN IC bugfixes & proposals/questions

2022-11-18 Thread Vladislav Odintsov
Hi,

we've hit an issue where it was possible to create multiple identical
routes within an LR (same ip_prefix, nexthop, and route table).  Initially
this was done using the Python ovsdbapp library, but the problem itself
touches OVN and even OVS.  Sorry for the long read, but it seems that
there are a couple of bugs in different places, part of which this RFC
tries to cover.

How the issue was initially reproduced:

1. assume we have an OVN deployment with (at least) 2 availability zones
   (utilising the ovn-ic infrastructure).
2. create a transit switch in the IC NB
3. create an LR in each AZ and connect them to the transit switch
4. create one logical switch with a VIF port attached to the local OVS and
   connect this logical switch to the LR (e.g. 192.168.0.1/24)
5. in one AZ, install 2 identical static routes in the LR with a create
   command (invoke the next command twice; see also the note right after
   this list):

   ovn-nbctl --id=@id create logical-router-static-route ip_prefix=1.2.3.4/32 
nexthop=192.168.0.10 -- logical_router add lr1 static_routes @id
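
   A note on the command above: a plain "create" writes straight to the
   Logical_Router_Static_Route table and bypasses duplicate checking.  If I
   remember the ovn-nbctl behaviour correctly (worth re-verifying), the
   higher-level command

       ovn-nbctl lr-route-add lr1 1.2.3.4/32 192.168.0.10

   refuses a second identical route with a duplicate-route error unless
   flags such as --may-exist are given, which is why the raw create command
   is used here to reproduce the duplicates.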

From this point on, a couple of strange behaviours/bugs appear:

1. [possible problem] There is a duplicated route in the NB within a
   single LR.  The lflows are computed to form an ECMP group with two
   identical routes:

   table=11(lr_in_ip_routing   ), priority=97   , match=(reg7 == 0 && ip4.dst 
== 1.2.3.4/32), action=(ip.ttl--; flags.loopback = 1; reg8[0..15] = 1; 
reg8[16..31] = select(1, 2);
   table=12(lr_in_ip_routing_ecmp), priority=100  , match=(reg8[0..15] == 1 && 
reg8[16..31] == 1), action=(reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 
d0:fe:00:00:00:04; outport = "subnet-45661000"; next;)
   table=12(lr_in_ip_routing_ecmp), priority=100  , match=(reg8[0..15] == 2 && 
reg8[16..31] == 1), action=(reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 
d0:fe:00:00:00:04; outport = "subnet-45661000"; next;)

   Maybe it's better to have some kind of handling for such routes?
   An ovsdb index, or some logic in ovn-northd?

2. [bug] There is a duplicated route advertisement in the
   OVN_IC_Southbound:Route table.  IMO, this should be fixed by adding a
   new index to this table over availability_zone, transit_switch,
   ip_prefix, nexthop and route_table, and adding logic to check whether
   the route was already advertised (covered in Patch #7).

3. [bug] The same route is constantly re-learned.  Each ovn-ic iteration
   in the opposite availability zone adds one more copy of the same route,
   creating thousands of identical routes each second.  This bug is covered
   by Patch #7.

4. [possible problem] After multiple routes are learned into the NB in the
   opposite availability zone, ovn-northd generates ECMP lflows.  Same as
   in #1: one record in lr_in_ip_routing with select()
   and thousands of identical records in lr_in_ip_routing_ecmp.  OVN allows
   installing up to UINT_MAX routes within an ECMP group.

5. [OVS bug?] I'd like someone from the OVS team to look at this.
   ovn-controller installed a very long OpenFlow group rule
   (group #3):

   # ovn-appctl -t ovn-controller group-table-list | grep :3 | wc -c
   797824

   When I try to dump groups with ovs-ofctl dump-groups br-int, I get
   the following error in the console:

   # ovs-ofctl dump-groups br-int
   ovs-ofctl: OpenFlow packet receive failed (End of file)

   In the ovs-vswitchd logs I see the following error, and after this
   line OVS restarts:

   2022-11-16T15:21:29.898Z|00145|util|EMER|lib/ofp-msgs.c:995: assertion 
start_ofs <= UINT16_MAX failed in ofpmp_postappend()

   If I issue the command again, sometimes it prints the same error, but
   sometimes this one (I had another OVN LB on the dev machine, so there
   are extra groups):

   # ovs-ofctl dump-groups br-int
   NXST_GROUP_DESC reply (xid=0x2): flags=[more]
   
group_id=3,type=select,selection_method=dp_hash,bucket=bucket_id:0,weight:100,actions=ct(commit,table=20,zone=NXM_NX_REG13[0..15],nat(dst=...),exec(load:0x1->NXM_NX_CT_LABEL[1]))
   
group_id=1,type=select,selection_method=dp_hash,bucket=bucket_id:0,weight:100,actions=ct(commit,table=20,zone=NXM_NX_REG13[0..15],nat(dst=...),exec(load:0x1->NXM_NX_CT_LABEL[1]))
   2022-11-17T17:53:41Z|1|ofp_group|WARN|OpenFlow message bucket length 56 
exceeds remaining buckets data size 40
   NXST_GROUP_DESC reply (xid=0x2): ***decode error: OFPGMFC_BAD_BUCKET***
     01 11 a9 58 00 00 00 02-ff ff 00 00 00 00 23 20 |...X..# |
   0010  00 00 00 08 00 00 00 00-a9 40 01 00 00 00 00 02 |.@..|
   0020  a9 08 00 00 00 00 00 00-00 38 00 28 00 00 00 00 |.8.(|
   0030  ff ff 00 18 00 00 23 20-00 07 0c 0f 80 01 08 08 |..# |
   0040  00 00 00 00 00 00 00 01-ff ff 00 10 00 00 23 20 |..# |
   0050  00 0e ff f8 14 00 00 00-00 00 00 08 00 64 00 00 |.d..|
   0060  00 38 00 28 00 00 00 01-ff ff 00 18 00 00 23 20 |.8.(..# |
   0070  00 07 0c 0f 80 01 08 08-00 00 00 00 00 00 00 02 ||
   0080  ff ff 00 10 00 00 23 20-00 0e ff f8 14 00 00 00 |..# |
   0090  00 00 00 08 00 64 00 00-00 38 00 28 00 00 00 02 |.d...8.(|