[vpp-dev] vnet_rename_interface()
Hi all,

Source file src/vnet/interface.c has a function vnet_rename_interface(), which currently appears to be called only by the lisp plugin. It would be handy to be able to rename a DPDK interface without having to change startup.conf and restart VPP. I am wondering whether I could do that by adding a sw_interface_rename API and calling vnet_rename_interface() in the handler function. Before I spend much time working on that, I want to find out if there are any known issues which would prevent that from working, or if anyone has any objections to doing it.

Thanks!
-Matt

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19329): https://lists.fd.io/g/vpp-dev/message/19329
Mute This Topic: https://lists.fd.io/mt/82588728/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
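[Editor's note: for reference, here is a hypothetical sketch of what such an API message might look like, loosely following the conventions used in src/vnet/interface.api. The message name, field names, and the 64-byte name limit are assumptions for illustration, not an existing VPP definition.]

```
/* Hypothetical message definition -- not part of VPP today.
 * Field names and the 64-byte name limit are illustrative only. */
autoreply define sw_interface_rename
{
  u32 client_index;
  u32 context;
  vl_api_interface_index_t sw_if_index;
  string new_name[64];
};
```

The handler would presumably resolve the interface from sw_if_index and call vnet_rename_interface(), returning any error via the autoreply retval.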
Re: [vpp-dev] Naginator enabled to rebuild jobs with git fetch errors
As suspected, here is a case [3] where the timing was unlucky, but in this case the same job voted twice so it was not a problem.

VPP Committers, please carefully review patches that have "Verified -1" followed by "Verified +1" without a patch upload or recheck. This is a case where the "Verified -1" was due to an "Error cloning remote repo" failure with a job status of "NOTBUILT" and all other jobs passed. The retry of that job changed the vote to "Verified +1". For this patch there was no problem because it was the same job voting twice, but it could have been a different job that failed and was overridden.

I will remove Naginator as soon as the connection reset issue has been resolved.

Thanks,
-daw-

[3] https://gerrit.fd.io/r/c/vpp/+/32167

On 5/4/2021 2:12 PM, Dave Wallace via lists.fd.io wrote:

Here is a case where the process worked as desired. The job which failed [0] was retried [1] after 5 seconds and passed upon retry. It did not disrupt the voting for the patch [2] :) Hopefully this will always be the case. The job failure did not show up in the gerrit log, which I think is different from past behavior. However, based on previous Naginator-induced voting irregularities, I suspect that it may cause voting anomalies if the timing is unlucky. I will continue to monitor for connection resets and voting anomalies, but so far so good.

Thanks,
-daw-

[0] https://jenkins.fd.io/job/vpp-verify-master-centos8-x86_64/3335/
[1] https://jenkins.fd.io/job/vpp-verify-master-centos8-x86_64/3336/
[2] https://gerrit.fd.io/r/c/vpp/+/32206

On 5/4/2021 1:26 PM, Dave Wallace via lists.fd.io wrote:

Folks,

As a temporary measure to help alleviate the burden of rechecking gerrit changes and wasting cycles re-running jobs which have already passed, I have deployed the Naginator Jenkins plugin configuration to retry VPP jobs which fail with the error signature "Error cloning remote repo" [0].
You may recall there is a potential for jobs which are restarted after a failed job to override a -1 vote by a job which failed prior to the restart. Please look for this when reviewing the status of gerrit changes prior to merge, as this could break the CI pipeline. Vexxhost is monitoring the network segments using tcpdump to determine what device is causing the TCP connection resets, and once the issue is resolved I will remove the Naginator configuration from the VPP job configuration. I will continue to closely monitor job status for connection resets and also look for any Naginator-induced issues with job retries.

Thanks,
-daw-

[0] https://gerrit.fd.io/r/c/ci-management/+/32197
Re: [vpp-dev] Naginator enabled to rebuild jobs with git fetch errors
Here is a case where the process worked as desired. The job which failed [0] was retried [1] after 5 seconds and passed upon retry. It did not disrupt the voting for the patch [2] :) Hopefully this will always be the case.

The job failure did not show up in the gerrit log, which I think is different from past behavior. However, based on previous Naginator-induced voting irregularities, I suspect that it may cause voting anomalies if the timing is unlucky. I will continue to monitor for connection resets and voting anomalies, but so far so good.

Thanks,
-daw-

[0] https://jenkins.fd.io/job/vpp-verify-master-centos8-x86_64/3335/
[1] https://jenkins.fd.io/job/vpp-verify-master-centos8-x86_64/3336/
[2] https://gerrit.fd.io/r/c/vpp/+/32206

On 5/4/2021 1:26 PM, Dave Wallace via lists.fd.io wrote:

Folks,

As a temporary measure to help alleviate the burden of rechecking gerrit changes and wasting cycles re-running jobs which have already passed, I have deployed the Naginator Jenkins plugin configuration to retry VPP jobs which fail with the error signature "Error cloning remote repo" [0]. You may recall there is a potential for jobs which are restarted after a failed job to override a -1 vote by a job which failed prior to the restart. Please look for this when reviewing the status of gerrit changes prior to merge, as this could break the CI pipeline. Vexxhost is monitoring the network segments using tcpdump to determine what device is causing the TCP connection resets, and once the issue is resolved I will remove the Naginator configuration from the VPP job configuration. I will continue to closely monitor job status for connection resets and also look for any Naginator-induced issues with job retries.

Thanks,
-daw-

[0] https://gerrit.fd.io/r/c/ci-management/+/32197
[vpp-dev] Naginator enabled to rebuild jobs with git fetch errors
Folks,

As a temporary measure to help alleviate the burden of rechecking gerrit changes and wasting cycles re-running jobs which have already passed, I have deployed the Naginator Jenkins plugin configuration to retry VPP jobs which fail with the error signature "Error cloning remote repo" [0].

You may recall there is a potential for jobs which are restarted after a failed job to override a -1 vote by a job which failed prior to the restart. Please look for this when reviewing the status of gerrit changes prior to merge, as this could break the CI pipeline. Vexxhost is monitoring the network segments using tcpdump to determine what device is causing the TCP connection resets, and once the issue is resolved I will remove the Naginator configuration from the VPP job configuration. I will continue to closely monitor job status for connection resets and also look for any Naginator-induced issues with job retries.

Thanks,
-daw-

[0] https://gerrit.fd.io/r/c/ci-management/+/32197
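[Editor's note: the shape of such a retry configuration, expressed in Jenkins Job Builder terms, would be roughly the sketch below. The option names follow the JJB Naginator publisher, but every key and value here is an assumption; the authoritative settings are in the ci-management change [0]. The 5-second delay matches the retry interval reported later in this thread.]

```yaml
# Sketch of a JJB publisher enabling Naginator retries for one
# failure signature. Illustrative only -- see the ci-management
# change referenced above for the real configuration.
publishers:
  - naginator:
      # Only retry builds whose console log matches this signature
      regular-expression: 'Error cloning remote repo'
      fixed-delay: true
      delay: 5                # seconds before the retry
      max-failed-builds: 1    # retry at most once
```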
Re: [vpp-dev] sw_interface_dump currently can't dump interface which sw_if_index == 0 #vapi
> I would argue that this case can be classified as "bugfix"

Yes, [0] was an ugly workaround; we can call it a bug. But the important thing is that there are users depending on the buggy behavior. Also, the "buggy" behavior was explained in the message documentation, so if we "fix" the bug, we will need to make an incompatible edit to the documentation.

Here is the longer version of the story. Once upon a time, VPP had no strict process for handling API changes, and the sw_interface_dump message had no sw_if_index param. Paul added [4] that param with the usual ~0 standing for "all". But one of the "downstream" users was VAT, and Paul had not added anything related to the new param before the message is sent [5]. In those times, we had no support for non-zero default values, so VPP saw zero as the value for sw_if_index and returned just the details for the local interface, no other. CSIT was using the VAT call, so suddenly all tests started failing due to missing details for other interfaces. There was a discussion on vpp-dev, in 4 threads: [6] [7] [8] [9]. I was not able to figure out a quick enough fix for VAT, but my workaround for the API has been merged. Of course, after [2] nobody recalled that the workaround is no longer needed.

> the reason for it is to minimize the amount of forced work downstream,
> caused by the API changes, and minimize the element of surprise.

Ok, so there are actually three solutions (in my decreasing preference). One is to add sw_interface_dump_v2 without deprecating sw_interface_dump. That way, old users are not surprised and no work is required, but new users may be confused about why there are two very similar messages (it counts as a kind of "read and think to realize no real coding is needed" work). An example of multiple similar messages is how we added create_loopback_instance without removing create_loopback.
The second solution is to add _v2 but also deprecate the old message, to keep the API simpler (the release after next) at the cost of bothering those old users who relied on the "buggy" behavior. The third solution is to keep the message name but change the description and behavior (and bump the semver), relying on old users (starting with VAT) to update their usage (after recovering from the initial surprise).

> and giving the folks a few weeks (namely, until after RC1), to adapt.

That would be fine for people consuming master HEAD regularly. I imagine there are people who upgrade VPP once a release (relying on API stability) without reading the vpp-dev mailing list much. Those are going to be surprised by the third solution.

Vratko.

[4] https://gerrit.fd.io/r/c/vpp/+/18693
[5] https://github.com/FDio/vpp/blob/6407ba56a392f37322001d0ffdca002223b095c0/src/vat/api_format.c#L5978
[6] https://lists.fd.io/g/vpp-dev/topic/30423722#12521
[7] https://lists.fd.io/g/vpp-dev/topic/30426855#12548
[8] https://lists.fd.io/g/vpp-dev/topic/31234917#12817
[9] https://lists.fd.io/g/vpp-dev/topic/31307751#12840

-Original Message-
From: Andrew Yourtchenko
Sent: Monday, 2021-May-03 11:41
To: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco)
Cc: jiangxiaom...@outlook.com; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] sw_interface_dump currently can't dump interface which sw_if_index == 0 #vapi

I would argue that this case can be classified as "bugfix" - there was no good reason to use 0 as a wildcard value in the first place, since it is a valid sw_if_index, and there is a perfectly good "wildcard value" of ~0 that already works, right? So I would say this discussion should serve as the announcement (or there can be another separate thread with an explicit subject for it), and post-branch of stable/2106 we can apply the fix to consider 0 to be a valid interface ID. And of course bump the semver for those who look at it.
It's always useful to keep in mind "why" the process is in place when evaluating how to apply it: the reason for it is to minimize the amount of forced work downstream caused by API changes, and to minimize the element of surprise. In this case, having the "buggy" clients use ~0 in place of 0 (if they ever used that) is strictly less work than having *all* clients switch to using a new message name, even if they used ~0 to begin with. We can take care of minimizing the "element of surprise" by this discussion, or maybe a separate mail - and giving the folks a few weeks (namely, until after RC1) to adapt. This way the spirit of why the process is there in the first place will be fulfilled, without incurring unnecessary effort for everyone. Does this make sense?

--a

On 5/3/21, Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via lists.fd.io wrote:
>> Is there any plan for support selecting only index==0 ?
>
> Good news first.
> I added the TODO here [0], but since then
> CSIT stopped using the VAT command in [1],
> and other uses based on PAPI should be ready since [2].
>
> The bad news is VPP now has a more strict process [3]
> regarding
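[Editor's note: the semantics under discussion can be modeled in a few lines. This is an illustrative sketch, not VPP code: `dump_with_workaround` models the current behavior where both 0 and ~0 act as the wildcard (so local0 alone can never be selected), and `dump_fixed` models the proposed bugfix where only ~0 is the wildcard.]

```python
# Illustrative model of sw_interface_dump's sw_if_index filter.
# ~0 in a u32 field is 0xFFFFFFFF.
ALL = 0xFFFFFFFF

def dump_with_workaround(interfaces, sw_if_index):
    """Current behavior: both 0 and ~0 mean 'dump everything',
    so interface 0 (local0) alone can never be selected."""
    if sw_if_index in (0, ALL):
        return list(interfaces)
    return [i for i in interfaces if i == sw_if_index]

def dump_fixed(interfaces, sw_if_index):
    """Proposed behavior: only ~0 is the wildcard; 0 is a valid
    interface index and selects just that interface."""
    if sw_if_index == ALL:
        return list(interfaces)
    return [i for i in interfaces if i == sw_if_index]

ifaces = [0, 1, 2]
print(dump_with_workaround(ifaces, 0))  # [0, 1, 2] -- cannot ask for local0 only
print(dump_fixed(ifaces, 0))            # [0]
print(dump_fixed(ifaces, ALL))          # [0, 1, 2]
```

Clients that relied on 0 meaning "all" would keep working after the fix simply by sending ~0 instead, which is the point Andrew makes above.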
Re: [vpp-dev] vcl_test_client is failing with Unsupported application config (-108) #vppcom
Hi,

Is there anything configured on the vpp side for the session layer? Is this vpp 21.06rc0 or something older? The error number seems to suggest an older release. One option would be to just comment out use-mq-eventfd and see if that fixes the issue. Message queue eventfds should work with the binary api, but the rest of the configs on the vpp and vcl side must be compatible with it.

Regards,
Florin

> On May 4, 2021, at 4:23 AM, sastry.si...@gmail.com wrote:
>
> Hi,
> I am trying to use vcl_test_client with the vcl config below.
>
> While trying to run, I see the following error:
>
> vppcom_connect_to_vpp:502: vcl<1876:0>: app (vcl_test_client) is connected to VPP!
> vppcom_app_create:1203: vcl<1876:0>: sending session enable
> vppcom_app_create:1211: vcl<1876:0>: sending app attach
> vl_api_app_attach_reply_t_handler:82: vcl<0:-1>: ERROR attach failed: Unsupported application config (-108)
>
> Could you please let me know why this is unsupported at VPP?
>
> vcl {
>   rx-fifo-size 40
>   tx-fifo-size 40
>   app-scope-global
>   api-socket-name /run/vpp/api.sock
>   #api-socket-name /run/vpp/cli.sock
>   use-mq-eventfd
> }
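[Editor's note: concretely, if commenting out use-mq-eventfd is the route taken, the vcl section from the original report would become the following. The fifo sizes are copied verbatim from the report; whether they are appropriate values is a separate question.]

```
vcl {
  rx-fifo-size 40
  tx-fifo-size 40
  app-scope-global
  api-socket-name /run/vpp/api.sock
  # use-mq-eventfd  <- disabled per the suggestion above
}
```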
[vpp-dev] vcl_test_client is failing with Unsupported application config (-108) #vppcom
Hi,

I am trying to use vcl_test_client with the vcl config below.

While trying to run, I see the following error:

vppcom_connect_to_vpp:502: vcl<1876:0>: app (vcl_test_client) is connected to VPP!
vppcom_app_create:1203: vcl<1876:0>: sending session enable
vppcom_app_create:1211: vcl<1876:0>: sending app attach
vl_api_app_attach_reply_t_handler:82: vcl<0:-1>: ERROR attach failed: Unsupported application config (-108)

Could you please let me know why this is unsupported at VPP?

vcl {
  rx-fifo-size 40
  tx-fifo-size 40
  app-scope-global
  api-socket-name /run/vpp/api.sock
  #api-socket-name /run/vpp/cli.sock
  use-mq-eventfd
}