re.
>
> Really appreciate your help and thanks a lot.
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Wed, Mar 22, 2023 at 11:54:
>> Hi Zhang,
>>
>> Awesome! Thanks!
>>
>> Regards,
>> Florin
>>
>>> On Mar 21, 2023, at
I will provide feedback later.
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Wed, Mar 22, 2023 at 02:12:
>> Hi,
>>
>> Okay, resetting of half-opens is definitely not supported. I updated the patch
>> to just clean them up on forced reset, without sending a reset.
Hi,
The problem seems to be that you’re using a vmxnet3 interface, so I suspect
this might be a vm configuration issue. Your current config should work but
could end up being inefficient.
With respect to your problem, I just built redis and ran redis-server and cli
over LDP. Everything
p usage.
>>> 3. register a disconnect callback, which basically does the same as the
>>> reset callback.
>>> 4. register a cleanup callback and an accept callback, which basically
>>> make the session layer happy without any actual work to do.
>>>
>>> T
tly warning 'session %u hash delete rv -3' in
> session_delete in my environment, hope this helps to investigate.
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Mon, Mar 20, 2023 at 23:29:
>> Hi,
>>
>> Understood and yes, connect will synchronously fail
Hi,
First of all, could you try this [1] with latest vpp? It’s really interesting
that iperf does not exhibit this issue.
Regarding your config, some observations:
- I see you have configured 4 workers. I would then recommend using 4 rx-queues
and 5 tx-queues (main can send packets), as
ake app
> complicated from a TCP programming perspective.
>
> For your patch, I think it should work because I can't delete the half-open
> session immediately when there is a worker configured, so the half-open will
> be removed from the bihash when the syn retransmission times out. I have merged the
using fixed ports, you’ll have to wait for the half-open cleanup
notification.
>
> Should I also register a half-open callback, or is there some other reason
> that leads to this failure?
>
[fc] Yes, see above.
Regards,
Florin
[1] https://gerrit.fd.io/r/c/vpp/+/38526
>
> Flor
uses sockets, so to reproduce we’ll
need to replicate your testbed.
Regards,
Florin
> On Mar 19, 2023, at 2:58 PM, Florin Coras via lists.fd.io
> wrote:
>
> Hi,
>
> That may very well be a problem introduced by the move of connects to first
> worker. Unfortunately,
Hi,
When you abort the connection, is it fully established or half-open? Half-opens
are cleaned up by the owner thread after a timeout, but the 5-tuple should be
assigned to the fully established session by that point.
tcp_half_open_connection_cleanup does not clean up the bihash; instead
Hi,
That may very well be a problem introduced by the move of connects to first
worker. Unfortunately, we don’t have tests for all of those corner cases yet.
However, to replicate this issue, could you provide a bit more details about
your setup and the exact backtrace? It looks like you’re
Great! Thanks for confirming!
Regards,
Florin
> On Mar 16, 2023, at 8:29 PM, Zhang Dongya wrote:
>
> yes, this is exactly what I want to do; this patch works as expected, thanks a
> lot.
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Wed, Mar 15, 2023 at 01:22:
>
>> lep->proto = proto;
>> lep->refcnt = 1;
>>
>> transport_endpoint_table_add (&tm->local_endpoints_table, proto, &lep->ep,
>> lep - tm->local_endpoints);
>>
>> return 0;
>> }
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Tue, Mar 14, 2023 at 1
Hi,
Could you try this out [1]? I’ve hit this issue myself today but with udp
sessions. Unfortunately, as you’ve correctly pointed out, we were forcing a
cleanup only on the non-fixed local port branch.
Regards,
Florin
[1] https://gerrit.fd.io/r/c/vpp/+/38473
> On Mar 13, 2023, at 7:35
n
>
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Florin Coras
> Sent: Tuesday, March 7, 2023 2:49 PM
> To: vpp-dev <vpp-dev@lists.fd.io>
> Cc: Olivia Dunham <theoliviadun...@gmail.com>
Hi Kevin,
That’s a really old version of vpp. TLS has seen several improvements since
then in areas including scheduling after incomplete writes. If you get a chance
to test vpp latest or a more recent release, do let us know if the issue still
persists.
Regards,
Florin
> On Mar 6, 2023,
Hi Vivek, Aslam,
That’s an interesting use case. We typically recommend using VCL natively and,
only if that’s not possible, LDP, which implies VLS locking. We haven’t had
many VLS native integration efforts.
Coming back to your problem, any particular reason why you’re not registering
all
Hi,
You can disable a node using vlib_node_set_state. There’s no api to unregister
a node.
Regards,
Florin
> On Feb 6, 2023, at 12:00 PM, amine belroul wrote:
>
> Hello,
> How can I delete a process node from the vpp runtime?
> Right now I can only make it done, but not deleted.
>
>
> Thank
Hi,
Could you provide a simplified description of your topology and a bare bones
nginx config? We could try to repro this in the hs-test infra we’ve been
recently developing. See here [1].
Also, could you try out this patch [2] I’ve been toying with recently to
see if it improves
Hi,
I’m guessing you’re running out of ports on connections from nginx/vpp to the
actual server, since you’re using fixed ips and a fixed destination port? Check
how many sessions you have opened with “show session”.
Out of curiosity, what are you using mirroring for? Testing?
Regards,
Hi Alexander,
Quick reply.
Nice bug report! Agreed that it looks like vl_api_clnt_process sleeps, probably
because it hit a queue size of 0, but memclnt_queue_callback or the timeout,
albeit 20s is a lot, should wake it up.
So, given that QUEUE_SIGNAL_EVENT is set, the only thing that
Hi Chinmaya,
Given that you’re getting packets in the listener’s rx fifo, I suspect the
request to make it a connected listener didn’t work. We’ve had a number of
changes in vcl/session layer so hard to say what exactly might be affecting
your app.
Just did an iperf udp test on master and
Hi Chinmaya,
Given that data is written to the listener’s fifo, I’ll guess vpp thinks it’s
using non-connected udp sessions. Since you expect accepts to be coming,
probably you’re missing a vppcom_session_attr VPPCOM_ATTR_SET_CONNECTED on the
listener. See for instance here [1]. It could
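A sketch of the relevant call; listen_fd is assumed to be the app's vppcom
handle for the udp listener:

/* mark the udp listener as connected so sessions are accepted */
vppcom_session_attr (listen_fd, VPPCOM_ATTR_SET_CONNECTED, 0, 0);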
Hi Chinmaya,
Are you by chance using 23.02rc0, as opposed to 22.10, in combination with
non-connected udp listeners? If yes, could you try this fix [1] or vpp latest
to check if the issue still persists?
Regards,
Florin
[1] https://gerrit.fd.io/r/c/vpp/+/37842
> On Jan 18, 2023, at 12:59
Hi folks,
It’s been many months and patches at this point but once [1] is merged, session
layer will accept connects from both main, with worker barrier, and first
worker. Preference is now for the latter, especially if CPS performance is
critical.
There should be no need to change existing
Hi Federico,
Apologies, I missed your first email.
More Inline.
Regards,
Florin
> On Nov 16, 2022, at 7:53 AM, Federico Strati via lists.fd.io
> wrote:
>
> Hello Ben, All
>
> first of all, thanks for the prompt reply.
>
> Now let me clarify the questions, which confused you because of
Hi Anthony,
Great! Will add the gethostbyname issue to my never shrinking todo list but,
should you look into it, let me know if you manage to figure out how ldp
interacts with gethostbyname.
Regards,
Florin
> On Nov 3, 2022, at 4:54 AM, Anthony Fee wrote:
>
> Hi Florin,
>
> Thanks for
vpp unit test for session feature and no regression found
> yet.
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Tue, Nov 1, 2022 at 23:42:
> Hi,
>
> Will you be pushing the fix or should I do it?
>
> Regards,
> Florin
>
>> On Oct 25, 2022, at 9:26 AM, Flori
Hi Anthony,
Assuming the host os has network connectivity beyond vpp for dns resolution,
this is surprising. Would be good to understand if anything actually makes its
way into ldp during a gethostbyname() call.
Native integration with vcl, as opposed to ldp, should solve the problem but
Hi Anthony,
LDP doesn’t currently intercept gethostbyname as integration with vpp's
internal dns resolver is not yet done for vcl. Should you or anybody else be
interested in implementing that, I’d be happy to offer support.
Regards,
Florin
> On Oct 28, 2022, at 2:29 AM, Anthony Fee wrote:
ex is 64; if we do not
> increase it by 1, it will only make one 64B vec for the bitmap, which may not
> hold the session index.
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Tue, Oct 25, 2022 at 01:14:
> Hi,
>
> Could you replace s->session_index by s->sess
Hi,
Could you replace s->session_index by s->session_index ? : 1 in the patch?
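(In case the ?: form is unfamiliar: it is the GNU C shorthand where the middle
operand defaults to the first, so the suggestion expands to the sketch below;
the intent assumed here is to avoid storing a session index of 0, which the
bitmap code cannot distinguish from unset.)

/* s->session_index ? : 1 is equivalent to: */
u32 val = s->session_index ? s->session_index : 1;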
Regards,
Florin
> On Oct 24, 2022, at 12:23 AM, Zhang Dongya wrote:
>
> Hi list,
>
> Recently I have been testing my TCP application in a plugin; what I did is to
> initiate a TCP client in my plugin, however, when I
y account has been registered in
> LF for 5 years, however, when I log in to the gerrit web ui, it still reports a
> Forbidden error; my account username is ZhangDongya.
>
> Ok, I will try to use git command line to give a try.
>
> Florin Coras <fcoras.li...@gmail.com>
error, do you know where I can get help to
> solve this? Or does gerrit need some approval to get involved?
>
> It's ok if you want to get it fixed asap.
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Wed, Oct 12, 2022 at 23:44:
> Hi,
>
> It looks like a bug. We
Hi,
It looks like a bug. We should make sure the fifo exists, which is typically
the case unless the transport is stuck in half-open. Note that tcp does time out
and clean up those stuck half-open sessions, but we should allow the app to
clean up as well.
Let me know if you plan to push a patch
e socket is set to non-blocking mode, the CPU
> utilization rate is always 100%. The resource consumption is very high with
> multithreading. I can only reduce the CPU utilization rate by using methods
> such as select.
>
> Why does this mode fail in blocking mode? Do you hav
>
> Vratko.
>
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Florin Coras
> Sent: Monday, 2022-September-12 23:11
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] r
Hi Vratko,
Do we really need to block the binary api waiting for a reply from another vpp
process just to set a mac address?
If setting up the mac (or similar) cannot be done synchronously, probably api
handlers should hand over all those requests to another vpp process,
Hi,
Just tested on master and this seems to work fine after a few app fixes. Having
said that, I would recommend you build vcl native apps and register multiple
workers for better performance.
Comments:
- make sure sockets are not blocking, otherwise there’s a good chance that only
one of the
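A sketch of the usual way to make a socket non-blocking (ldp intercepts fcntl);
sock_fd is a placeholder:

#include <fcntl.h>

/* set O_NONBLOCK so reads/writes don't block the app */
int flags = fcntl (sock_fd, F_GETFL, 0);
if (flags >= 0)
  fcntl (sock_fd, F_SETFL, flags | O_NONBLOCK);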
Hi wanghe,
Hard to say exactly what’s happening there but, at a high level, vls tries to
clone a vcl session from one worker to another and the rpc does not return
after 3s.
Are you running this at scale or does this happen with few sessions?
Also, make sure that a session does not
Hi,
Probably libevent statically links libc. If possible, try to recompile libevent
and make it dynamic.
Regards,
Florin
> On Aug 20, 2022, at 8:06 PM, weizhen9...@163.com wrote:
>
> Hi,
> Now I implement a mysql proxy by using libevent.so. And I want to implement
> the mysql proxy by vpp
Hi Anthony,
Sounds great! Let me know what you find.
As for wrk, maybe give it a try. Typically, we’ve focused ldp testing on server
side, e.g., nginx/envoy, since that was what folks were interested in. We
should probably take a closer look at client side as well.
Regards,
Florin
> On
Hi Anthony,
I for one have not tried netperf, so not sure why it’s not working. Assuming it
is directly using epoll/select, i.e., no libevent/ev, it should work unless it
uses a weird threading model. So, it might be worth investigating why it’s
crashing.
Do you need both server and client
Hi Vijay,
That looks like an accept that either 1) can’t be propagated over the shared
memory message queue to the app because the mq is congested, or 2) is rejected
by a builtin app.
Regards,
Florin
> On Jul 26, 2022, at 7:13 PM, Vijay Kumar wrote:
>
> Hi experts,
>
> We are seeing the below
Hi Krisztián,
The first option installs vpp’s debian packages, it will not install vpp in
your home folder. Once you’re done installing the debs, check if vpp is running
with something like ps, e.g., ps -ef | grep vpp. Make sure to install
vpp-plugin-core, to get the plugins.
Regarding the
Hi Wanghe,
The only api bindings supported today are c, python and golang. Maybe somebody
familiar with the jvpp code can help you out, but otherwise I’d recommend
switching if possible.
Regards,
Florin
> On Jun 3, 2022, at 7:55 AM, NUAA无痕 wrote:
>
> Hi, florin
>
> About this question, i
can you analyze this problem? thank you very much!
>
> Best regards
> wanghe
>
>
>
>
> Florin Coras <fcoras.li...@gmail.com>
> wrote on Sat, May 28, 2022 at 01:02:
> Hi wanghe,
>
> Unfortunately, jvpp is no longer supported so probably there’s no recent fix
> for t
Hi wanghe,
Unfortunately, jvpp is no longer supported so probably there’s no recent fix
for the issue you’re hitting. By the looks of it, an api msg handler is trying
to enqueue something (probably a reply towards the client) and ends up stuck
because the svm queue is full and a condvar
Hi,
Inline.
> On May 25, 2022, at 2:15 AM, NUAA无痕 wrote:
>
> hi, Florin Coras
> I may not have described it clearly
>
> I'm using vpp version 21.01
Could you also try with latest vpp? We’re about to release 22.06
>
> 1. sendto function problem
> I use LDP for c socke
://git.fd.io/vpp/tree/src/vcl/vppcom.c#n2883
> On May 25, 2022, at 1:43 AM, NUAA无痕 wrote:
>
>
>
> -- Forwarded message -
> From: NUAA无痕 <nuaawan...@gmail.com>
> Date: Wed, May 25, 2022 at 16:37
> Subject: Re: [vpp-dev] hoststack-java netty segmentfault
>
Hi,
> On May 20, 2022, at 2:31 AM, NUAA无痕 wrote:
>
> hi, vpp expert
> now I'm using the vpp hoststack for udp, and I've met some problems
>
> 1. the udp socket must use the connect function; if the user calls sendto, it
> will cause an "ip address not connected" error
What version of vpp are you using? Although we prefer connected
Hi,
Are you trying to use LDP + java? I suspect that has never been tested and I’d
be surprised if it worked.
Regards,
Florin
> On May 20, 2022, at 2:18 AM, NUAA无痕 wrote:
>
> hi, vpp expert
> I'm using the vpp hoststack for java netty
> but it segfaults; the reason is that epoll uses an svm_fifo_t that is null
Hi Yacan,
Currently rpcs from first worker to main are done through session layer and are
processed by main in batches. Session queue node runs on main in interrupt mode
so first worker will set an interrupt when the list of pending connects goes
non-empty and main will switch to polling in
Next step then. What’s segment-size and add-segment-size in vcl.conf? Could you
set them to something large like 40? Also event-queue-size 100,
just to make sure mq and fifo segments are not a limiting factor. In vpp under
session stanza, set event-queue-length 20.
Try also to
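A sketch of where those knobs live; the values below are illustrative only, not
the ones suggested above:

# vcl.conf
vcl {
  segment-size 4000000000
  add-segment-size 4000000000
  event-queue-size 100000
}

# vpp startup.conf
session {
  event-queue-length 200000
}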
As mentioned previously, is the upstream server handling the load? Do you see
drops between vpp and the upstream server?
Regards,
Florin
> On May 4, 2022, at 9:10 AM, weizhen9...@163.com wrote:
>
> Hi,
> According to your suggestion, I configured the src-address.
>
>
> But the performance is
What’s the result prior to multiple addresses? Also, can you give it the whole
/24? No need to configure the ips, just tcp src-address
192.168.6.6-192.168.6.250
Forgot to ask before but is the server that’s being proxied for handling the
load? It will also need to accept a lot of connections.
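For reference, the CLI form of that suggestion, runnable via vppctl (the range
is the one discussed above):

vppctl tcp src-address 192.168.6.6-192.168.6.250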
Hi,
That shouldn’t be the issue. Half-opens are on main because connection
establishment needs locks before it sends out a syn packet. Handshakes are not
completed on main but on workers. VPP with one worker + main should be able to
handle 100k+ CPS with warmed up pools.
Long term we’ll
Hi,
Those are half-open connects. So yes, they’re expected if nginx opens a new
connection for each request.
Regards,
Florin
> On May 4, 2022, at 6:48 AM, weizhen9...@163.com wrote:
>
> Hi,
> When I use wrk to test the performance of nginx proxy using vpp host
> stack, I execute the
Hi,
Unfortunately, I have not, partly because I didn’t expect too much out of the
test due to the issues you’re hitting. What’s the difference between linux and
vpp with and without tcp_max_tw_buckets?
Regards,
Florin
> On May 3, 2022, at 3:28 AM, weizhen9...@163.com wrote:
>
> Hi,
>
>
It seems vpp is sending and receiving fins and resets. So if the remote end did
not send fins, probably the resets are the source of the epoll HUPs. If you
want to debug why those resets were sent, you might have to capture a pcap
trace or try to capture them while they are sent. “show tcp
Hi,
That indeed looks like an issue due to vpp not being able to recycle
connections fast enough. There are only 64k connections available between vpp
and the upstream server, so recycling them as fast as possible, i.e., with 0
timeout as the kernel does after tcp_max_tw_buckets threshold is
Hi,
As per this [1], after the tcp_max_tw_buckets threshold is hit, the timewait
time is 0 and this [2] explains what will go wrong. Assuming you’re hitting the
threshold, 1s timewait-time in vpp will probably not be enough to match
performance.
Not sure what you mean by “short link”. If you can’t
Hi,
Understood. See the comments in my previous reply regarding timewait-time
(tcp_max_tw_buckets practically sets time-wait to 0 once the threshold is
passed) and tcp src-address.
Regards,
Florin
> On Apr 30, 2022, at 10:08 AM, weizhen9...@163.com wrote:
>
> Hi,
> I test nginx proxy using
Hi,
What is performance in this case, CPS? If yes, does nginx proxy only towards
one IP, hence the need for tcp_max_tw_buckets?
You have the option to reduce time wait time in tcp by setting timewait-time in
tcp’s startup.conf stanza. I would not recommend reducing it too much as it can
lead
Hi,
I wouldn’t recommend it, but if you must change it, in vpp’s startup.conf add a
tcp stanza with cc-algo set to newreno: tcp { cc-algo newreno }
Regards,
Florin
> On Apr 26, 2022, at 10:30 PM, 25956760...@gmail.com wrote:
>
> Hi All,
> I have read the src code of TCP. I found that if I
Hi,
I’m guessing you’re asking how to run LDP from gdb. For that either create a
gdb script file and: gdb -x cmd.gdb --args ./epollandsocket, or run the
commands after starting gdb.
The minimal set of commands to start:
set exec-wrapper env
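A minimal cmd.gdb sketch along those lines; the library and config paths are
placeholders, adjust them to your build:

# cmd.gdb
set exec-wrapper env 'LD_PRELOAD=/path/to/libvcl_ldpreload.so' 'VCL_CONFIG=/path/to/vcl.conf'
run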
Hi,
Are you trying to run the test app through LDP? If yes, sharing one epoll fd
between two threads might be the source of the issues, although hard to say
what could go wrong.
With respect to your app, could you run it from gdb and check where it’s stuck?
Regards,
Florin
> On Apr 25,
> main thread was locking the binary api’s queue mutex, and then it scheduled
> to execute another process node, in this process node it called barrier sync.
> Is this a possible scenario?
>
> BRs,
> Kevin
>
> From: Florin Coras <fcoras.li...@gmail.com>
> Sen
Hi Kevin,
That’s a pretty old VPP release so you should maybe try to update.
Regarding the deadlock, what is main actually doing? If it didn’t lock the
binary api's queue mutex before the barrier sync, it shouldn’t deadlock.
Regards,
Florin
> On Apr 7, 2022, at 6:39 PM, Kevin Yan via
Hi Kunal,
Similar goal but it’s only been tested against a limited number of
applications.
Also, as per my previous reply, ldp accepts a mix of linux and vcl fd/sessions
and to that end it sets aside a number of fds for linux. Consequently, vcl fds
will start from 1 << LDP_ENV_SID_BIT and
I’ve never tried netperf so unfortunately I don’t even know if it works. From
the server logs, it looks like it hit some sort of error on accept.
Note that we set aside the first 1 << LDP_ENV_SID_BIT (env variable) fds for
linux. By default that value is 5, which is the min we accept, i.e., 32
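Assuming the environment variable behind LDP_ENV_SID_BIT is named LDP_SID_BIT,
raising the split point would look something like this sketch, with a
placeholder library path:

# reserve the first 1 << 10 = 1024 fds for linux
LDP_SID_BIT=10 LD_PRELOAD=/path/to/libvcl_ldpreload.so ./netperf ...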
Hi Kunal,
How are you attaching netperf to VCL/VPP? Unless you modify it to use VCL your
only option is to try to use LD_PRELOAD (see iperf example here [1]).
Note however that most probably LDP does not support all socket options netperf
might want.
Regards,
Florin
[1]
Hi Pim,
Definitely cool! Haven’t had a chance to go through all of it but the fact that
some binary api calls crash vpp is something we should fix.
It feels like vppcfg could also be used for extensive vpp api/cli/cfg testing.
My quick 0.02$
Regards,
Florin
> On Apr 2, 2022, at 8:17 AM,
As mentioned previously, that is not supported for connects currently.
Regards,
Florin
> On Mar 31, 2022, at 9:44 PM, weizhen9...@163.com wrote:
>
> Hi,
> I describe our scene in detail. We use nginx in vpp host stack as a proxy.
> And we add some features in nginx. For example, nginx close
Hi,
Given that 20k connections are being actively opened, those on main thread, and
40k are established, those on the workers, suggests that tcp runs out of ports
for connects. If possible either increase the number of destination IPs for
nginx or try “tcp src-address ip1-ip2” and pass in a
TCP accepts connections in time-wait (see here [1]) but won’t reuse ports of
connection in time-wait for connects. If you expect lots of active opens from
nginx to only one destination ip and you have multiple source ips, you could
try to use "tcp src-address” cli.
Regards,
Florin
[1]
I didn’t mean you should switch to envoy, just that throughput is pretty low
probably because of some configuration. What that configuration is,
unfortunately, is not obvious.
Regarding the kernel parameters, we have time wait reuse enabled (equivalent to
tcp_tw_reuse) but that should not
Hi,
Could you provide a bit more info about the numbers you are seeing and the type
of test you are doing? Also some details about your configs and vpp version?
As for tcp_tw_recycle, I believe that was removed in Linux 4.12 because it was
causing issues for nat-ed connections. Did you mean
Hi Kunal,
Yes, it might be worth looking into ip and tcp csum offloads.
But given that mtu ~ 9kB, maybe look into forcing tcp to build jumbo frames,
i.e., tcp { mtu 9000 } in startup.conf. It’ll be needed on both ends and I’m
assuming here network between your two vpp instances supports 9k
Hi Kunal,
No problem. Actually, another thing to consider might be mtu. If the interfaces
are configured with mtu > 1.5kB and the network accepts jumbo frames, maybe try
tcp { mtu }
Regards,
Florin
> On Mar 29, 2022, at 12:24 PM, Kunal Parikh wrote:
>
> Many thanks for looking into this
Yup, similar symptoms.
So beyond trying to figure out why checksum offloading is not working and
trying to combine that with gso, i.e., tcp { tso } in startup.conf, not sure
what else could be done.
If you decide to try debugging checksum offloading, try adding
enable-tcp-udp-checksum to
Actually this time the client worker loops/s has dropped to 7k. So that worker
seems to be struggling, probably because of the interface tx cost.
Not sure how that could be solved as it looks like an ena + dpdk tx issue. Out
of curiosity, if you try to limit iperf client bw by doing something
Hi Kunal,
I remember Shankar needed tcp { no-csum-offload } in startup.conf but I see you
disabled tx-checksum-offload for dpdk. So could you try disabling it from tcp?
The fact that csum offloading is not working is probably going to somewhat
affect throughput but I wouldn’t expect it to be
Hi Kunal,
First of all, that’s a lot of workers. For this test, could you just reduce the
number to 1? All of them, including worker 0 are spinning empty on both server
and client, i.e., loop/s > 1M.
Beyond that, the only thing I’m noticing is that the client is very bursty,
i.e., sends up
Hi Kunal,
Unfortunately, the screenshots are unreadable for me.
But if the throughput did not improve, maybe try:
clear run
show run
And check loop/s and vector/dispatch. And a
show session verbose 2
And let’s see what the connection reports in terms of errors, cwnd and so on.
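Spelled out via vppctl, the sequence would be (sketch; run traffic between the
clear and the show):

vppctl clear run
# ... let the test run for a few seconds ...
vppctl show run
vppctl show session verbose 2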
Regards,
Hi,
It does not. For such scenarios it’s probably better to use something like
memif.
Regards,
Florin
> On Mar 24, 2022, at 2:42 AM, 25956760...@gmail.com wrote:
>
> Does anybody know if VCL supports raw_socket? Thanks.
>
>
Hi Shankar,
That’s a pretty old release. Could you try something newer, like 22.02?
Nonetheless, you’ll probably need to try some of those optimizations.
Regards,
Florin
> On Mar 23, 2022, at 11:47 AM, Shankar Raju wrote:
>
> Hi Florin,
> I'm using VPP Version: 20.09-release. These were
Hi Shankar,
What is the result and what is the difference? Also, I might’ve missed it but
what was the vpp version in these tests?
Regarding optimizations:
- show hardware: will tell you the numa for your nic (if you have multiple
numas) and the rx/tx descriptor ring sizes. Typically for tcp
Hi Shankar,
In startup.conf under tcp stanza, add no-csum-offload.
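That is, a stanza along these lines (sketch; only the relevant part shown):

tcp {
  no-csum-offload
}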
Regards,
Florin
> On Mar 23, 2022, at 6:59 AM, Shankar Raju wrote:
>
> Hi Florin,
>
> I'm running this experiment on AWS and its using ENA NICs. I ran vppctl show
> error command and I did see errors because of bad
Hi Shankar,
What vpp version is this? For optimizations, could you take a look at a recent
version of [1]?
Having said that, let’s try debugging this in small steps. First, I’d recommend
not exporting LD_PRELOAD, but instead doing something like:
sudo sh -c “LD_PRELOAD= VCL_CONFIG= iperf3 -4 -s”
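With the elided values filled in with placeholder paths (adjust to your
build/install), that would look like:

sudo sh -c "LD_PRELOAD=/usr/lib/libvcl_ldpreload.so \
  VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -4 -s"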
Only cubic and newreno.
Regards,
Florin
> On Mar 21, 2022, at 9:15 PM, 25956760...@gmail.com wrote:
>
> Thank you for your answer, so is only Cubic supported now?
>
>
>
Hi,
TCP BBR is on our todo list but it’s not currently supported.
Regards,
Florin
> On Mar 21, 2022, at 10:40 AM, 25956760...@gmail.com wrote:
>
> Does anyone know if the vpp hoststack supports the tcp BBR congestion
> algorithm. I need to use it, thanks
>
>
>
>
/+/35654>
>
>
> Thanks.
>
>
> On Thu, Mar 17, 2022 at 12:07 AM Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Vijay,
>
> Yes, APP_OPTIONS_ADD_SEGMENT_SIZE will be needed if any listeners are used.
> It was there before but was not needed
gt;
> On Wed, Mar 16, 2022 at 12:00 PM Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Matt,
>
> Just tried running after a make build and no such message in show log for me.
> Did you try a make wipe?
>
> Regards,
> Florin
>
> > On Mar 1
> a->options[APP_OPTIONS_PREALLOC_FIFO_PAIRS] =
> pm->prealloc_fifos ? pm->prealloc_fifos : 0;
>
> a->options[APP_OPTIONS_FLAGS] = APP_OPTIONS_FLAGS_IS_BUILTIN;
>
> if (vnet_application_attach (a))
> {
> NAS_DBG("Failed to attach ");
> return -1;
> }
>
Hi Matt,
Just tried running after a make build and no such message in show log for me.
Did you try a make wipe?
Regards,
Florin
> On Mar 16, 2022, at 9:50 AM, Matthew Smith via lists.fd.io
> wrote:
>
> Hi,
>
> I have been testing against a build from yesterday's master branch (commit id
ror
>
>
> On Wed, Mar 16, 2022 at 1:31 PM Vijay Kumar <vjkumar2...@gmail.com> wrote:
> Hi Florin,
>
> Thanks for the clarification about the TCP changes b/w the 2 releases
>
> I will use your patch, hopefully I will catch the issue about where the dr
the connections
hit and the reporting of errors is done through node and session counters (see
show session verbose 2). Obviously that didn’t work properly for the listen
node.
Regards,
Florin
>
>
> On Wed, Mar 16, 2022 at 10:22 AM Florin Coras <fcoras.li...@gmail.com>
esp4-decrypt-tun    ESP pkts received    error
>    5    ipsec4-tun-input    good packets received    error
>    1    ip4-input    ip4