Saturating an ethernet card is easy.

The round trip from the kernel to user mode and back (packet handling in
user space, then down through the kernel stack, network layer, and so on)
is a long path that involves a lot of context switches between CPU
execution modes (ring 0, kernel; ring 3, user),
and that directly affects your latency, stability, and throughput.

Handling 1000 reqs/s has been pretty vanilla since the '90s...
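A rough way to see that boundary-crossing cost (just a sketch, assuming a Unix-like system with /dev/zero; Python's interpreter overhead inflates both numbers, but the gap still reflects the kernel round trip):

```python
import os
import time

# Compare a syscall (one ring 3 -> ring 0 -> ring 3 transition per call)
# with a pure user-space operation. Absolute numbers vary by machine;
# the ratio is the point.
N = 100_000
fd = os.open("/dev/zero", os.O_RDONLY)  # assumes a Unix-like system

t0 = time.perf_counter()
for _ in range(N):
    os.read(fd, 1)          # one kernel round trip per iteration
syscall_s = time.perf_counter() - t0
os.close(fd)

buf = b"\x00"
t0 = time.perf_counter()
for _ in range(N):
    len(buf)                # stays entirely in user space
user_s = time.perf_counter() - t0

print(f"{N} syscalls: {syscall_s:.3f}s | {N} user-space ops: {user_s:.3f}s")
```

On typical hardware the syscall loop is several times slower, which is why per-packet kernel/user transitions dominate at high packet rates.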


On Mon, Mar 29, 2021 at 1:43 PM Michael Goulish <[email protected]> wrote:

> Oh I know that's not very fast, but this is an endurance test, not a
> throughput test. I want to see that latency does not rise over a long
> period run.  If you try for maximum throughput, you mess up latency and
> then you can't see if something is changing slowly over time.
>
> A while ago I used iperf3 for a throughput test for the TCP adapter in
> which we were able to saturate a 40 Gbit/sec interface.    (I was only able
> to do that by using two separate iperf3 sender/receiver pairs, pointing in
> opposite directions.)
>
> I think we were not able to saturate the 40 Gbit link with just one iperf3
> sender/receiver pair because the receiver went to 100% CPU.
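(For anyone reproducing this: the dual-pair setup described above might look roughly like the following. Host names and ports are placeholders; `-s`, `-c`, `-p`, and `-t` are standard iperf3 flags.)

```shell
# Hypothetical sketch of two iperf3 pairs pushing traffic in opposite
# directions over the same 40 Gbit link (hostA and hostB are placeholders).
#
# On hostA: one server, one client
#   iperf3 -s -p 5201 &                 # receives the hostB -> hostA stream
#   iperf3 -c hostB -p 5202 -t 60       # drives the hostA -> hostB stream
#
# On hostB: the mirror image
#   iperf3 -s -p 5202 &
#   iperf3 -c hostA -p 5201 -t 60
#
# Each direction gets its own sender/receiver pair, so no single
# receiver process has to absorb both directions and peg a CPU core.
echo "two pairs, opposite directions, ports 5201/5202"
```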
>
>
>
>
>
> On Mon, Mar 29, 2021 at 12:11 PM Virgilio Fornazin <
> [email protected]> wrote:
>
> > 1000 req/s is SOOOOOOOOOOoooooooo
> > SSSSSSSSsssssLLLLLLLlllllloooooooWWWWWwwww w w   w    w     w . . .
> >
> > The qpidd C++ broker was able to do 800,000 msg/s in / 800,000 msg/s
> > out on a 12-core Xeon E5690, 32 GB RAM, 2x 10GbE LAN, RHEL 6.x.
> > That test ran in 2011; current HW should be at least 2-3 times better...
> >
> > On Mon, Mar 29, 2021 at 3:59 AM Michael Goulish <[email protected]>
> > wrote:
> >
> > > * The test has now passed 220,000 seconds (2.5 days) with no
> > > failure. 1000 requests per second, and a new batch of 100 Hey
> > > workers every 60 seconds.
> > >
> > > * Average response time is not changing. It has been between 1 and
> > > 2 msec the whole test.
> > >
> > > * Router memory does *not* appear to be growing without bound. It
> > > is larger than it was at the start, but the intervals between
> > > little upticks are becoming longer and longer. Last uptick (of
> > > 264K) was 20 hours ago. (Looking at router on receiving side.)
> > >
> > > Using the Hey load generator against an Nginx server, with two
> > > routers in the middle -- either router on its own box, fast link
> > > between them.
> > >
> > > Hey is using 100 parallel workers, each doing 10 HTTP requests per
> > > second.
> > >
> > > Hey is doing repeated 60-second tests, and reporting statistics on
> > > each one.
> > >
> > > Unfortunately I still cannot run qdstat, although I have both
> > > pythons installed and have done standard builds of both proton and
> > > dispatch. Can't find python module named 'proton'.
> > >
> > > Test is continuing until I need the machines for something else.
> > >
> >
>
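For reference, the Hey setup quoted above (100 workers, 10 req/s each, repeated 60-second runs) corresponds to an invocation along these lines. The URL is a placeholder; `-c`, `-q`, and `-z` are hey's worker-count, per-worker rate-limit, and duration flags.

```shell
URL="http://router-host:8080/"   # placeholder: the nginx-facing router address

# 100 workers, each rate-limited to 10 req/s, for 60 seconds per run:
# aggregate load = 100 * 10 = 1000 req/s, as in the quoted test.
if command -v hey >/dev/null 2>&1; then
  hey -c 100 -q 10 -z 60s "$URL"
else
  echo "hey not installed; target aggregate rate: $((100 * 10)) req/s"
fi
```

Re-running this in a loop and logging each run's latency summary gives the "repeated 60-second tests" pattern described in the quoted message.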
