On Sun, Oct 8, 2023 at 8:27 PM Christian Marangi wrote:
>
> On Sun, Oct 08, 2023 at 09:08:41AM +0200, Eric Dumazet wrote:
> > On Fri, Oct 6, 2023 at 8:49 PM Christian Marangi
> > wrote:
> > >
> > > On Thu, Oct 05, 2023 at 06:16:26PM +0200, Eric Dumazet wrote:
On Fri, Oct 6, 2023 at 8:49 PM Christian Marangi wrote:
>
> On Thu, Oct 05, 2023 at 06:16:26PM +0200, Eric Dumazet wrote:
> > On Tue, Oct 3, 2023 at 8:36 PM Christian Marangi
> > wrote:
> > >
> > > Replace if condition of napi_schedule_prep/__napi_schedule and use bool
> > > from napi_schedule directly where possible.
On Fri, Oct 6, 2023 at 8:52 PM Christian Marangi wrote:
>
> On Thu, Oct 05, 2023 at 06:41:03PM +0200, Eric Dumazet wrote:
> > On Thu, Oct 5, 2023 at 6:32 PM Jakub Kicinski wrote:
> > >
> > > On Thu, 5 Oct 2023 18:11:56 +0200 Eric Dumazet wrote:
> >
On Thu, Oct 5, 2023 at 6:32 PM Jakub Kicinski wrote:
>
> On Thu, 5 Oct 2023 18:11:56 +0200 Eric Dumazet wrote:
> > OK, but I suspect some users of napi_reschedule() might not be race-free...
>
> What's the race you're thinking of?
This sort of thing... the rac
On Tue, Oct 3, 2023 at 8:36 PM Christian Marangi wrote:
>
> Replace if condition of napi_schedule_prep/__napi_schedule and use bool
> from napi_schedule directly where possible.
>
> Signed-off-by: Christian Marangi
> ---
> drivers/net/ethernet/atheros/atlx/atl1.c | 4 +---
> drivers/net/ethe
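The conversion described in the patch above is mechanical, so a minimal sketch may help. This is illustrative only: the "foo" names, the priv layout and the interrupt-masking hook are invented, not taken from the series; only napi_schedule_prep(), __napi_schedule() and napi_schedule() are real kernel helpers, and napi_schedule() is assumed to return bool as this series proposes.

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct foo_priv {
	struct napi_struct napi;
};

static void foo_mask_rx_irq(struct foo_priv *priv)
{
	/* stub for the sketch: a driver would mask its RX interrupt here */
}

/* Before: open-coded test-and-schedule. */
static irqreturn_t foo_isr_old(int irq, void *dev_id)
{
	struct foo_priv *priv = dev_id;

	if (napi_schedule_prep(&priv->napi))
		__napi_schedule(&priv->napi);
	return IRQ_HANDLED;
}

/* After: napi_schedule() performs the same check and call internally,
 * and its bool result tells the caller whether the poll was actually
 * scheduled, so it can drive driver-specific work directly. */
static irqreturn_t foo_isr_new(int irq, void *dev_id)
{
	struct foo_priv *priv = dev_id;

	if (napi_schedule(&priv->napi))
		foo_mask_rx_irq(priv);
	return IRQ_HANDLED;
}
```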
Kleine-Budde # for can/dev/rx-offload.c
OK, but I suspect some users of napi_reschedule() might not be race-free...
Reviewed-by: Eric Dumazet
On Tue, Oct 3, 2023 at 8:36 PM Christian Marangi wrote:
>
> Replace drivers that still use napi_schedule_prep/__napi_schedule
> with napi_schedule helper as it does the same exact check and call.
>
> Signed-off-by: Christian Marangi
Reviewed-by: Eric Dumazet
has been scheduled.
> been scheduled.
>
> Signed-off-by: Christian Marangi
Yeah, I guess you forgot to mention I suggested this patch ...
Reviewed-by: Eric Dumazet
On Mon, 2016-05-16 at 11:14 +0300, Roman Yeryomin wrote:
> So, very close to "as before": 900Mbps UDP, 750 TCP.
> But still, I was expecting performance improvements from latest ath10k
> code, not regressions.
> I know that hw is capable of 800Mbps TCP, which I'm targeting.
One flow can reach 800
On Mon, 2016-05-16 at 01:34 +0300, Roman Yeryomin wrote:
> qdisc fq_codel 8003: parent :3 limit 1024p flows 16 quantum 1514
> target 80.0ms ce_threshold 32us interval 100.0ms ecn
> Sent 1601271168 bytes 1057706 pkt (dropped 1422304, overlimits 0 requeues 17)
> backlog 1541252b 1018p requeues 17
On Fri, 2016-05-06 at 17:25 +0200, moeller0 wrote:
> Hi Eric,
>
> > On May 6, 2016, at 15:25 , Eric Dumazet wrote:
> > Angles of attack :
> >
> > 1) I will provide a per device /sys/class/net/eth0/gro_max_frags so that
> > we can more easily control amou
On Fri, 2016-05-06 at 13:46 +0200, moeller0 wrote:
> Hi Jesper,
>
> > On May 6, 2016, at 13:33 , Jesper Dangaard Brouer
> wrote:
> >
> >
> > On Fri, 6 May 2016 10:41:53 +0200 moeller0 wrote:
> >
> >> Speaking out of total ignorance, I ask why not account
> >> GRO/GSO packets by the number
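The question above is truncated, but it appears to suggest accounting a GRO/GSO aggregate by the number of MTU/MSS-sized segments it carries rather than as a single packet when enforcing packet-count limits. A small, self-contained sketch of that accounting under that reading; the function name is made up and this is not kernel API:

```c
#include <stdio.h>

/* Count how many MSS-sized segments a GRO/GSO super-packet represents. */
static unsigned int segment_count(unsigned int payload_len, unsigned int mss)
{
	/* round up: a partially filled last segment still counts as one */
	return (payload_len + mss - 1) / mss;
}

int main(void)
{
	printf("65160 bytes @ mss 1448 -> %u segments\n", segment_count(65160, 1448));
	printf(" 1448 bytes @ mss 1448 -> %u segment(s)\n", segment_count(1448, 1448));
	return 0;
}
```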
On Thu, 2016-05-05 at 19:25 +0300, Roman Yeryomin wrote:
> On 5 May 2016 at 19:12, Eric Dumazet wrote:
> > On Thu, 2016-05-05 at 17:53 +0300, Roman Yeryomin wrote:
> >
> >>
> >> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
> >> quan
On Thu, 2016-05-05 at 17:53 +0300, Roman Yeryomin wrote:
>
> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
> quantum 1514 target 5.0ms interval 100.0ms ecn
> Sent 12306 bytes 128 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 0 drop_overlimit
On Tue, 2016-05-03 at 10:37 -0700, Dave Taht wrote:
> Thus far this batch drop patch is testing out beautifully. Under a
> 900Mbit flood going into 100Mbit on the pcengines apu2, cpu usage for
> ksoftirqd now doesn't crack 10%, where before (under
> pie,pfifo,fq_codel,cake & the prior fq_codel) it
On Tue, 2016-05-03 at 12:50 +, Agarwal, Anil wrote:
> I should be more precise about my statement regarding the inaccuracy of
> the algorithm.
> Given that we dequeue packets in round-robin manner, the maxqidx value may,
> on occasion, point to a queue
> which is smaller than the largest queue
On Mon, 2016-05-02 at 10:08 -0700, Dave Taht wrote:
> On Mon, May 2, 2016 at 9:14 AM, Eric Dumazet wrote:
> >
> > I want to check your qdisc configuration, the one that you used and
> > where you had fq_codel performance issues
> >
> > tc -s -d qdisc
>
>
On Mon, 2016-05-02 at 18:43 +0300, Roman Yeryomin wrote:
> On 2 May 2016 at 18:07, Eric Dumazet wrote:
> > On Mon, 2016-05-02 at 17:18 +0300, Roman Yeryomin wrote:
> >
> >> Imagine you are a video operator, have MacBook Pro, gigabit LAN and
> >> NAS on ethernet side.
On Mon, 2016-05-02 at 16:47 +0300, Roman Yeryomin wrote:
> So it looks to me that fq_codel is just broken if it needs half of my
> resources.
Agreed.
When I wrote fq_codel, I was not expecting that one UDP socket could
fill fq_codel with packets, since we have standard backpressure.
SO_SNDBUF
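For context on the backpressure Eric mentions: for locally generated UDP traffic, packets sitting in the qdisc are still charged against the sending socket's SO_SNDBUF, which normally bounds how much a single socket can queue (forwarded traffic has no such per-socket limit). A minimal userspace sketch just to show the knob being discussed; it is not taken from the thread:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int sndbuf = 0;
	socklen_t len = sizeof(sndbuf);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* How much data this one socket may have outstanding; skbs queued
	 * in the qdisc count against it for locally generated traffic. */
	getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
	printf("default SO_SNDBUF: %d bytes\n", sndbuf);

	/* Shrinking it tightens the per-socket backpressure. */
	sndbuf = 64 * 1024;
	setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
	return 0;
}
```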
On Mon, 2016-05-02 at 17:18 +0300, Roman Yeryomin wrote:
> Imagine you are a video operator, have MacBook Pro, gigabit LAN and
> NAS on ethernet side. You would want to get maximum speed. And
> fq_codel just dropped it down to 550Mbps for TCP (instead of 750Mbps)
> and to 30Mbps for UDP (instead o
On Mon, 2016-05-02 at 17:09 +0300, Roman Yeryomin wrote:
> So if I run some UDP download you will just kill me? Sounds broken.
>
Seriously guys, I never suggested killing a _task_ but the _flow_,
meaning dropping packets. See?
If you do not want to drop packets, do not use fq_codel and simply us
On Sun, 2016-05-01 at 11:26 -0700, Dave Taht wrote:
> On Sun, May 1, 2016 at 10:59 AM, Eric Dumazet wrote:
> >
> > Well, just _kill_ the offender, instead of trying to be gentle.
>
> I like it. :) Killing off a malfunctioning program flooding the local
> network int
On Sun, 2016-05-01 at 23:35 +0300, Jonathan Morton wrote:
> > On 1 May, 2016, at 21:46, Eric Dumazet wrote:
> >
> > Optimizing the search function is not possible, unless you slow down the
> > fast path. This was my design choice.
>
> I beg to differ. Cake iterat
On Sun, 2016-05-01 at 11:46 -0700, Eric Dumazet wrote:
> Just drop half the backlog packets instead of 1 (maybe add a cap of 64
> packets to avoid too big bursts of kfree_skb(), which might add cpu
> spikes) and the problem is gone.
>
I used the following patch and it indeed solved the issue
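The patch itself is cut off in this preview. As a rough, self-contained illustration of the batch-drop idea being discussed (drop up to half of the fattest flow's backlog, capped per call, instead of one packet at a time), with made-up names and not the actual fq_codel_drop() change:

```c
#include <stdio.h>

#define DROP_BATCH_CAP 64	/* cap to avoid large kfree_skb() bursts */

/* How many packets to drop from a flow with the given backlog. */
static unsigned int batch_drop(unsigned int backlog)
{
	unsigned int to_drop = backlog / 2;

	if (to_drop > DROP_BATCH_CAP)
		to_drop = DROP_BATCH_CAP;
	if (to_drop < 1 && backlog)
		to_drop = 1;	/* always make progress */
	return to_drop;
}

int main(void)
{
	unsigned int backlogs[] = { 1, 10, 200, 1024 };

	for (unsigned int i = 0; i < sizeof(backlogs) / sizeof(backlogs[0]); i++)
		printf("backlog %4u -> drop %2u packets\n",
		       backlogs[i], batch_drop(backlogs[i]));
	return 0;
}
```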
On Sun, 2016-05-01 at 21:20 +0300, Jonathan Morton wrote:
> > On 1 May, 2016, at 20:59, Eric Dumazet wrote:
> >
> > fq_codel_drop() could drop _all_ packets of the fat flow, instead of a
> > single one.
>
> Unfortunately, that could have bad consequences if the “f
On Sat, 2016-04-30 at 20:41 -0700, Dave Taht wrote:
> >>>
> >>> 45.78% [kernel] [k] fq_codel_drop
> >>> 3.05% [kernel] [k] ag71xx_poll
> >>> 2.18% [kernel] [k] skb_release_data
> >>> 2.01% [kernel] [k] r4k_dma_cache_inv
>
> The udp flood behavior is n
On Sun, 2015-10-04 at 10:05 -0700, Ben Greear wrote:
> I guess I'll just stop using Cubic. Any suggestions for another
> congestion algorithm to use? I'd prefer something that worked well
> in pretty much any network condition, of course, and it has to work with
> ath10k.
>
> We can also run so