On Fri, Sep 23, 2016 at 06:51:04PM +0200, Jesper Dangaard Brouer wrote:
> This is your git tree, right:
> https://git.kernel.org/cgit/linux/kernel/git/peterz/queue.git/
>
> Doesn't look like you pushed it yet, or do I need to look at a specific
> branch?
I mainly work from a local quilt queue w
On Fri, 23 Sep 2016 13:53:33 +0200
Peter Zijlstra wrote:
> On Fri, Sep 23, 2016 at 01:35:59PM +0200, Daniel Borkmann wrote:
> > On 09/02/2016 08:39 AM, David Miller wrote:
> > >
> > >I'm just kind of assuming this won't go through my tree, but I can take
> > >it if that's what everyone agrees t
On Fri, Sep 23, 2016 at 01:35:59PM +0200, Daniel Borkmann wrote:
> On 09/02/2016 08:39 AM, David Miller wrote:
> >
> >I'm just kind of assuming this won't go through my tree, but I can take
> >it if that's what everyone agrees to.
>
> Was this actually picked up somewhere in the mean time?
I can
On 09/02/2016 08:39 AM, David Miller wrote:
From: Eric Dumazet
Date: Wed, 31 Aug 2016 10:42:29 -0700
From: Eric Dumazet
A while back, Paolo and Hannes sent an RFC patch adding threaded-able
napi poll loop support : (https://patchwork.ozlabs.org/patch/620657/)
The problem seems to be that so
On Thu, 1 Sep 2016 17:28:02 +0200
Peter Zijlstra wrote:
> On Thu, Sep 01, 2016 at 03:30:42PM +0200, Jesper Dangaard Brouer wrote:
> > Still... enabled!
> > Hmmm.. more idea how to disable this???
>
> I think you ought to be able to assign yourself to the root cgroup,
> something like:
>
> e
From: Eric Dumazet
Date: Wed, 31 Aug 2016 10:42:29 -0700
> From: Eric Dumazet
>
> A while back, Paolo and Hannes sent an RFC patch adding threaded-able
> napi poll loop support : (https://patchwork.ozlabs.org/patch/620657/)
>
> The problem seems to be that softirqs are very aggressive and are
On Thu, Sep 01, 2016 at 03:30:42PM +0200, Jesper Dangaard Brouer wrote:
> Still... enabled!
> Hmmm.. more idea how to disable this???
I think you ought to be able to assign yourself to the root cgroup,
something like:
echo $$ > /cgroup/tasks
or wherever the cpu-cgroup controller is mounted a
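A minimal sketch of the suggestion above, assuming a cgroup v1 layout with the cpu controller mounted at /sys/fs/cgroup/cpu (the mount point varies by distro; check /proc/mounts for the real one):

```shell
# See which cgroups this shell belongs to; a path of "/" means the
# root cgroup for that controller.
cat /proc/self/cgroup

# Move this shell into the root cpu cgroup, if that controller is
# mounted here and we have permission to write its tasks file.
if [ -w /sys/fs/cgroup/cpu/tasks ]; then
    echo $$ > /sys/fs/cgroup/cpu/tasks
fi
```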
On 01.09.2016 14:57, Eric Dumazet wrote:
> On Thu, 2016-09-01 at 14:38 +0200, Jesper Dangaard Brouer wrote:
>
>> Correction, on the server-under-test, I'm actually running RHEL7.2
>>
>>
>>> How do I verify/check if I have enabled a cpu-cgroup?
>>
>> Hannes says I can look in "/proc/self/cgroup"
>>
On Thu, 1 Sep 2016 14:48:39 +0200
Peter Zijlstra wrote:
> On Thu, Sep 01, 2016 at 02:38:59PM +0200, Jesper Dangaard Brouer wrote:
> > On Thu, 1 Sep 2016 14:29:25 +0200
> > Jesper Dangaard Brouer wrote:
> >
> > > On Thu, 1 Sep 2016 13:53:56 +0200
> > > Peter Zijlstra wrote:
> > >
> > > > O
On Thu, 2016-09-01 at 15:00 +0200, Hannes Frederic Sowa wrote:
> On 01.09.2016 14:57, Eric Dumazet wrote:
> > On Thu, 2016-09-01 at 14:38 +0200, Jesper Dangaard Brouer wrote:
> >
> >> Correction, on the server-under-test, I'm actually running RHEL7.2
> >>
> >>
> >>> How do I verify/check if I have
On Thu, 2016-09-01 at 12:38 +0200, Jesper Dangaard Brouer wrote:
> I see max queue of 47MBytes, and worse an average standing queue of
> 25Mbytes, which is really bad for the latency seen by the
> application. And having this much outstanding memory is also bad for
> CPU cache size effects, and st
On Thu, 2016-09-01 at 14:38 +0200, Jesper Dangaard Brouer wrote:
> Correction, on the server-under-test, I'm actually running RHEL7.2
>
>
> > How do I verify/check if I have enabled a cpu-cgroup?
>
> Hannes says I can look in "/proc/self/cgroup"
>
> $ cat /proc/self/cgroup
> 7:net_cls:/
> 6
On Thu, 2016-09-01 at 14:05 +0200, Hannes Frederic Sowa wrote:
> Would it make sense to include used socket backlog in udp socket lookup
> compute_score calculation? Just want to throw out the idea; I actually
> could imagine it also causing bad side effects.
Hopefully we can get rid of the backlog
On Thu, Sep 01, 2016 at 02:38:59PM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 1 Sep 2016 14:29:25 +0200
> Jesper Dangaard Brouer wrote:
>
> > On Thu, 1 Sep 2016 13:53:56 +0200
> > Peter Zijlstra wrote:
> >
> > > On Thu, Sep 01, 2016 at 01:02:31PM +0200, Jesper Dangaard Brouer wrote:
> > >
On Thu, 1 Sep 2016 14:29:25 +0200
Jesper Dangaard Brouer wrote:
> On Thu, 1 Sep 2016 13:53:56 +0200
> Peter Zijlstra wrote:
>
> > On Thu, Sep 01, 2016 at 01:02:31PM +0200, Jesper Dangaard Brouer wrote:
> > >PID S %CPU TIME+ COMMAND
> > > 3 R 50.0 29:02.23 ksoftirqd/0
> > >
On Thu, 1 Sep 2016 13:53:56 +0200
Peter Zijlstra wrote:
> On Thu, Sep 01, 2016 at 01:02:31PM +0200, Jesper Dangaard Brouer wrote:
> >PID S %CPU TIME+ COMMAND
> > 3 R 50.0 29:02.23 ksoftirqd/0
> > 10881 R 10.7 1:01.61 udp_sink
> > 10837 R 10.0 1:05.20 udp_sink
> >
On 31.08.2016 22:42, Eric Dumazet wrote:
> On Wed, 2016-08-31 at 21:40 +0200, Jesper Dangaard Brouer wrote:
>
>> I can confirm the improvement of approx 900Kpps (no wonder people have
>> been complaining about DoS against UDP/DNS servers).
>>
>> BUT during my extensive testing, of this patch, I al
On 31.08.2016 19:42, Eric Dumazet wrote:
> From: Eric Dumazet
>
> A while back, Paolo and Hannes sent an RFC patch adding threaded-able
> napi poll loop support : (https://patchwork.ozlabs.org/patch/620657/)
>
> The problem seems to be that softirqs are very aggressive and are often
> handled b
On Thu, Sep 01, 2016 at 01:02:31PM +0200, Jesper Dangaard Brouer wrote:
>PID S %CPU TIME+ COMMAND
> 3 R 50.0 29:02.23 ksoftirqd/0
> 10881 R 10.7 1:01.61 udp_sink
> 10837 R 10.0 1:05.20 udp_sink
> 10852 S 10.0 1:01.78 udp_sink
> 10862 R 10.0 1:05.19 udp_si
On 01.09.2016 13:02, Jesper Dangaard Brouer wrote:
> On Wed, 31 Aug 2016 23:51:16 +0200
> Jesper Dangaard Brouer wrote:
>
>> On Wed, 31 Aug 2016 13:42:30 -0700
>> Eric Dumazet wrote:
>>
>>> On Wed, 2016-08-31 at 21:40 +0200, Jesper Dangaard Brouer wrote:
>>>
I can confirm the improvement
On Wed, 31 Aug 2016 23:51:16 +0200
Jesper Dangaard Brouer wrote:
> On Wed, 31 Aug 2016 13:42:30 -0700
> Eric Dumazet wrote:
>
> > On Wed, 2016-08-31 at 21:40 +0200, Jesper Dangaard Brouer wrote:
> >
> > > I can confirm the improvement of approx 900Kpps (no wonder people have
> > > been compl
On Wed, 31 Aug 2016 16:29:56 -0700 Rick Jones wrote:
> On 08/31/2016 04:11 PM, Eric Dumazet wrote:
> > On Wed, 2016-08-31 at 15:47 -0700, Rick Jones wrote:
> >> With regard to drops, are both of you sure you're using the same socket
> >> buffer sizes?
> >
> > Does it really matter ?
>
> At
On 08/31/2016 04:11 PM, Eric Dumazet wrote:
On Wed, 2016-08-31 at 15:47 -0700, Rick Jones wrote:
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
Does it really matter ?
At least at points in the past I have seen different drop counts at the
SO_RCVBUF ba
On Wed, 2016-08-31 at 15:47 -0700, Rick Jones wrote:
> With regard to drops, are both of you sure you're using the same socket
> buffer sizes?
Does it really matter ?
I used the standard /proc/sys/net/core/rmem_default, but under flood
receive queue is almost always full, even if you make it big
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
In the meantime, is anything interesting happening with TCP_RR or
TCP_STREAM?
happy benchmarking,
rick jones
On Wed, 2016-08-31 at 23:51 +0200, Jesper Dangaard Brouer wrote:
>
> The result from this run were handling 1,517,248 pps, without any
> drops, all processes pinned to the same CPU.
>
> $ nstat > /dev/null && sleep 1 && nstat
> #kernel
> IpInReceives                  1517225            0.0
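The nstat invocation above works by resetting a baseline on the first call and printing the counters accumulated during the sleep on the second. The same per-second delta can be read without nstat by sampling the kernel's IP counters directly (a sketch; InReceives is field 4 of the Ip data row in /proc/net/snmp):

```shell
# Sample IpInReceives twice, one second apart, and print the difference;
# the data row (as opposed to the header row) starts "Ip: <digit>".
a=$(awk '/^Ip: [0-9]/ {print $4}' /proc/net/snmp)
sleep 1
b=$(awk '/^Ip: [0-9]/ {print $4}' /proc/net/snmp)
echo "IpInReceives/sec: $((b - a))"
```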
On Wed, 31 Aug 2016 13:42:30 -0700
Eric Dumazet wrote:
> On Wed, 2016-08-31 at 21:40 +0200, Jesper Dangaard Brouer wrote:
>
> > I can confirm the improvement of approx 900Kpps (no wonder people have
> > been complaining about DoS against UDP/DNS servers).
> >
> > BUT during my extensive testing
On Wed, 2016-08-31 at 21:40 +0200, Jesper Dangaard Brouer wrote:
> I can confirm the improvement of approx 900Kpps (no wonder people have
> been complaining about DoS against UDP/DNS servers).
>
> BUT during my extensive testing, of this patch, I also think that we
> have not gotten to the bottom
On Wed, 31 Aug 2016 10:42:29 -0700
Eric Dumazet wrote:
> From: Eric Dumazet
>
> A while back, Paolo and Hannes sent an RFC patch adding threaded-able
> napi poll loop support : (https://patchwork.ozlabs.org/patch/620657/)
>
> The problem seems to be that softirqs are very aggressive and are o
From: Eric Dumazet
A while back, Paolo and Hannes sent an RFC patch adding threaded-able
napi poll loop support : (https://patchwork.ozlabs.org/patch/620657/)
The problem seems to be that softirqs are very aggressive and are often
handled by the current process, even if we are under stress and