On Tue, 2018-09-18 at 12:19 -0400, Song Liu wrote:
> > On Sep 18, 2018, at 6:45 AM, Eric Dumazet
> > wrote:
> >
> > On Tue, Sep 18, 2018 at 1:41 AM Song Liu
> > wrote:
> > >
> > > We are debugging an issue where netconsole messages trigger a
> > > pegged softirq
> > > (ksoftirqd taking 100% CPU
On Wed, 2016-12-21 at 10:55 -0500, George Spelvin wrote:
> Actually, DJB just made a very relevant suggestion.
>
> As I've mentioned, the 32-bit performance problems are x86-specific.
> ARM does very well, and other processors aren't bad at all.
>
> SipHash fits very nicely (and ru
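For reference, SipHash-2-4 maps a 128-bit key (k0, k1) and an
arbitrary-length message to a 64-bit tag using four 64-bit words of
state and the ARX round being discussed here. Below is a minimal
userspace C sketch, assuming a little-endian host; it is an
illustration of the algorithm, not the kernel's lib/siphash.c.

/* Minimal SipHash-2-4 reference sketch. Illustration only. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define ROTL64(x, b) (((x) << (b)) | ((x) >> (64 - (b))))

#define SIPROUND do {						\
	v0 += v1; v1 = ROTL64(v1, 13); v1 ^= v0; v0 = ROTL64(v0, 32); \
	v2 += v3; v3 = ROTL64(v3, 16); v3 ^= v2;		\
	v0 += v3; v3 = ROTL64(v3, 21); v3 ^= v0;		\
	v2 += v1; v1 = ROTL64(v1, 17); v1 ^= v2; v2 = ROTL64(v2, 32); \
} while (0)

uint64_t siphash24(const void *data, size_t len, uint64_t k0, uint64_t k1)
{
	/* "somepseudorandomlygeneratedbytes" constants XORed with the key */
	uint64_t v0 = 0x736f6d6570736575ULL ^ k0;
	uint64_t v1 = 0x646f72616e646f6dULL ^ k1;
	uint64_t v2 = 0x6c7967656e657261ULL ^ k0;
	uint64_t v3 = 0x7465646279746573ULL ^ k1;
	const unsigned char *p = data;
	const unsigned char *end = p + (len & ~(size_t)7);
	uint64_t m, b = (uint64_t)len << 56;
	size_t i;

	for (; p != end; p += 8) {	/* 2 compression rounds per word */
		memcpy(&m, p, 8);	/* assumes little-endian host */
		v3 ^= m; SIPROUND; SIPROUND; v0 ^= m;
	}
	for (i = 0; i < (len & 7); i++)	/* tail block: bytes + length */
		b |= (uint64_t)p[i] << (8 * i);
	v3 ^= b; SIPROUND; SIPROUND; v0 ^= b;

	v2 ^= 0xff;			/* 4 finalization rounds */
	SIPROUND; SIPROUND; SIPROUND; SIPROUND;
	return v0 ^ v1 ^ v2 ^ v3;
}

int main(void)
{
	const char msg[] = "example";

	printf("%016llx\n",
	       (unsigned long long)siphash24(msg, sizeof(msg) - 1, 1, 2));
	return 0;
}

The hot loop is two SIPROUNDs per 8-byte word plus a fixed four-round
finalization, which is why per-byte cost dominates on long inputs like
the 1024-byte buffers measured below.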
On Wed, 2016-12-21 at 07:56 -0800, Eric Dumazet wrote:
> On Wed, 2016-12-21 at 15:42 +0100, Jason A. Donenfeld wrote:
> George said :
>
> > Cycles per byte on 1024 bytes of data:
> >               Pentium 4    Core 2 Duo    Ivy Bridge
> > SipHash-2-4
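Figures like these come from cycle-accurate microbenchmarks. A rough
userspace approximation, measuring ns/byte rather than cycles/byte over
1024-byte buffers (compile together with the siphash24() sketch above,
minus that sketch's demo main()):

/* Rough throughput harness for the siphash24() sketch above; prints
 * ns/byte over 1024-byte buffers. A true cycles/byte figure would read
 * the TSC with the CPU frequency pinned; this is only illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

uint64_t siphash24(const void *data, size_t len, uint64_t k0, uint64_t k1);

int main(void)
{
	static unsigned char buf[1024];
	volatile uint64_t sink = 0;	/* keep the calls from being elided */
	const long iters = 1000000;
	struct timespec t0, t1;
	double ns;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++)
		sink ^= siphash24(buf, sizeof(buf), 1, 2);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("%.3f ns/byte (sink=%llx)\n",
	       ns / ((double)iters * sizeof(buf)),
	       (unsigned long long)sink);
	return 0;
}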
On Wed, 2016-05-11 at 07:40 -0700, Eric Dumazet wrote:
> On Wed, May 11, 2016 at 6:13 AM, Hannes Frederic Sowa
> wrote:
>
> > This looks racy to me, as ksoftirqd could be in the process of
> > stopping and we would miss another softirq invocation.
>
> Looking at smpboot_thread_fn(), it looks
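For context, the direction this thread converged on (later merged as
"softirq: Let ksoftirqd do its job") is to skip inline softirq
processing whenever ksoftirqd is already active on the CPU, so that a
packet flood degrades into fairly-scheduled work instead of starving
user tasks. A kernel-style sketch of the check, not the exact patch
under discussion here:

/* Sketch: if ksoftirqd is already running on this CPU, leave pending
 * softirqs for it rather than processing them inline on IRQ exit. */
static bool ksoftirqd_running(void)
{
	struct task_struct *tsk = __this_cpu_read(ksoftirqd);

	return tsk && (tsk->state == TASK_RUNNING);
}

static inline void invoke_softirq(void)
{
	if (ksoftirqd_running())
		return;		/* ksoftirqd will pick the work up */
	/* ... otherwise process softirqs inline as before ... */
}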
On Tue, 2016-05-10 at 14:53 -0700, Eric Dumazet wrote:
> On Tue, 2016-05-10 at 17:35 -0400, Rik van Riel wrote:
>
> >
> > You might need another one of these in invoke_softirq()
> >
> Excellent.
>
> I gave it a quick try (without your suggestion), and host
On Tue, 2016-05-10 at 14:31 -0700, Eric Dumazet wrote:
> On Tue, 2016-05-10 at 14:09 -0700, Eric Dumazet wrote:
> >
> > On Tue, May 10, 2016 at 1:46 PM, Hannes Frederic Sowa
> > wrote:
> >
> > >
> > > I agree here, but I don't think this patch in particular adds a
> > > lot of bloat, and some
On Tue, 2016-05-10 at 16:52 -0400, David Miller wrote:
> From: Rik van Riel
> Date: Tue, 10 May 2016 16:50:56 -0400
>
> > On Tue, 2016-05-10 at 16:45 -0400, David Miller wrote:
> >> From: Paolo Abeni
> >> Date: Tue, 10 May 2016 22:22:50 +0200
> >>
>
On Tue, 2016-05-10 at 16:45 -0400, David Miller wrote:
> From: Paolo Abeni
> Date: Tue, 10 May 2016 22:22:50 +0200
>
> > On Tue, 2016-05-10 at 09:08 -0700, Eric Dumazet wrote:
> >> On Tue, 2016-05-10 at 18:03 +0200, Paolo Abeni wrote:
> >>
> >> > If a single-core host is under network flood, i.e
On Thu, 2016-04-07 at 08:48 -0700, Chuck Lever wrote:
> >
> > On Apr 7, 2016, at 7:38 AM, Christoph Hellwig
> > wrote:
> >
> > This is also very interesting for storage targets, which face the
> > same
> > issue. SCST has a mode where it caches some fully constructed
> > SGLs,
> > which is prob
On Tue, 4 Dec 2007 21:17:01 -0800
Andrew Morton <[EMAIL PROTECTED]> wrote:
> Changes since 2.6.24-rc3-mm2:
2.6.24-rc4-mm1 brought a nice TCP oops on my x86_64 system, while I
was stress-testing the VM and watching via ssh:
general protection fault: [1] SMP
last sysfs file: /sys/devices/pci
Evgeniy Polyakov wrote:
On Sat, Jan 20, 2007 at 05:36:03PM -0500, Rik van Riel ([EMAIL PROTECTED])
wrote:
Evgeniy Polyakov wrote:
On Fri, Jan 19, 2007 at 01:53:15PM +0100, Peter Zijlstra
([EMAIL PROTECTED]) wrote:
A further development of this idea is to prevent such OOM conditions
altogether
Evgeniy Polyakov wrote:
On Fri, Jan 19, 2007 at 01:53:15PM +0100, Peter Zijlstra ([EMAIL PROTECTED])
wrote:
A further development of this idea is to prevent such OOM conditions
altogether - by starting swapping early (but wisely) and reducing memory
usage.
These just postpone execution but will
Christoph Lameter wrote:
On Fri, 25 Aug 2006, Peter Zijlstra wrote:
The basic premise is that network sockets serving the VM need undisturbed
functionality in the face of severe memory shortage.
This patch-set provides the framework for that.
Hmmm.. Is it not possible to avoid the me
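To make the patch-set's premise concrete: its mechanism can be modeled
as a two-tier allocator in which ordinary requests use the general
pool, and a preallocated reserve is tapped only for allocations flagged
as VM-critical. A toy userspace model (the names, sizes, and the
vm_critical flag are invented for illustration; the real series hooks
the page and slab allocators):

/* Toy emergency-reserve allocator: a fixed reserve is carved out up
 * front and handed out only when the normal allocator fails AND the
 * request keeps the VM alive (e.g. an skb for a swap-over-network
 * socket). */
#include <stdio.h>
#include <stdlib.h>

#define RESERVE_SIZE (64 * 1024)

static unsigned char reserve_pool[RESERVE_SIZE];
static size_t reserve_used;

static void *emerg_alloc(size_t size, int vm_critical)
{
	void *p = malloc(size);

	if (p || !vm_critical)
		return p;	/* normal path, or non-critical failure */

	/* Bump-allocate from the reserve; a real version would need a
	 * proper allocator and a way to give memory back. */
	if (reserve_used + size > RESERVE_SIZE)
		return NULL;	/* even the reserve is exhausted */
	p = reserve_pool + reserve_used;
	reserve_used += size;
	return p;
}

int main(void)
{
	void *pkt = emerg_alloc(2048, 1);

	printf("critical alloc %s, reserve used: %zu bytes\n",
	       pkt ? "ok" : "failed", reserve_used);
	return 0;
}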
Andrew Morton wrote:
- We expect that the lots-of-dirty-anon-memory-over-swap-over-network
scenario might still cause deadlocks.
I assert that this can be solved by putting swap on local disks. Peter
asserts that this isn't acceptable due to disk unreliability. I point
out that loc
Daniel Phillips wrote:
Andrew Morton wrote:
Daniel Phillips <[EMAIL PROTECTED]> wrote:
What happened to the case where we just fill memory full of dirty file
pages backed by a remote disk?
Processes which are dirtying those pages throttle at
/proc/sys/vm/dirty_ratio% of memory dirty. So it i
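That threshold is a runtime tunable, so the throttling point can be
inspected (or tuned) on a live system, e.g.:

/* Read the dirty-page throttling threshold: writers start to throttle
 * when roughly this percentage of memory is dirty. Equivalent to
 * "sysctl vm.dirty_ratio". */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/dirty_ratio", "r");
	int ratio;

	if (f && fscanf(f, "%d", &ratio) == 1)
		printf("vm.dirty_ratio = %d%%\n", ratio);
	if (f)
		fclose(f);
	return 0;
}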
Herbert Xu wrote:
Rik van Riel <[EMAIL PROTECTED]> wrote:
That should not be any problem, since skb's (including cowed ones)
are short-lived anyway. Allocating a little bit more memory is
fine when we have a guarantee that the memory will be freed again
shortly.
I'm no
Evgeniy Polyakov wrote:
On Sun, Aug 13, 2006 at 05:42:47PM -0700, Daniel Phillips ([EMAIL PROTECTED])
wrote:
As for sk_buff cow break, we need to look at which network paths do it
(netfilter obviously, probably others) and decide whether we just want
to declare that the feature breaks network
David Miller wrote:
From: Peter Zijlstra <[EMAIL PROTECTED]>
Date: Sat, 12 Aug 2006 12:18:07 +0200
65535 sockets * 128 packets * 16384 bytes/packet ≈
2^16 * 2^7 * 2^14 = 2^(16+7+14) = 2^37 = 128G of memory per IP
And systems with a lot of IP numbers are not unthinkable.
TCP restricts the am
Evgeniy Polyakov wrote:
On Sat, Aug 12, 2006 at 10:40:23AM -0400, Rik van Riel ([EMAIL PROTECTED])
wrote:
Evgeniy Polyakov wrote:
On Sat, Aug 12, 2006 at 11:19:49AM +0200, Peter Zijlstra
([EMAIL PROTECTED]) wrote:
As you described above, memory for each packet must be allocated (either
Evgeniy Polyakov wrote:
On Sat, Aug 12, 2006 at 11:19:49AM +0200, Peter Zijlstra ([EMAIL PROTECTED])
wrote:
As you described above, memory for each packet must be allocated (either
from SLAB or from the reserve), so the network needs a special allocator
under OOM conditions, and that allocator should be sepa
Peter Zijlstra wrote:
You say "critical resource isolation", but it is not the case - consider
NFS over UDP - remote side will not stop sending just because receiving
socket code drops data due to OOM, or IPsec or compression, which can
requires reallocation. There is no "critical resource iso
Thomas Graf wrote:
skb->dev is not guaranteed to still point to the "allocating" device
once the skb is freed again, so reserve/unreserve isn't symmetric.
You'd need skb->alloc_dev or something.
There's another consequence of this property of the network
stack.
Every network interface must be
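Thomas's asymmetry can be made concrete with a toy model: charge the
reserve of the device that allocated the buffer, and release against a
remembered alloc_dev, because the buffer's current dev may be rewritten
in flight. (alloc_dev is his hypothetical field, not an existing skb
member; this is illustrative userspace C, not stack code.)

/* buf->dev can change while the buffer is in flight (e.g. forwarding),
 * so uncharging a reserve via buf->dev would credit the wrong device.
 * Remembering the allocating device keeps reserve/unreserve symmetric. */
#include <stdio.h>

struct net_device {
	const char *name;
	int reserve;			/* outstanding reserved bytes */
};

struct buffer {
	struct net_device *dev;		/* current device, may change */
	struct net_device *alloc_dev;	/* device whose reserve we charged */
	int size;
};

static void buf_alloc(struct buffer *b, struct net_device *dev, int size)
{
	dev->reserve += size;		/* charge the allocating device */
	b->dev = b->alloc_dev = dev;
	b->size = size;
}

static void buf_free(struct buffer *b)
{
	b->alloc_dev->reserve -= b->size;	/* symmetric uncharge */
}

int main(void)
{
	struct net_device eth0 = { "eth0", 0 }, eth1 = { "eth1", 0 };
	struct buffer b;

	buf_alloc(&b, &eth0, 1500);
	b.dev = &eth1;		/* buffer redirected to another device */
	buf_free(&b);		/* still credits eth0, not eth1 */

	printf("eth0 reserve=%d eth1 reserve=%d\n",
	       eth0.reserve, eth1.reserve);
	return 0;
}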