On Tue, 2014-02-04 at 10:57 -0800, Eric W. Biederman wrote:
> As I have heard it described one tcp connection per small request,
> and someone goofed and started creating new connections when the server
> was bogged down. But since all of the requests and replies were small I
> don't expect ev
On Tue, 4 Feb 2014, Eric W. Biederman wrote:
> My gut feel says if there is a code path that has __GFP_NOWARN and
> because of PAGE_ALLOC_COSTLY_ORDER we loop forever then there is
> something fishy going on.
>
The __GFP_NOWARN without __GFP_NORETRY in alloc_fdmem() is pointless
because we alre
Eric Dumazet writes:
> On Tue, 2014-02-04 at 09:22 -0800, Eric W. Biederman wrote:
>
>> The two code paths below certainly look good candidates for having
>> __GFP_NORETRY added to them. The same issues I ran into with
>> alloc_fdmem are likely to show up there as well.
>
> Yes, this is what I thought : a write into TCP socket
Cong Wang writes:
> On Mon, Feb 3, 2014 at 9:26 PM, Eric W. Biederman
> wrote:
>> diff --git a/fs/file.c b/fs/file.c
>> index 771578b33fb6..db25c2bdfe46 100644
>> --- a/fs/file.c
>> +++ b/fs/file.c
>> @@ -34,7 +34,7 @@ static void *alloc_fdmem(size_t size)
>> * vmalloc() if the allocat
On Tue, 2014-02-04 at 09:22 -0800, Eric W. Biederman wrote:
> The two code paths below certainly look good candidates for having
> __GFP_NORETRY added to them. The same issues I ran into with
> alloc_fdmem are likely to show up there as well.
Yes, this is what I thought : a write into TCP socket
On Mon, Feb 3, 2014 at 9:26 PM, Eric W. Biederman wrote:
> diff --git a/fs/file.c b/fs/file.c
> index 771578b33fb6..db25c2bdfe46 100644
> --- a/fs/file.c
> +++ b/fs/file.c
> @@ -34,7 +34,7 @@ static void *alloc_fdmem(size_t size)
> * vmalloc() if the allocation size will be considered "la
Eric Dumazet writes:
> On Mon, 2014-02-03 at 21:26 -0800, Eric W. Biederman wrote:
>> Recently due to a spike in connections per second memcached on 3
>> separate boxes triggered the OOM killer from accept. At the time the
>> OOM killer was triggered there was 4GB out of 36GB free in zone 1. The
On Mon, 2014-02-03 at 21:26 -0800, Eric W. Biederman wrote:
> Recently due to a spike in connections per second memcached on 3
> separate boxes triggered the OOM killer from accept. At the time the
> OOM killer was triggered there was 4GB out of 36GB free in zone 1. The
> problem was that alloc_fdtable was allocating an order 3 page (32KiB) to
> hold a bitmap, and
Recently due to a spike in connections per second memcached on 3
separate boxes triggered the OOM killer from accept. At the time the
OOM killer was triggered there was 4GB out of 36GB free in zone 1. The
problem was that alloc_fdtable was allocating an order 3 page (32KiB) to
hold a bitmap, and
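The diff previews above are truncated, but a plausible reconstruction of the change under discussion is adding __GFP_NORETRY alongside __GFP_NOWARN in alloc_fdmem(), so the kmalloc() attempt fails fast under memory pressure and falls through to vmalloc() instead of looping in reclaim or triggering the OOM killer. This is a sketch based on the truncated hunks and the surrounding comments quoted in the thread, not the verbatim patch:

```c
/* Sketch of the discussed fs/file.c change, reconstructed from the
 * truncated diff previews above; details are an assumption.
 */
static void *alloc_fdmem(size_t size)
{
	/*
	 * Very large allocations can stress page reclaim, so fall back to
	 * vmalloc() if the allocation size will be considered "large"
	 * by the VM.
	 */
	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
		/* Opportunistic attempt: __GFP_NORETRY makes the allocator
		 * give up quickly instead of retrying at costly order, and
		 * __GFP_NOWARN suppresses the failure warning since we have
		 * a fallback.
		 */
		void *data = kmalloc(size,
				     GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
		if (data != NULL)
			return data;
	}
	return vmalloc(size);
}
```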