On Thu, 17 Feb 2005, Parag Warudkar wrote:
>
> A question - is it safe to assume it is a kmalloc-based leak? (I am thinking
> of tracking it down by using kprobes to insert a probe into __kmalloc and
> record the stack to see what is causing so many allocations.)
It's definitely kmalloc-based,
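A minimal sketch of the probe Parag describes might look like the module
below. Untested; note the .symbol_name field is a convenience from later
2.6 kernels -- on a 2005-era tree you would resolve the address of
__kmalloc by hand and set kp.addr instead.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	.symbol_name = "__kmalloc",	/* fire on every kmalloc slab allocation */
};

/* Runs just before __kmalloc(); dump the call chain so the allocation
 * site shows up in the kernel log. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	dump_stack();
	return 0;
}

static int __init kmalloc_probe_init(void)
{
	kp.pre_handler = handler_pre;
	return register_kprobe(&kp);
}

static void __exit kmalloc_probe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kmalloc_probe_init);
module_exit(kmalloc_probe_exit);
MODULE_LICENSE("GPL");

dump_stack() on every allocation is extremely noisy, so in practice you
would want to rate-limit it or filter on the allocation size.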
On Wednesday 16 February 2005 10:48 pm, Horst von Brand wrote:
> Does x86_64 use up a (freeable) register for the frame pointer or not?
> I.e., does -fomit-frame-pointer have any effect on the generated code?
{Took Linus out of the loop as he probably isn't interested}
The generated code is
On Wednesday 16 February 2005 06:52 pm, Andrew Morton wrote:
> So it's probably an ndiswrapper bug?
Andrew,
It looks like it is a kernel bug triggered by NdisWrapper. Without
NdisWrapper, and with just 8139too plus some light network activity the
size-64 grew from ~ 1100 to 4500 overnight. Is
Andrew Morton <[EMAIL PROTECTED]> said:
> Parag Warudkar <[EMAIL PROTECTED]> wrote:
[...]
> > Is there a reason X86_64 doesn't have CONFIG_FRAME_POINTER anywhere in
> > the .config?
> No good reason, I suspect.
Does x86_64 use up a (freeable) register for the frame pointer or not?
I.e., does
On Wednesday 16 February 2005 06:51 pm, Andrew Morton wrote:
> 81002fe8 is the address of the slab object. 08a8 is
> supposed to be the caller's text address. It appears that
> __builtin_return_address(0) is returning junk. Perhaps due to
> -fomit-frame-pointer.
I tried manually
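For reference, the mechanism under discussion: the slab debug code records
its caller with __builtin_return_address(0). A user-space sketch of the
same idiom -- level 0 reads the saved return address directly, while
levels greater than 0 walk frame pointers, which is what
-fomit-frame-pointer breaks:

#include <stdio.h>

/* noinline, so the call actually pushes a return address */
static void __attribute__((noinline)) record_caller(void)
{
	printf("called from %p\n", __builtin_return_address(0));
}

int main(void)
{
	record_caller();
	return 0;
}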
Parag Warudkar <[EMAIL PROTECTED]> wrote:
>
> On Wednesday 16 February 2005 12:12 am, Andrew Morton wrote:
> > Plenty of moisture there.
> >
> > Could you please use this patch? Make sure that you enable
> > CONFIG_FRAME_POINTER (might not be needed for __builtin_return_address(0),
> > but let's be sure). Also enable CONFIG_DEBUG_SLAB.
Parag Warudkar <[EMAIL PROTECTED]> wrote:
>
> On Wednesday 16 February 2005 12:12 am, Andrew Morton wrote:
> > echo "size-4096 0 0 0" > /proc/slabinfo
>
> Is there a reason X86_64 doesn't have CONFIG_FRAME_POINTER anywhere in
> the .config?
No good reason, I suspect.
> I tried -rc4 with Manfred's patch and
On Wednesday 16 February 2005 12:12 am, Andrew Morton wrote:
> Plenty of moisture there.
>
> Could you please use this patch? Make sure that you enable
> CONFIG_FRAME_POINTER (might not be needed for __builtin_return_address(0),
> but let's be sure). Also enable CONFIG_DEBUG_SLAB.
Will try that out.
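For anyone reproducing this, the two options Andrew asks for, as .config
lines (plus CONFIG_DEBUG_KERNEL, which CONFIG_DEBUG_SLAB depends on, if I
remember the Kconfig correctly):

CONFIG_DEBUG_KERNEL=y
CONFIG_FRAME_POINTER=y
CONFIG_DEBUG_SLAB=y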
I am running -rc3 on my AMD64 laptop and I noticed it becomes sluggish after
use mainly due to growing swap use. It has 768M of RAM and a Gig of swap.
After following this thread, I started monitoring /proc/slabinfo. It seems
size-64 is continuously growing and doing a compile run seems to make
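A small watcher along the lines of what Parag is doing by hand -- just a
sketch; it echoes the raw size-64 row of /proc/slabinfo once a minute,
since the exact field layout varies between kernels:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char line[256];
	FILE *f;

	for (;;) {
		f = fopen("/proc/slabinfo", "r");
		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "size-64 ", 8))	/* skips size-64(DMA) */
				fputs(line, stdout);
		fclose(f);
		sleep(60);
	}
}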
On Mon, Feb 07, 2005 at 07:38:12AM -0800, Linus Torvalds wrote:
>
> Whee. You've got 5 _million_ bio's "active". Which account for about 750MB
> of your 860MB of slab usage.
Same situation here, at different rates on two different platforms,
both running the same kernel build. Both show steadily
Jan Kasprzak wrote:
: I think I have been running 2.6.10-rc3 before. I've copied
: the fs/bio.c from 2.6.10-rc3 to my 2.6.11-rc2 sources and booted the
: resulting kernel. I hope it will not eat my filesystems :-) I will send
: my /proc/slabinfo in a few days.
Hmm, after 3h35min of
[EMAIL PROTECTED] wrote:
: My guess would be the clone change, if raid was not leaking before. I
: cannot look up any patches at the moment, as I'm still at the hospital
: taking care of my newborn baby and wife :)
Congratulations!
: But try and reverse the patches to fs/bio.c that
Linus Torvalds wrote:
: Jan - can you give Jens a bit of an idea of what drivers and/or schedulers
: you're using?
I have a Tyan S2882 dual Opteron, network is on-board tg3,
there are 8 P-ATA HDDs hooked on 3ware 7506-8 controller (no HW RAID
there, but the drives are partitioned and
On Mon, 7 Feb 2005, Jan Kasprzak wrote:
>
> The server has been running 2.6.11-rc2 + patch to fs/pipe.c
> for the last 8 days.
>
> # cat /proc/meminfo
> MemTotal:  4045168 kB
> Cached:    2861648 kB
> LowFree:     59396 kB
> Mapped:     206540 kB
> Slab:       861176 kB
Ok, pretty much
On Mon, Feb 07, 2005 at 12:00:30PM +0100, Jan Kasprzak wrote:
> Well, with Linus' patch to fs/pipe.c the situation seems to
> improve a bit, but some leak is still there (look at the "monthly" graph
> at the above URL). The server has been running 2.6.11-rc2 + patch to fs/pipe.c
> for the last 8 days.
: I've been running 2.6.11-rc1 on my dual opteron Fedora Core 3 box for a week
: now, and I think there is a memory leak somewhere. I am measuring the
: size of active and inactive pages (from /proc/meminfo), and it seems
: that the sum of (active+inactive) pages is decreasing. Please
: take
On Wed, 2 Feb 2005, Dave Hansen wrote:
>
> Strangely enough, it seems to be one single, persistent page.
Ok. Almost certainly not a leak.
It's most likely the FIFO that "init" opens (/dev/initctl). FIFOs use the
pipe code too.
If you don't want unreclaimable highmem pages, then I suspect
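A trivial user-space illustration of that point -- a FIFO is backed by the
same pipe machinery, so an open FIFO with unread data pins a pipe buffer
page for as long as it stays open. The path below is made up:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/tmp/demo_fifo";	/* hypothetical path */
	int rfd, wfd;

	mkfifo(path, 0600);
	rfd = open(path, O_RDONLY | O_NONBLOCK);
	wfd = open(path, O_WRONLY | O_NONBLOCK);

	write(wfd, "x", 1);	/* one unread byte keeps a pipe page allocated */
	pause();		/* page stays pinned while we sleep, like init's /dev/initctl */

	close(rfd);
	close(wfd);
	unlink(path);
	return 0;
}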
On Wed, 2005-02-02 at 10:27 -0800, Linus Torvalds wrote:
> How many of these pages do you see? It's normal for a single pipe to be
> associated with up to 16 pages (although that would only happen if there
> is no reader or a slow reader, which is obviously not very common).
Strangely enough, it
On Wed, 2 Feb 2005, Dave Hansen wrote:
>
> In any case, I'm running a horribly hacked up kernel, but this is
> certainly a new problem, and not one that I've run into before. Here's
> output from the new CONFIG_PAGE_OWNER code:
Hmm.. Everything looks fine. One new thing about the pipe code is
I think there's still something funky going on in the pipe code, at
least in 2.6.11-rc2-mm2, which does contain the misordered __free_page()
fix in pipe.c. I'm noticing any leak pretty easily because I'm
attempting memory removal of highmem areas, and these apparently leaked
pipe pages the only
On Wed, 2 Feb 2005, Lennert Van Alboom wrote:
>
> I applied the patch and it works like a charm. As a kinky side effect: before
> this patch, using a compiled-in vesa or vga16 framebuffer worked with the
> proprietary nvidia driver, whereas now tty1-6 are corrupt when not using
> 80x25.
Positive, I only applied this single two-line change. I'm not capable of
messing with kernel code myself so I prefer not to. Probably just a lucky
shot that the vesa didn't go nuts with nvidia before... O well, with a bit
more o'those pharmaceutical drugs even this 80x25 doesn't look too bad.
I applied the patch and it works like a charm. As a kinky side effect: before
this patch, using a compiled-in vesa or vga16 framebuffer worked with the
proprietary nvidia driver, whereas now tty1-6 are corrupt when not using
80x25. Strangeness :)
Lennert
On Monday 24 January 2005 23:35, Linus
Hi,

From: YOSHIFUJI Hideaki / [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Mon, 31 Jan 2005 14:16:36 +0900 (JST)

> In article <[EMAIL PROTECTED]> (at Mon, 31 Jan 2005 06:00:40 +0100), Patrick
> McHardy <[EMAIL PROTECTED]> says:
>
> |We don't need this for IPv6 yet.
On Sun, Jan 30, 2005 at 09:11:50PM -0800, David S. Miller wrote:
> On Mon, 31 Jan 2005 06:00:40 +0100
> Patrick McHardy <[EMAIL PROTECTED]> wrote:
>
> > We don't need this for IPv6 yet. Once we get nf_conntrack in we
> > might need this, but its IPv6 fragment handling is different from
> >
On Mon, 31 Jan 2005 06:00:40 +0100
Patrick McHardy <[EMAIL PROTECTED]> wrote:
> We don't need this for IPv6 yet. Once we get nf_conntrack in we
> might need this, but its IPv6 fragment handling is different from
> ip_conntrack, I need to check first.
Right, ipv6 netfilter cannot create this
In article <[EMAIL PROTECTED]> (at Mon, 31 Jan 2005 06:00:40 +0100), Patrick
McHardy <[EMAIL PROTECTED]> says:
|We don't need this for IPv6 yet. Once we get nf_conntrack in we
|might need this, but its IPv6 fragment handling is different from
|ip_conntrack, I need to check first.
Ok. It would
In article <[EMAIL PROTECTED]> (at Mon, 31 Jan 2005 15:11:32 +1100), Herbert Xu
<[EMAIL PROTECTED]> says:
> Patrick McHardy <[EMAIL PROTECTED]> wrote:
> >
> > Ok, final decision: you are right :) conntrack also defragments locally
> > generated packets before they hit ip_fragment. In this case
Patrick McHardy <[EMAIL PROTECTED]> wrote:
>
> Ok, final decision: you are right :) conntrack also defragments locally
> generated packets before they hit ip_fragment. In this case the fragments
> have skb->dst set.
Well caught. The same thing is needed for IPv6, right?
On Sun, 30 Jan 2005 18:58:27 +0100
Patrick McHardy <[EMAIL PROTECTED]> wrote:
> Ok, final decision: you are right :) conntrack also defragments locally
> generated packets before they hit ip_fragment. In this case the fragments
> have skb->dst set.
It's amazing how many bugs exist due to the
On Sun, Jan 30, 2005 at 06:58:27PM +0100, Patrick McHardy wrote:
> Patrick McHardy wrote:
> >> Russell King wrote:
> >>> I don't know if the code is using fragment lists in ip_fragment(), but
> >>> on reading the code a question comes to mind: if we have a list of
> >>> fragments, does each
On Sun, Jan 30, 2005 at 06:01:46PM +, Russell King wrote:
> > OTOH, if conntrack isn't loaded forwarded packets are never defragmented,
> > so frag_list should be empty. So probably false alarm, sorry.
>
> I've just checked Phil's mails - both Phil and myself are using
> netfilter on the
On Sun, Jan 30, 2005 at 06:26:29PM +0100, Patrick McHardy wrote:
> Patrick McHardy wrote:
>
> > Russell King wrote:
> >
> >> I don't know if the code is using fragment lists in ip_fragment(), but
> >> on reading the code a question comes to mind: if we have a list of
> >> fragments, does each
Patrick McHardy wrote:
Russell King wrote:
I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does its job? If
Russell King wrote:
I don't know if the code is using fragment lists in ip_fragment(), but
on reading the code a question comes to mind: if we have a list of
fragments, does each fragment skb have a valid (and refcounted) dst
pointer before ip_fragment() does its job? If yes, then isn't the
On Sun, Jan 30, 2005 at 03:34:49PM +, Russell King wrote:
> I think the case against the IPv4 fragmentation code is mounting.
> However, without knowing what the expected conditions for this code,
> (eg, are skbs on the fraglist supposed to have NULL skb->dst?) I'm
> unable to progress this
On Sun, Jan 30, 2005 at 01:23:43PM +, Russell King wrote:
> Anyway, I've produced some code which keeps a record of the __refcnt
> increments and decrements, and I think it's produced some interesting
> results. Essentially, I'm seeing the odd dst entry with a __refcnt of
> 14000 or so (which
On Fri, Jan 28, 2005 at 08:58:59AM +, Russell King wrote:
> On Thu, Jan 27, 2005 at 04:34:44PM -0800, David S. Miller wrote:
> > On Fri, 28 Jan 2005 00:17:01 +
> > Russell King <[EMAIL PROTECTED]> wrote:
> > > Yes. Someone suggested this evening that there may have been a recent
> > >
On Thu, Jan 27, 2005 at 12:40:12PM -0800, Phil Oester wrote:
> Vanilla 2.6.10, though I've been seeing these problems since 2.6.8 or
> earlier.
Right. For me:
- 2.6.9-rc3 (installed 8th Oct) died with dst cache overflow on 29th November
- 2.6.10-rc2 (booted 29th Nov) died with the same on 19th
On Thu, Jan 27, 2005 at 04:34:44PM -0800, David S. Miller wrote:
> On Fri, 28 Jan 2005 00:17:01 +
> Russell King <[EMAIL PROTECTED]> wrote:
> > Yes. Someone suggested this evening that there may have been a recent
> > change to do with some IPv6 refcounting which may have caused this
> >
On Fri, Jan 28, 2005 at 12:17:01AM +, Russell King wrote:
> On Thu, Jan 27, 2005 at 12:33:26PM -0800, David S. Miller wrote:
> > So they won't be listed in /proc/net/rt_cache (since they've been
> > removed from the lookup table) but they will be accounted for in
> > /proc/net/stat/rt_cache
On Fri, 28 Jan 2005 00:17:01 +
Russell King <[EMAIL PROTECTED]> wrote:
> Yes. Someone suggested this evening that there may have been a recent
> change to do with some IPv6 refcounting which may have caused this
> problem. Is that something you can confirm?
Yep, it would be this change
On Thu, Jan 27, 2005 at 12:33:26PM -0800, David S. Miller wrote:
> So they won't be listed in /proc/net/rt_cache (since they've been
> removed from the lookup table) but they will be accounted for in
> /proc/net/stat/rt_cache until the final release is done on the
> routing cache object and it can
On Thu, Jan 27, 2005 at 07:25:04PM +, Russell King wrote:
> Can you provide some details, eg kernel configuration, loaded modules
> and a brief overview of any netfilter modules you may be using.
>
> Maybe we can work out what's common between our setups.
Vanilla 2.6.10, though I've been seeing these problems since 2.6.8 or earlier.
On Thu, 27 Jan 2005 16:49:18 +
Russell King <[EMAIL PROTECTED]> wrote:
> notice how /proc/net/stat/rt_cache says there's 1336 entries in the
> route cache. _Where_ are they? They're not there according to
> /proc/net/rt_cache.
When the route cache is flushed, that kills a reference to each
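A quick way to see the difference David describes -- compare the total the
kernel accounts for against the entries actually listed. This assumes the
usual /proc/net/stat/rt_cache format (a header line, then one hex row per
CPU, where the first column is the global entry count):

#include <stdio.h>

int main(void)
{
	FILE *f;
	char line[512];
	unsigned long accounted = 0;
	long listed = -1;	/* -1 skips the header line */

	f = fopen("/proc/net/stat/rt_cache", "r");
	if (!f)
		return 1;
	fgets(line, sizeof(line), f);		/* header */
	if (fgets(line, sizeof(line), f))	/* any CPU row; first column is the total */
		sscanf(line, "%lx", &accounted);
	fclose(f);

	f = fopen("/proc/net/rt_cache", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		listed++;
	fclose(f);

	printf("accounted: %lu  listed: %ld  (difference = held but unhashed)\n",
	       accounted, listed);
	return 0;
}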
On Thu, Jan 27, 2005 at 10:37:45AM -0800, Phil Oester wrote:
> On Thu, Jan 27, 2005 at 04:49:18PM +, Russell King wrote:
> > so obviously the GC does appear to be working - as can be seen from the
> > number of entries in /proc/net/rt_cache. However, the number of objects
> > in the slab
On Thu, Jan 27, 2005 at 04:49:18PM +, Russell King wrote:
> so obviously the GC does appear to be working - as can be seen from the
> number of entries in /proc/net/rt_cache. However, the number of objects
> in the slab cache does grow day on day. About 4 days ago, it was only
> about 600
Oh. Linux version 2.6.11-rc2 was used.
Andrew Morton writes:
> Russell King <[EMAIL PROTECTED]> wrote:
> > ip_dst_cache         1292   1485    256   15    1
> I guess we should find a way to make it happen faster.
Here is a route DoS attack. Pure routing, no NAT, no filter.
Start
=
ip_dst_cache            5     30    256   15    1
Russell King <[EMAIL PROTECTED]> wrote:
>
> This morning's magic numbers are:
>
> 3
> ip_dst_cache         1292   1485    256   15    1
I just did a q-n-d test here: send one UDP frame to 1.1.1.1 up to
1.1.255.255. The ip_dst_cache grew to ~15k entries and grew no further.
It's now gradually
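Andrew's test is easy to reconstruct; something like this (one UDP
datagram per destination in 1.1.0.0/16, port arbitrary) forces one dst
cache entry per address:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in sa;
	char addr[16];
	unsigned a, b;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;
	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(9);		/* discard port; arbitrary */

	for (a = 1; a <= 255; a++)
		for (b = 1; b <= 255; b++) {
			snprintf(addr, sizeof(addr), "1.1.%u.%u", a, b);
			inet_pton(AF_INET, addr, &sa.sin_addr);
			/* each new destination forces a fresh dst cache entry */
			sendto(fd, "x", 1, 0, (struct sockaddr *)&sa, sizeof(sa));
		}
	close(fd);
	return 0;
}

Watching ip_dst_cache in /proc/slabinfo while this runs should show the
growth, and some minutes later the GC bringing it back down.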
On Tue, Jan 25, 2005 at 07:32:07PM +, Russell King wrote:
> On Mon, Jan 24, 2005 at 11:48:53AM +, Russell King wrote:
> > On Sun, Jan 23, 2005 at 08:03:15PM +, Russell King wrote:
> > > I think I may be seeing something odd here, maybe a possible memory leak.
> > > The only problem I have is