Herbert Xu wrote:
> On Sat, Dec 15, 2007 at 11:08:58AM +0100, Tobias Diedrich wrote:
> >
> > Hmm, how do I look for that, if netstat doesn't look suspicious?
>
> Thanks. What does /proc/net/sockstat show?
[EMAIL PROTECTED]:~$ cat /proc/net/sockstat
sockets: used 143
TCP: inuse 16 orphan 0 tw 4 al
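The counters in /proc/net/sockstat can be sampled programmatically; a minimal sketch (the fallback sample line mirrors the excerpt above and is only there so the parsing logic runs even where that procfs file is absent):

```shell
#!/bin/sh
# Sketch: pull one counter out of /proc/net/sockstat.  The field layout
# is "PROTO: key val key val ..." as in the excerpt above.
sockstat_field() {  # usage: sockstat_field TCP inuse
    line=$(grep "^$1:" /proc/net/sockstat 2>/dev/null)
    # Hypothetical fallback so the parsing is demonstrable anywhere:
    [ -n "$line" ] || line="$1: inuse 16 orphan 0 tw 4 alloc 20 mem 3"
    echo "$line" | awk -v k="$2" \
        '{for (i = 2; i < NF; i += 2) if ($i == k) print $(i + 1)}'
}

sockstat_field TCP inuse   # prints a single number
```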
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu <[EMAIL PROTECTED]>
Tobias Diedrich <[EMAIL PROTECTED]> wrote:
>
> Meanwhile I added a slab statistic rrd script. Nothing obvious to
> see on ari or yumi yet, but on oni (which after all is the most
> affected by this) I can see 'size_2048' and 'TCPv6' growing
> steadily along with the route cache size (Presumably 'ip
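The slab statistic collector mentioned above could be sketched roughly like this; cache names such as 'TCPv6', 'size-2048' (spelled 'size_2048' in some reports), and 'ip_dst_cache' vary by kernel version, and /proc/slabinfo is often readable by root only, hence the 0 fallback:

```shell
#!/bin/sh
# Sketch of a per-cache slab sampler: print the active-object count
# (second field of /proc/slabinfo) for a few caches of interest.
slab_active() {  # usage: slab_active CACHE
    awk -v c="$1" '$1 == c {print $2; found = 1} END {if (!found) print 0}' \
        /proc/slabinfo 2>/dev/null || echo 0
}

for cache in TCPv6 size-2048 ip_dst_cache; do
    echo "$cache $(slab_active "$cache")"
done
```

Feeding these numbers into rrdtool once a minute would reproduce the kind of graphs described.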
Hello,

I suspect I'm seeing a slow dst cache leakage on one of my servers.
The server in question (oni) regularly needs to be rebooted, because
it loses network connectivity. However, netconsole and syslog show that
the machine is still running and the kernel complains about "dst cache
overflow".

I have since installed a monitoring script, which stores the output of
both "ip route ls cache | fgrep cache | wc -l" and the 'en
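A monitoring script of the kind described could look like this minimal sketch; the log path and the RTCACHE_LOG variable are made up for illustration, and "ip route ls cache" only returns entries on kernels that still have the IPv4 route cache (it was removed in 3.6), so the count simply stays 0 elsewhere:

```shell
#!/bin/sh
# Log a timestamped route-cache entry count, one line per sample.
log=${RTCACHE_LOG:-/tmp/rtcache.log}   # hypothetical log location
n=$(ip route ls cache 2>/dev/null | fgrep -c cache) || n=0
echo "$(date '+%F %T') rtcache=$n" >> "$log"
tail -n 1 "$log"
```

Run from cron, this yields a time series that makes a slow leak visible as a monotonically growing count.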
[EMAIL PROTECTED] wrote:
grep . /proc/sys/net/ipv4/route/*
/proc/sys/net/ipv4/route/error_burst:5000
/proc/sys/net/ipv4/route/error_cost:1000
grep: /proc/sys/net/ipv4/route/flush: Invalid argument
/proc/sys/net/ipv4/route/gc_elasticity:8
/proc/sys/net/ipv4/route/gc_interval:60
/proc/sys/net/ipv
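The "Invalid argument" that grep reports for 'flush' above is expected: that sysctl is write-only. A sketch that dumps the readable tunables while skipping write-only entries (the flush write shown in the comment requires root and drops the whole cache):

```shell
#!/bin/sh
# To flush the route cache by hand (root only):
#   echo 1 > /proc/sys/net/ipv4/route/flush
# Read the remaining tunables, skipping write-only entries:
route_sysctls() {
    for f in /proc/sys/net/ipv4/route/*; do
        [ -r "$f" ] && echo "${f##*/}=$(cat "$f" 2>/dev/null)"
    done
    return 0
}

route_sysctls
```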
is used at
about 0.4% and an additional 12% by ices when encoding mp3 on demand, and
the process ksoftirqd/0 randomly starts to use 100% of CPU 0 in an
otherwise normal situation. One time, when ksoftirqd/0 went crazy, I
noticed dst cache overflow messages in syslog; there are more of these
lines in the logs, about 5 times in a 10-day period.

There was a problem fixed in the handling of fragments which caused dst
cache overflow in the 2.6.11-rc series. Are you still seeing dst cache
overflow
> migration/0 and event/0, and in syslog I found these lines:
>
> Mar 20 22:21:09 buakaw kernel: printk: 5543 messages suppressed.
> Mar 20 22:21:09 buakaw kernel: dst cache overflow
>
> what can cause this?
Could you please describe the workload? What is the computer doing
On Mon, Feb 21, 2005 at 02:21:50PM +0100, Piotr Kowalczyk wrote:
Hi all,
I'm suffering from destination cache overflow on a router running kernel
2.6.10. This wouldn't be anything special if not for the different numbers
reported by slabinfo and the real state. It's worth mentioning that
there were no problems with the old 2.4.x kernel here.
[EMAIL PROTECTED]:~$ cat /proc/slabinf
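One way to observe the mismatch described above is to compare the ip_dst_cache slab's active-object count against what the route cache dump actually shows. A sketch, with both probes degrading gracefully where the file or command is absent (modern kernels have neither):

```shell
#!/bin/sh
# Compare the slab allocator's view of dst entries with the number of
# entries the route cache dump reports.  "?" / 0 mean "unavailable".
slab=$(awk '$1 == "ip_dst_cache" {print $2}' /proc/slabinfo 2>/dev/null)
real=$(ip route ls cache 2>/dev/null | fgrep -c cache)
echo "slab=${slab:-?} dumped=${real:-0}"
```

A slab count that keeps climbing while the dump stays small is exactly the "different numbers" symptom reported here.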
On Thu, Oct 12, 2000 at 12:39:09PM -0400, Ed Taranto wrote:
> Anyone out there have any further information or insight into this?
You can check the routes in the rtcache by using route --cache. Maybe
you can see a pattern.
>
> One thing that concerns me is that rt_garbage_collect will only ma
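Herbert's suggestion to look for a pattern in the rtcache can be sketched as a destination histogram; "route --cache" was the net-tools way, "ip route ls cache" the iproute2 way, and the output is simply empty on kernels without a route cache:

```shell
#!/bin/sh
# Histogram the destinations currently in the route cache and show the
# most frequent ones; a few dominating hosts would be the "pattern".
top_destinations() {
    # Entry lines are unindented; continuation lines (mtu, etc.) are not.
    ip route ls cache 2>/dev/null | awk '/^[^ ]/ {print $1}' \
        | sort | uniq -c | sort -rn | head
}

top_destinations
```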
I'm running kernel version 2.2.14 as a firewall with moderately high load.
After a few hours, I start getting "dst cache overflow" log messages. That
apparently comes from rt_garbage_collect in route.c
I have seen a few discussions about this in the archives, but nothing
The logs below appear in the system logger:
Oct 3 12:14:38 onion kernel: dst cache overflow
Oct 3 12:14:38 onion last message repeated 9 times
Oct 3 12:14:43 onion kernel: NET: 486 messages suppressed.
Oct 3 12:14:43 onion kernel: dst cache overflow
Oct 3 12:14:48 onion kernel: RPC: sendmsg
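Because the kernel rate-limits the message ("NET: 486 messages suppressed." above), counting only the visible "dst cache overflow" lines understates how often it fired; a sketch that counts the suppression notices too (the log paths are guesses, adjust for the local syslog layout):

```shell
#!/bin/sh
# Count both the overflow lines and the rate-limit notices across
# whichever of the given log files exist.
count_overflows() {
    grep -hcE 'dst cache overflow|messages suppressed' "$@" 2>/dev/null \
        | awk '{s += $1} END {print s + 0}'
}

count_overflows /var/log/syslog /var/log/messages
```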
I have scoured the list archives for several hours seeing several
references over the past year about instances of rampant "dst cache
overflow" messages. There are posts from January and June relating
difficulties that individuals have had with these messages, including
replies in Ja