> On Thu, 27 May 1999, Kevin Day wrote:
> 
> > I've got two systems that panic about every 48 hours, saying they're out of
> > mbufs. I've tried raising maxusers (it's at 128 now, but I've gone up to
> > 256 and still seen the same thing).
> >
> > I believe it's a leak, since it's pretty consistent how long it will stay up
> > before it runs out.
> > 
> > I've tried raising NMBCLUSTERS, but this just seems to prolong it before it
> > finally panics.
> 
> How high do you have it set?
> 

I tried doubling whatever value maxusers at 256 gave it (I can get the exact
number later). Right now I'm running with no NMBCLUSTERS setting, just
maxusers at 128.
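For reference, in this era the default cluster cap is derived from maxusers; the formula below is from memory of sys/conf/param.c and worth verifying against the actual source tree:

```shell
# Default NMBCLUSTERS here is roughly 512 + maxusers * 16 (from memory
# of param.c -- verify against your tree). With maxusers=128 that gives
# the 2560 "max" that netstat -m reports on this box:
echo $((512 + 128 * 16))
# To raise it explicitly instead, a kernel config line would look like:
#   options "NMBCLUSTERS=8192"
```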

> You might want to collect some netstat -m stats as time goes on.  In
> addition to being easier to read, it may give you some hints as to how
> high you want to go with mbuf clusters.

I added a cron job to run netstat -m every half hour. Right now, after 10
hours of uptime:

494/2624 mbufs in use:
        160 mbufs allocated to data
        334 mbufs allocated to packet headers
130/1686/2560 mbuf clusters in use (current/peak/max)
3700 Kbytes allocated to network (8% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
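To make the half-hourly samples easier to compare, the cluster counts can be pulled out of the accumulated log into one line per sample. A minimal sketch (the log filename is an assumption; the line format matches the output above):

```shell
# Extract "current peak max" mbuf cluster counts from accumulated
# "netstat -m" samples. A peak that climbs steadily toward max between
# samples, while "requests denied" stays 0, is what a leak would look
# like: clusters being held, not a burst of demand.
awk '/mbuf clusters in use/ {
    split($1, n, "/");          # "130/1686/2560" -> current, peak, max
    print n[1], n[2], n[3];
}' netstat-m.log
```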

> > The only unusual thing about these two machines are that they're very heavy
> > NFS client users.
> 
> That might do it by itself irrespective of any bugs.
> 

This didn't happen in 2.2.8 or 3.1, so I'm trying to figure out what's
causing it. :)

Here's a typical panic:

IdlePTD 3096576
initial pcb at 27ea40
panicstr: Out of mbuf clusters
panic messages:
---
panic: Out of mbuf clusters

syncing disks... panic: Out of mbuf clusters

dumping to dev 20001, offset 467137
dump 255 254 253 252 251 250 249 248 247 246 245 244 243 242 241 240 239 238
237 236 235 234 233 232 231 230 229 228 227 226 225 224 223 222 221 220 219
218 217 216 215 214 213 212 211 210 209 208 207 206 205 204 203 202 201 200
199 198 197 196 195 194 193 192 191 190 189 188 187 186 185 184 183 182 181
180 179 178 177 176 175 174 173 172 171 170 169 168 167 166 165 164 163 162
161 160 159 158 157 156 155 154 153 152 151 150 149 148 147 146 145 144 143
142 141 140 139 138 137 136 135 134 133 132 131 130 129 128 127 126 125 124
123 122 121 120 119 118 117 116 115 114 113 112 111 110 109 108 107 106 105
104 103 102 101 100 99 98 97 96 95 94 93 92 91 90 89 88 87 86 85 84 83 82 81
80 79 78 77 76 75 74 73 72 71 70 69 68 67 66 65 64 63 62 61 60 59 58 57 56
55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31
30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3
2 1 
---
#0  boot (howto=260) at ../../kern/kern_shutdown.c:288
288                     dumppcb.pcb_cr3 = rcr3();
(kgdb) bt
#0  boot (howto=260) at ../../kern/kern_shutdown.c:288
#1  0xc0145755 in panic () at ../../kern/kern_shutdown.c:450
#2  0xc015c2ca in m_retry (i=0, t=1) at ../../kern/uipc_mbuf.c:269
#3  0xc01b2c77 in nfsm_reqh (vp=0xcb6bddc0, procid=21, hsiz=68, 
    bposp=0xcb2cecdc) at ../../nfs/nfs_subs.c:599
#4  0xc01c8e13 in nfs_commit (vp=0xcb6bddc0, offset=0, cnt=8192, 
    cred=0xc13b0200, procp=0xc02955a0) at ../../nfs/nfs_vnops.c:2580
#5  0xc01c9620 in nfs_flush (vp=0xcb6bddc0, cred=0xc0a5f900, waitfor=2, 
    p=0xc02955a0, commit=1) at ../../nfs/nfs_vnops.c:2846
#6  0xc01c9389 in nfs_fsync (ap=0xcb2cedfc) at ../../nfs/nfs_vnops.c:2710
#7  0xc01b9489 in nfs_sync (mp=0xc113bc00, waitfor=2, cred=0xc0a5f900, 
    p=0xc02955a0) at vnode_if.h:499
#8  0xc016ceaf in sync (p=0xc02955a0, uap=0x0) at
../../kern/vfs_syscalls.c:543
#9  0xc014535a in boot (howto=256) at ../../kern/kern_shutdown.c:205
#10 0xc0145755 in panic () at ../../kern/kern_shutdown.c:450
#11 0xc015c382 in m_retryhdr (i=0, t=1) at ../../kern/uipc_mbuf.c:297
#12 0xc015de2b in sosend (so=0xc9a55000, addr=0x0, uio=0xcb2cef00, top=0x0, 
    control=0x0, flags=0, p=0xcb2692e0) at ../../kern/uipc_socket.c:499
#13 0xc016093f in sendit (p=0xcb2692e0, s=5, mp=0xcb2cef40, flags=0)
    at ../../kern/uipc_syscalls.c:514
#14 0xc0160a2d in sendto (p=0xcb2692e0, uap=0xcb2cef90)
    at ../../kern/uipc_syscalls.c:564
#15 0xc020ec26 in syscall (frame={tf_fs = 47, tf_es = 47, tf_ds = 47, 
      tf_edi = -1077951456, tf_esi = 0, tf_ebp = -1077951564, 
      tf_isp = -886247452, tf_ebx = 538075232, tf_edx = 682064, tf_ecx = 0, 
      tf_eax = 133, tf_trapno = 7, tf_err = 7, tf_eip = 537941473, tf_cs = 31, 
      tf_eflags = 534, tf_esp = -1077951596, tf_ss = 47})
    at ../../i386/i386/trap.c:1066
#16 0x7 in ?? ()
(kgdb) 
#10 0xc0145755 in panic () at ../../kern/kern_shutdown.c:450
450             boot(bootopt);
(kgdb) 
#11 0xc015c382 in m_retryhdr (i=0, t=1) at ../../kern/uipc_mbuf.c:297
297                             panic("Out of mbuf clusters");
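Since the core is already loaded in kgdb, the kernel's own mbuf counters can be inspected directly; `mbstat` is the statistics struct that `netstat -m` reads. A hypothetical continuation of the session above (field names from memory of sys/mbuf.h, worth verifying against this source tree):

```
(kgdb) print mbstat.m_clusters
(kgdb) print mbstat.m_clfree
(kgdb) print mbstat.m_drops
```

If m_clfree is near zero while m_clusters sits at the configured cap, that would confirm the clusters are all allocated rather than fragmented or miscounted.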


Looks like it panicked, then tried to sync the disks, which caused a second panic.

Looking at the core dump with vmstat -M doesn't really show anything
interesting; FFS Node is using the most memory, and that's only 1.3M.

Is there any further digging anyone would like me to do?


To Unsubscribe: send mail to majord...@freebsd.org
with "unsubscribe freebsd-current" in the body of the message
