Hi,
I've got a crashdump which shows the following top-of-stack:
[ panic: vmem_xalloc(): size == 0 ]
unix:panic+0x1c(0x1204d28, 0x200007b8000, 0x0, 0x18a5428, 0x0, 0x0)
genunix: vmem_xalloc+0x8b0(0x18a5428, 0x0, 0x2000, 0x0, 0x0, 0x0, 0x0)
genunix: vmem_alloc+0x1d4(0x18a5428, 0x0, 0x1)
unix: segkmem_xalloc+0x28(0x18a5428, , 0x0, , 0x0?, 0x1064250, 0x18373a8)
unix: segkmem_alloc_vn+0xc0(0x18a5428, 0x2a1020db010, 0x1, , 0x80000)
genunix: vmem_xalloc+0x5e8(0x30000034000, 0xffffffffffffffff, 0x2000, 0x0, 0x0, 0x0, 0x0)
genunix: vmem_alloc+0x1d4(0x30000034000, 0xffffffffffffffff, 0x1)
genunix: kmem_alloc+0x100(0xffffffffffffffff, 0x1)
[ ... ]
Now, a kmem_alloc() of 0xffffffffffffffff bytes seems an insane thing to attempt (presumably the page-rounded size wraps to zero somewhere on the way down, which would explain the "size == 0" panic), but the kmem_alloc(9F) manpage only says:
void *kmem_alloc(size_t size, int flag);
[ ... ]
DESCRIPTION
[ ... ]
assumed. flag determines whether the caller can sleep for
memory. KM_SLEEP allocations may sleep but are guaranteed to
succeed. KM_NOSLEEP allocations are guaranteed not to sleep
but may fail (return NULL) if no memory is currently avail-
able.
[ ... ]
WARNINGS
[ ... ]
Excessive use of kernel memory is likely to affect overall
system performance. Overcommitment of kernel memory will
cause the system to hang or panic.
So "size_t" is unsigned (and hence UINTMAX_MAX is a possible value for it), and
the use of KM_NOSLEEP should make it fail if there's not enough available,
making it impossible to overcommit.
Hence, from the description of kmem_alloc(9F), I'd deduce that no KM_NOSLEEP
request, no matter how ridiculous the size value, should ever be allowed to
panic/hang the system.
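To illustrate that expectation, here is a minimal sketch of what the manpage reading implies should be safe. This fragment is purely hypothetical (not taken from the driver in question):

#include <sys/types.h>
#include <sys/kmem.h>
#include <sys/cmn_err.h>

/*
 * Hypothetical illustration of the documented contract: a KM_NOSLEEP
 * request is "guaranteed not to sleep but may fail (return NULL)", so
 * even a SIZE_MAX-sized request should simply come back NULL rather
 * than bring the machine down.
 */
static void
probe_worst_case(void)
{
	void *p = kmem_alloc((size_t)-1, KM_NOSLEEP);

	if (p == NULL) {
		cmn_err(CE_NOTE, "worst-case allocation failed gracefully");
		return;
	}
	kmem_free(p, (size_t)-1);
}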
The driver I've found this in actually makes that assumption; it contains
guards that work like:
bufsize = <some externally derived input>;

if (bufsize > threshold) {
	/* large request: probe with KM_NOSLEEP and fail gracefully */
	if ((buf = kmem_alloc(bufsize, KM_NOSLEEP)) == NULL)
		return (ENOMEM);
} else {
	/* small request: safe to block until memory is available */
	buf = kmem_alloc(bufsize, KM_SLEEP);
}
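For completeness, here is roughly how such a guard ends up looking in a self-contained entry point. The function name, the threshold constant, and the assumption that bufsize comes from an ioctl argument are my own invention, not the actual driver:

#include <sys/types.h>
#include <sys/errno.h>
#include <sys/kmem.h>

#define	MYDRV_NOSLEEP_THRESHOLD	(64 * 1024)	/* hypothetical cutoff */

/*
 * Hypothetical helper: bufsize arrives from userland (say, an ioctl
 * argument), so the driver cannot trust it.  Large requests are probed
 * with KM_NOSLEEP, relying on the documented NULL return on failure;
 * small ones are allowed to sleep.
 */
static int
mydrv_get_buf(size_t bufsize, void **bufp)
{
	void *buf;

	if (bufsize > MYDRV_NOSLEEP_THRESHOLD) {
		if ((buf = kmem_alloc(bufsize, KM_NOSLEEP)) == NULL)
			return (ENOMEM);
	} else {
		buf = kmem_alloc(bufsize, KM_SLEEP);
	}

	*bufp = buf;
	return (0);
}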
I would like to avoid having to add

	if (bufsize > physmem)
		return (ENOMEM);

to that driver just to perform yet another guard check. Should
kmem_alloc() be fixed?
Thx,
FrankH.