Hello,

Just curious: what problem do you want to correct? An 8GB coredump is surely a big file, but so is ulimit -s 32768. As you probably know, this ulimit means 32768 x 1024 bytes of stack, and that is exactly the amount shown in the coredump (33554432 = 32768 x 1024).

Regards

Stefan Kell

On Sat, 23 Feb 2008, Alexander Nasonov wrote:

Hi,
If I set a core limit to "unlimited" and a stack limit to 32768,
then run a program with indefinite recursion, the system would
generate 8G coredump file.

Here we go:

$ uname -a
OpenBSD obx1000 4.2 GENERIC#375 i386
$ ulimit -a
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         524288
stack(kbytes)        4096
lockedmem(kbytes)    166296
memory(kbytes)       497556
nofiles(descriptors) 128
processes            64
$ cat -n x.c
    1  void recursive(int i) { recursive(i+1); }
    2  int main() { recursive(0); }
    3
$ gcc x.c -o x
$ ./x
Segmentation fault (core dumped)
$ ls -lsh x.core
230176 -rw-------  1 alnsn  wheel   112M Feb 23 12:35 x.core
$ ulimit -s 32768
$ ./x
(wait 7-8 minutes ...)
Segmentation fault (core dumped)
$ ls -lsh x.core
16809024 -rw-------  1 alnsn  wheel   8.0G Feb 23 12:45 x.core


I wrote a program that shows all core segments written to the core file.

Each line after the header has the following format:
<segment type>   coreseg.c_size @ coreseg.c_addr

nseg=507
text=4096
data=12288
stack=33554432
CORE_CPU        180 @ 0x0
CORE_DATA       12288 @ 0x224f7000
CORE_DATA       4096 @ 0x224fc000
CORE_DATA       135168 @ 0x224fd000
CORE_DATA       4096 @ 0x26f34000
CORE_DATA       8192 @ 0x26f36000
CORE_DATA       4096 @ 0x3c001000
CORE_DATA       4096 @ 0x3c003000
CORE_DATA       4096 @ 0x884fe000
CORE_STACK      991232 @ 0xcdbfe000
CORE_STACK      1056768 @ 0xcdbfe000
CORE_STACK      1122304 @ 0xcdbfe000

... 492 CORE_STACK lines @ 0xcdbfe000 ...

CORE_STACK      33431552 @ 0xcdbfe000
CORE_STACK      33497088 @ 0xcdbfe000
CORE_STACK      33554432 @ 0xcdbfe000


So the first 991232 bytes at 0xcdbfe000 were written to the core file
498 times (once per CORE_STACK segment), the next 65536 bytes at
0xcdbfe000+991232 were written 497 times, and so on.

Analysis of uvm_coredump() in uvm/uvm_unix.c revealed that

1.1  (art    26-Feb-99):	if (start >= (vaddr_t)vm->vm_maxsaddr) {
1.29 (martin 01-Sep-07):	    start = trunc_page(USRSTACK - ptoa(vm->vm_ssize));

which is pretty old code:

annotate -r 1.28
1.3 (mickey  20-Jul-99):    start = trunc_page(USRSTACK - ctob(vm->vm_ssize));

BTW, there is a file size check in coredump() but I don't think
uvm_coredump()'s behavior was taken into account.

The patch below checks the limit as it writes to the file.
It doesn't help in my case because I set the limit to "unlimited", but
it could be useful until a better fix is available.

The patch is for -stable:

Index: uvm/uvm_unix.c
===================================================================
RCS file: /cvs/src/sys/uvm/uvm_unix.c,v
retrieving revision 1.28
diff -u -r1.28 uvm_unix.c
--- uvm/uvm_unix.c      11 Apr 2007 12:51:51 -0000      1.28
+++ uvm/uvm_unix.c      23 Feb 2008 13:41:45 -0000
@@ -190,6 +190,7 @@
        struct coreseg cseg;
        off_t offset;
        int flag, error = 0;
+       rlim_t rlim = p->p_rlimit[RLIMIT_CORE].rlim_cur;

        offset = chdr->c_hdrsize + chdr->c_seghdrsize + chdr->c_cpusize;

@@ -244,6 +245,9 @@
                cseg.c_addr = start;
                cseg.c_size = end - start;

+       if (offset > rlim - chdr->c_seghdrsize)
+                       return (EFBIG);
+
                error = vn_rdwr(UIO_WRITE, vp,
                    (caddr_t)&cseg, chdr->c_seghdrsize,
                    offset, UIO_SYSSPACE,
@@ -256,6 +260,9 @@
                        break;

                offset += chdr->c_seghdrsize;
+               if (rlim < cseg.c_size || offset > rlim - cseg.c_size)
+                       return (EFBIG);
+
                error = vn_rdwr(UIO_WRITE, vp,
                    (caddr_t)(u_long)cseg.c_addr, (int)cseg.c_size,
                    offset, UIO_USERSPACE,

--
Alexander Nasonov
