elf_core_dump() writes struct elfhdr, then a bunch of
elf_phdr.  It sums the sizes of data being written in 'size',
checking it against cprm->limit as it goes.
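
        The header side of the accounting follows a simple pattern -
roughly this (a simplified sketch, not a verbatim quote of
fs/binfmt_elf.c):

	size += sizeof(*elf);
	if (size > cprm->limit || !dump_write(cprm->file, elf, sizeof(*elf)))
		goto end_coredump;
	...
	/* and one struct elf_phdr per mapping, same pattern */
	size += sizeof(phdr);
	if (size > cprm->limit
	    || !dump_write(cprm->file, &phdr, sizeof(phdr)))
		goto end_coredump;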

        So far, so good, but then it proceeds to write_note_info(),
which neither checks the limit nor contributes to size.  _Then_ it
seeks to a page boundary and proceeds to loop over the pages, writing
them into the dump.  At that point it resumes the accounting, bumping
size and checking it against cprm->limit (only for present pages,
though - absent ones are silently skipped, with lseek() done on the
output and size not increased).
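
        For reference, the part in question looks roughly like this
(paraphrased and trimmed, so take the details with a grain of salt):

	/* notes: no limit check, nothing added to size */
	if (!write_note_info(&info, cprm->file, &foffset))
		goto end_coredump;

	if (!dump_seek(cprm->file, dataoff - foffset))
		goto end_coredump;

	/* per-page loop over the mappings */
	page = get_dump_page(addr);
	if (page) {
		void *kaddr = kmap(page);
		stop = ((size += PAGE_SIZE) > cprm->limit) ||
			!dump_write(cprm->file, kaddr, PAGE_SIZE);
		kunmap(page);
		page_cache_release(page);
	} else
		stop = !dump_seek(cprm->file, PAGE_SIZE);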

        In other words, the size of the notes section is ignored for
RLIMIT_CORE purposes.  Is that intentional?  Looks like a bug to
me...  FWIW, POSIX says that the limit is on the file size and demands
that writing stop at that length, with 0 meaning "suppress coredump
creation completely".  I'm not sure how the holes should be treated
(note, BTW, that when output goes into a pipe we do feed zeroes into
it for absent pages, for obvious reasons, but we do not count them
towards the limit), but ignoring the notes doesn't look intentional...
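
        (The zero-feeding in the pipe case comes from the dump_seek()
fallback when ->llseek can't be used - roughly this, again a from-memory
sketch rather than a verbatim quote:

	char *buf = (char *)get_zeroed_page(GFP_KERNEL);
	if (!buf)
		return 0;
	while (off > 0) {
		unsigned long n = off;
		if (n > PAGE_SIZE)
			n = PAGE_SIZE;
		if (!dump_write(file, buf, n))
			break;
		off -= n;
	}
	free_page((unsigned long)buf);

note that nothing in there touches size or checks cprm->limit.)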

        Comments?