On Tue, 2012-04-17 at 10:40 +0200, Folkert van Heusden wrote:
> I think I found it.
> When developing this application I decided that it would be cleaner to
> use unsigned ints when applicable.
> That's indeed very clean, but it's hell when fixing bugs: when something
> goes wrong and a negative value is stored in such an unsigned integer,
> it actually becomes a very large positive value. For example: unsigned
> short x = -1; gives x = 65535.
> So when this happens, my program would allocate gigabytes of ram. And
> since I used --malloc-fill=, valgrind would then initialize this ram
> (I'm speculating here) causing big time swapping. I found this out by
> disabling swap memory.
To verify that this is the problem, you might use --trace-malloc=yes
and look at the last trace before the program stops responding.
Note also that, if that is indeed the problem, either ulimit -d
or ulimit -m should provide protection and make the program fail instead.
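Concretely, that could look like the following (the limit values and the program name ./myprog are placeholders; note that on modern Linux ulimit -m / RLIMIT_RSS is often not enforced, so ulimit -v tends to be the more reliable cap):

```shell
# Cap memory so a runaway allocation fails fast instead of swapping.
ulimit -d 1048576   # data segment limit, in KiB
ulimit -v 2097152   # whole address space, in KiB; more reliably enforced

# --trace-malloc=yes logs every allocation to stderr; the last line
# before the hang shows the offending request size.
valgrind --trace-malloc=yes --malloc-fill=0x41 ./myprog 2> trace.log
tail trace.log
```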

> So either I'm totally wrong and something else is going wrong or it
> might be nice to implement "lazy malloc fill" which initializes pages
> to that value only when a pagefault occurs. Might help overcommit as
> well.
In your case, wouldn't that only hide a (real) bug?

It would be difficult to implement such a page-fault handler at the
Valgrind level, from what I understand of Valgrind and Linux.

Philippe



_______________________________________________
Valgrind-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/valgrind-users
