labath added a comment.

In https://reviews.llvm.org/D26676#598559, @hhellyer wrote:

> I haven't solved that yet!  ;-)
>
> I'm currently ending up with cores of ~500 KB, which is probably too big.
> I'm seeing what I can do to bring them down, but it may be that I can't
> shrink them much. I'm attempting to write two tests, one for multiple
> threads and one for cores generated by gcore, but the actual C program
> for both is very similar.
>
> What would you say a sensible target size actually is?


I would say about 100 KB is OK, but of course, the smaller, the better.

> The other option is to run these tests just on Linux; then it might be 
> possible to create the core dumps as part of the test. There are a lot of 
> drawbacks to that - not least that they wouldn't be run everywhere, but 
> also that, depending on how the OS is set up, it's not always possible to 
> create a core on a signal or to attach gcore. I don't really want to add 
> flaky tests.

Yeah, we can't rely on the Linux kernel to generate the core files, as there 
are many ways that could be disabled. If we had a core file writer in LLDB (I 
don't know if we do, or if it works), we could possibly do that.
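
If such a writer does exist and work for ELF cores, I would expect generating 
the fixture to look roughly like this (just a sketch - the availability of 
`process save-core` and its support for the target's core format are 
assumptions on my part):

    $ lldb -p $PID
    (lldb) process save-core /tmp/test.core
    (lldb) detach

That would sidestep the kernel configuration issues entirely, since LLDB would 
write the file itself rather than relying on the kernel or gcore.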

That said, I was able to trivially generate a ~20 KB core file by setting 
/proc/$PID/coredump_filter to 0. You won't get any memory regions that way, but 
it does not sound like you need any for this test anyway. It could probably be 
shrunk even further with tricks like static linking (to avoid a long list of 
modules, etc.). Have you tried that?
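
For illustration, a minimal sketch of a crashing test program that applies the 
filter to itself, so the test harness doesn't have to poke /proc externally 
(the choice of SIGSEGV and the rlimit fiddling are just examples, not a 
requirement):

    /* Sketch: ask the kernel to omit all memory regions from our own core
       dump before raising a fatal signal.  Writing "0" to
       /proc/self/coredump_filter clears every dump-this-mapping-type bit,
       so the resulting core contains little more than thread state and
       the module list. */
    #include <fcntl.h>
    #include <signal.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
      /* Raise the soft core-size limit to the hard limit so the kernel
         is actually willing to write a core file at all. */
      struct rlimit rl;
      if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;
        setrlimit(RLIMIT_CORE, &rl);
      }

      /* Exclude all memory mapping types from the dump. */
      int fd = open("/proc/self/coredump_filter", O_WRONLY);
      if (fd >= 0) {
        (void)write(fd, "0", 1);
        close(fd);
      }

      /* Fatal signal -> kernel writes the (small) core. */
      raise(SIGSEGV);
      return 0;
    }

The same trick should help the gcore-based fixture too: if I remember 
correctly, gdb's gcore honors /proc/$PID/coredump_filter by default (its 
`use-coredump-filter` setting).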


https://reviews.llvm.org/D26676


