Re: [systemd-devel] Problems with systemd-coredump

2014-03-01 Thread Manuel Reimer

On 02/18/2014 11:05 AM, Thomas Bächler wrote:

On 17.02.2014 21:27, Manuel Reimer wrote:

As soon as a larger coredump (about 500 MB) is to be stored, the whole
system slows down significantly. It seems that storing such large amounts of
data takes quite a long time and is a very CPU-hungry process...


I completely agree. Since the kernel ignores the maximum coredump size
when core_pattern is used, a significant amount of time passes whenever
a larger process crashes, with no benefit (since the dump never gets
saved anywhere).

This is extremely annoying if processes with sizes in the tens or
hundreds of gigabytes crash, which sadly happened to me quite a few
times recently.


If this feature is broken by design, why is it still enabled by default 
on Arch Linux? systemd-coredump makes it nearly impossible to debug 
larger processes, and it took me quite some time to figure out how to get 
coredumps placed in /var/tmp so I could use them to find out why my 
process had crashed.
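
One way to do that, sketched below as an example (the drop-in file name is
arbitrary; the specifiers are documented in core(5)), is to override
systemd's core_pattern pipe handler with a plain file pattern:

  # /etc/sysctl.d/99-coredump.conf (example name for the override)
  # Write dumps directly to /var/tmp instead of piping them to systemd-coredump.
  # %e = executable name, %p = PID, %t = time of the dump
  kernel.core_pattern=/var/tmp/core.%e.%p.%t

With a plain file pattern the kernel enforces the core size resource limit
again, so ulimit -c has to be raised as well.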


Yours

Manuel



Re: [systemd-devel] Problems with systemd-coredump

2014-02-18 Thread Thomas Bächler
On 17.02.2014 21:27, Manuel Reimer wrote:
 As soon as a larger coredump (about 500 MB) is to be stored, the whole
 system slows down significantly. It seems that storing such large amounts of
 data takes quite a long time and is a very CPU-hungry process...

I completely agree. Since the kernel ignores the maximum coredump size
when core_pattern is used, a significant amount of time passes whenever
a larger process crashes, with no benefit (since the dump never gets
saved anywhere).

This is extremely annoying if processes with sizes in the tens or
hundreds of gigabytes crash, which sadly happened to me quite a few
times recently.
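
For context: systemd sets the kernel's core_pattern to a pipe handler via a
sysctl.d drop-in, roughly like the line below (the exact list of specifiers
varies between versions). With a pipe handler, kernels of this era do not
apply the core size resource limit (RLIMIT_CORE), which is the behaviour
complained about above:

  # Approximation of the drop-in shipped by systemd (sysctl.d/50-coredump.conf)
  kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e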






Re: [systemd-devel] Problems with systemd-coredump

2014-02-18 Thread Kay Sievers
On Tue, Feb 18, 2014 at 11:05 AM, Thomas Bächler <tho...@archlinux.org> wrote:
 On 17.02.2014 21:27, Manuel Reimer wrote:
 As soon as a larger coredump (about 500 MB) is to be stored, the whole
 system slows down significantly. It seems that storing such large amounts of
 data takes quite a long time and is a very CPU-hungry process...

 I completely agree. Since the kernel ignores the maximum coredump size
 when core_pattern is used, a significant amount of time passes whenever
 a larger process crashes, with no benefit (since the dump never gets
 saved anywhere).

 This is extremely annoying if processes with sizes in the tens or
 hundreds of gigabytes crash, which sadly happened to me quite a few
 times recently.

It's an incomplete and rather fragile solution the way it works today.
We cannot really malloc() the memory for a core dump; it is piped
from the kernel for a reason. It can be as large as the available RAM,
which is why it is limited to the current maximum size, and therefore
also limited in its usefulness.

It really always needs to be reduced to a minidump before being stored
away. There is no other sensible option when things should end up in the
journal.
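
To illustrate the constraint, here is a minimal sketch of a core_pattern pipe
handler (not systemd-coredump's actual code; the size cap and the output path
are made up for the example). The kernel streams the dump into the handler's
stdin, so the handler has to consume it in bounded chunks rather than
allocating a buffer the size of the crashed process:

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  /* Hypothetical cap, mirroring the 767 MB limit discussed in this thread. */
  #define MAX_CORE_SIZE (767UL * 1024UL * 1024UL)

  int main(void) {
          char buf[64 * 1024];
          size_t total = 0;
          FILE *out;

          /* Example destination only; systemd-coredump stores the data in the journal. */
          out = fopen("/var/tmp/core.received", "w");
          if (!out)
                  return EXIT_FAILURE;

          /* The kernel streams the dump into our stdin; read it chunk by chunk. */
          for (;;) {
                  ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
                  if (n == 0)
                          break;            /* dump complete */
                  if (n < 0) {
                          if (errno == EINTR)
                                  continue;
                          break;            /* read error, give up */
                  }
                  if (total + (size_t) n > MAX_CORE_SIZE)
                          continue;         /* over the cap: keep draining, stop storing */
                  if (fwrite(buf, 1, (size_t) n, out) != (size_t) n)
                          break;            /* write error, e.g. disk full */
                  total += (size_t) n;
          }

          fclose(out);
          return EXIT_SUCCESS;
  }

Even when streamed like this, several hundred megabytes still have to be
written out (and, when stored in the journal, typically compressed as well),
which is what makes large crashes so expensive.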

Kay


[systemd-devel] Problems with systemd-coredump

2014-02-17 Thread Manuel Reimer

Hello,

If a larger application crashes and dumps core, systemd-coredump 
seems to have a few problems with that.


First, there is the 767 MB limit, which simply drops all larger 
coredumps.


But even below this limit it seems to be impossible to store coredumps. 
After a few tries I found that, with the default configuration, the 
limit seems to be at about 130 MB. Larger coredumps are simply dropped and 
I cannot find any errors logged anywhere.


It seems to be possible to work around this problem by increasing 
SystemMaxFileSize in journald.conf to 1000M. With this configuration change, 
larger coredumps seem to be possible, but it causes another problem.
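
For reference, a sketch of that change (the stock default for this setting 
is much smaller):

  # /etc/systemd/journald.conf
  [Journal]
  SystemMaxFileSize=1000M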


As soon as a larger coredump (about 500 MB) is to be stored, the whole 
system slows down significantly. It seems that storing such large amounts of 
data takes quite a long time and is a very CPU-hungry process...


Can someone please give some information on this? Maybe it is a bad idea 
to store such large amounts of data in the journal? If so, what is the 
solution? Will journald get improvements in this area?


Thank you very much in advance.

Greetings,

Manuel



Re: [systemd-devel] Problems with systemd-coredump

2014-02-17 Thread Jan Alexander Steffens
On Mon, Feb 17, 2014 at 9:27 PM, Manuel Reimer
<manuel.s...@nurfuerspam.de> wrote:
 Hello,

 If a larger application crashes and dumps core, systemd-coredump seems
 to have a few problems with that.

 First, there is the 767 MB limit, which simply drops all larger
 coredumps.

 But even below this limit it seems to be impossible to store coredumps. After
 a few tries I found that, with the default configuration, the limit
 seems to be at about 130 MB. Larger coredumps are simply dropped and I cannot
 find any errors logged anywhere.

 It seems to be possible to work around this problem by increasing
 SystemMaxFileSize in journald.conf to 1000M. With this configuration change,
 larger coredumps seem to be possible, but it causes another problem.

 As soon as a larger coredump (about 500 MB) is to be stored, the whole
 system slows down significantly. It seems that storing such large amounts of
 data takes quite a long time and is a very CPU-hungry process...

 Can someone please give some information on this? Maybe it is a bad idea to
 store such large amounts of data in the journal? If so, what is the solution?
 Will journald get improvements in this area?

 Thank you very much in advance.

 Greetings,

 Manuel

I wish there were a good way to install a system debugger that could
inspect the process and its memory at the time of the crash and
generate a short textual report, like libSegFault, or a minidump, like
breakpad. Either would hopefully be small enough to just chuck into the
journal.

core_pattern requires funneling the entire process memory through a
pipe and making a copy of it. LD_PRELOAD seems terribly brittle and
doesn't work on statically linked binaries.
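
For reference, the libSegFault report mentioned above is what glibc prints
when the library is preloaded along these lines (the library path is only an
example and varies by distribution):

  LD_PRELOAD=/usr/lib/libSegFault.so SEGFAULT_SIGNALS=all ./myprogram

That also illustrates the brittleness: the library has to be injected into
every process and cannot help with statically linked binaries.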


Re: [systemd-devel] Problems with systemd-coredump

2014-02-17 Thread Kay Sievers
On Mon, Feb 17, 2014 at 9:43 PM, Jan Alexander Steffens
<jan.steff...@gmail.com> wrote:
 On Mon, Feb 17, 2014 at 9:27 PM, Manuel Reimer
 <manuel.s...@nurfuerspam.de> wrote:
 Hello,

 If a larger application crashes and dumps core, systemd-coredump seems
 to have a few problems with that.

 First, there is the 767 MB limit, which simply drops all larger
 coredumps.

 But even below this limit it seems to be impossible to store coredumps. After
 a few tries I found that, with the default configuration, the limit
 seems to be at about 130 MB. Larger coredumps are simply dropped and I cannot
 find any errors logged anywhere.

 It seems to be possible to work around this problem by increasing
 SystemMaxFileSize in journald.conf to 1000M. With this configuration change,
 larger coredumps seem to be possible, but it causes another problem.

 As soon as a larger coredump (about 500 MB) is to be stored, the whole
 system slows down significantly. It seems that storing such large amounts of
 data takes quite a long time and is a very CPU-hungry process...

 Can someone please give some information on this? Maybe it is a bad idea to
 store such large amounts of data in the journal? If so, what is the solution?
 Will journald get improvements in this area?

 I wish there were a good way to install a system debugger that could
 inspect the process and its memory at the time of the crash and
 generate a short textual report, like libSegFault, or a minidump, like
 breakpad. Either would hopefully be small enough to just chuck into the
 journal.

That is the plan, and someone just needs to finish it:
  http://cgit.freedesktop.org/libminidump

Kay