Re: [systemd-devel] Limitation on maximum number of systemd timers that can be active

2021-02-04 Thread P.R.Dinesh
Thank you very much, Lennart, for the help.  I was eager to know whether
there was any known limitation, hence this question.

Hi Andy,
I am currently building a diagnostics data collector that gathers various
diagnostics data at intervals scheduled by the user.
systemd timers are used to run the schedules. I need to enforce a limit on
the maximum number of schedules the user can create for this feature.
I am currently choosing that limit, so I am interested in the largest value
we can let the user configure without a noticeable performance impact.

I will do performance testing on a Raspberry Pi 3 and share my observations.
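For the test, I plan to stamp out unit pairs in bulk with a throwaway generator script; the unit directory, count, and schedule below are placeholder choices, not anything systemd mandates:

```shell
#!/bin/sh
# Sketch: generate COUNT service/timer pairs for a timer-scaling benchmark.
# On the real target UNIT_DIR would be /etc/systemd/system (followed by
# "systemctl daemon-reload" and enabling the timers); here it defaults to a
# local directory so the generated units can be inspected safely.
UNIT_DIR="${UNIT_DIR:-./bench-units}"
COUNT="${COUNT:-100}"
mkdir -p "$UNIT_DIR"
i=1
while [ "$i" -le "$COUNT" ]; do
    cat > "$UNIT_DIR/bench-$i.service" <<EOF
[Service]
Type=oneshot
ExecStart=/bin/true
EOF
    cat > "$UNIT_DIR/bench-$i.timer" <<EOF
[Timer]
OnCalendar=*:0/5

[Install]
WantedBy=timers.target
EOF
    i=$((i + 1))
done
echo "generated $COUNT unit pairs in $UNIT_DIR"
```

After enabling the timers, I would watch CPU with top and memory of PID 1 while the timers fire.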

Thank you all for your support

On Wed, Feb 3, 2021 at 9:35 PM Lennart Poettering 
wrote:

> On Mi, 03.02.21 12:16, P.R.Dinesh (pr.din...@gmail.com) wrote:
>
> > Do we have any limitation on the maximum number of systemd timers / units
> > that can be active in the system?
>
> We currently enforce a limit of 128K units. This is controlled by
> the MANAGER_MAX_NAMES define, which is hard compiled in.
>
> > Will it consume high cpu/memory if we configure 1000s of systemd timers?
>
> It will consume a bit of memory, but I'd guess it should scale OK.
>
> All the scalability issues regarding the number of units that we saw many
> years ago have by now been fixed, at least all the slow paths I am aware
> of. I mean, we can certainly still optimize stuff (i.e. "systemctl
> daemon-reload" is expensive), but to my knowledge having a few K of units
> should be totally OK. (But then again I don't run things like that myself;
> my knowledge is purely based on feedback, or the recent lack thereof.)
>
> Lennart
>
> --
> Lennart Poettering, Berlin
>


-- 
With Kind Regards,
P R Dinesh
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Limitation on maximum number of systemd timers that can be active

2021-02-02 Thread P.R.Dinesh
Do we have any limitation on the maximum number of systemd timers/units
that can be active on the system?
Will it consume a lot of CPU/memory if we configure thousands of systemd
timers?

-- 
With Kind Regards,
P R Dinesh


[systemd-devel] Configuration option to filter logs sent via systemd-journal-upload

2020-02-27 Thread P.R.Dinesh
I am using systemd-journal-upload and systemd-journal-remote to sync logs
between two systems.
Do we have any configuration to filter the logs that are sent by
systemd-journal-upload?  Mainly, I am interested in filtering the logs by
message ID (MESSAGE_ID).

I am using systemd version 243.
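In case no such option exists, a workaround I am considering is a custom service that pipes a filtered export stream to the receiver over its /upload HTTP endpoint. The MESSAGE_ID value and the URL are placeholders, and this assumes the receiving side runs systemd-journal-remote with --listen-http=19532:

```shell
# Sketch of a hypothetical filtered-upload unit: journalctl does the
# filtering, curl POSTs the export stream to systemd-journal-remote's
# /upload endpoint. This script only writes the unit file text.
cat > filtered-upload.service <<'EOF'
[Unit]
Description=Upload only entries matching one MESSAGE_ID

[Service]
ExecStart=/bin/sh -c 'journalctl -f -o export \
  MESSAGE_ID=fc2e22bc6ee647b6b90729ab34a250b1 | \
  curl --silent --data-binary @- \
    -H "Content-Type: application/vnd.fdo.journal" \
    http://standby.example.com:19532/upload'

[Install]
WantedBy=multi-user.target
EOF
echo "wrote filtered-upload.service"
```

The obvious downside is one extra service per filter, as I found with a similar approach before.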

Thank you

-- 
With Kind Regards,
P R Dinesh


[systemd-devel] Configuration option to set the ownership of coredump files

2020-02-27 Thread P.R.Dinesh
Do we have a configuration option to change the ownership of core dump
files generated by systemd-coredump?

Mainly, I would like to set the owning group of the coredumps to a custom
group.

I am using systemd version 243.
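In case there is no such option, a workaround I am considering is a default POSIX ACL on the storage directory, e.g. via a tmpfiles.d fragment; the "diag" group and the fragment path are my own assumptions:

```
# /etc/tmpfiles.d/coredump-acl.conf (hypothetical path, "diag" group assumed)
# "a+" appends ACL entries; the d:... entry is a default ACL, so files that
# systemd-coredump creates later inherit group read access.
a+ /var/lib/systemd/coredump - - - - d:group:diag:r-x,group:diag:r-x
```

The same effect should be reachable with setfacl directly on the directory, with tmpfiles.d only making it survive reinstallation of the directory.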

-- 
With Kind Regards,
P R Dinesh


[systemd-devel] Coredump Truncation problem

2019-08-07 Thread P.R.Dinesh
Hi,

One of my daemons crashed.
The coredump was generated and stored in compressed format (.xz).
While analyzing the coredump in gdb, I see that the coredump got truncated,
and I was not able to get any backtrace out of it.


BFD: warning: /tmp/60515/data-manager is truncated: expected core file size
>= 1034403840, found: 954998784

[Current thread is 1 (LWP 2633)]
(gdb) bt full
#0  0x91b3ded4 in ?? ()
No symbol table info available.
#1  0x8abb1b6c in ?? ()
No symbol table info available.
Backtrace stopped: previous frame identical to this frame (corrupt stack?)



Systemd version : 229.

Size settings:
ProcessSizeMax=4G
ExternalSizeMax=4G
Ulimit is set to unlimited

Initially I thought this could be due to a coredump size limit, but ulimit
is set to unlimited and ProcessSizeMax/ExternalSizeMax are set to 4G.

What other scenarios could lead to a truncated coredump?

Is there any known issue in systemd v229 related to coredump truncation?

-- 
With Kind Regards,
P R Dinesh

[systemd-devel] Option to restart/start a service using sysrq magic key

2018-06-11 Thread P.R.Dinesh
Is it possible to restart or start a service using a SysRq magic key?

In my setup we have a getty service which controls the console.  For some
reason it gets stopped, and once the getty service is stopped, the console
connection to the device hangs.  The only way to recover is to reboot the
device.  In such a case, can we use any SysRq magic key to start or restart
a service, or to switch to the emergency target?


[systemd-devel] Journal statistics

2018-02-13 Thread P.R.Dinesh
I am interested in getting some statistics from the journal, e.g.:
1) the number of log entries
2) the number of log entries per severity
3) the number of log entries per daemon
4) the time when log rotation happened
...

Currently I am generating these data using journalctl and wc (journalctl
 | wc -l).
Is there a better or more native way to get these statistics?
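Short of a native counter, the JSON output makes per-priority counting scriptable. A sketch, where a small inline sample stands in for the real `journalctl -o json` stream:

```shell
# Count journal entries per PRIORITY from JSON-lines output.
# In real use the sample would come from: journalctl -o json | ...
sample='{"PRIORITY":"3","_COMM":"nginx"}
{"PRIORITY":"6","_COMM":"sshd"}
{"PRIORITY":"3","_COMM":"nginx"}'
printf '%s\n' "$sample" |
  sed -n 's/.*"PRIORITY":"\([0-7]\)".*/\1/p' |
  sort | uniq -c | sort -rn
```

The same pipeline with _COMM instead of PRIORITY gives the per-daemon counts.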

Regards,
Dinesh


[systemd-devel] Enabling Watchdog for Systemd-Journald Service

2018-01-07 Thread P.R.Dinesh
Is it possible to enable the watchdog for the systemd-journald service?  I
once witnessed a hung systemd-journald (maybe due to some wrong
configuration).  I would prefer to set the watchdog so that the service is
automatically recovered whenever it hangs.
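For reference, upstream ships systemd-journald.service with WatchdogSec= already set on recent versions; where it is absent or needs a different value, I assume a drop-in like the following would do it (3min is an example value):

```
# /etc/systemd/system/systemd-journald.service.d/watchdog.conf
[Service]
WatchdogSec=3min
```

followed by "systemctl daemon-reload" and a restart of systemd-journald.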

Regards,
Dinesh


[systemd-devel] Extracting logs out of journal in windows 7

2017-12-05 Thread P.R.Dinesh
I have a systemd journal file copied to a Windows machine, and I want to
extract the logs from it.  Is there an equivalent of journalctl for
Windows, or is it possible to cross-compile journalctl for the Windows
platform?

Regards,
Dinesh


[systemd-devel] Preventing custom Journal Log Corruption

2017-09-20 Thread P.R.Dinesh
I use a service to extract a few interesting logs from the journal and
store them separately:

ExecStart=/lib/systemd/systemd-journal-remote
 --output=/var/log/critical/critical.journal --getter="journalctl -f
PRIORITY=3 -o export"


This service stores the journal file in /var/log/critical/critical.journal.

After a system reboot, I get the following error message:

systemd-journal-remote[4833]: File /var/log/critical/critical.journal
corrupted or uncleanly shut down, renaming and replacing.


With this, critical.journal is renamed to
critical@000561f6d57153cf-c55b08dfcf5c7ad7.journal~ and a new
critical.journal file is created.

I found that critical@000561f6d57153cf-c55b08dfcf5c7ad7.journal~ is also a
valid file; if I use journalctl's --file option, I am able to read logs
from it.

I would like to know how to avoid this corruption.
While debugging, I found that if I directly kill the systemd-journal-remote
process, I also get a corrupted journal, so I think this service was killed
improperly during shutdown.

Is there any way I can avoid this corruption?

Systemd version : 229
Distro : Custom Distribution
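One thing I am checking is whether the unit gives systemd-journal-remote a chance to shut down cleanly: a SIGTERM lets it mark the journal file offline, while a SIGKILL (e.g. after a too-short stop timeout) reproduces exactly the kill-the-process corruption described above. A sketch of the relevant stop settings (the timeout value is an example):

```
[Service]
# SIGTERM is the default KillSignal, spelled out here for clarity.
KillSignal=SIGTERM
TimeoutStopSec=30
```

It also seems worth making sure shutdown ordering does not unmount /var/log/critical before this service has stopped.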


[systemd-devel] Store journal logs to different journals(location) based on Filters

2017-09-20 Thread P.R.Dinesh
Is it possible to store journal logs matching specific filters to different
journal files in addition to the main journal using some journald
configurations?


For example, all journal logs of severity critical and above should be
stored in /var/log/critical/critical.journal in addition to the main
journal.



Currently I am achieving this via combination of systemd-journal-remote and
journalctl as below

ExecStart=/lib/systemd/systemd-journal-remote
 --output=/var/log/critical/critical.journal --getter="journalctl -f
PRIORITY=3 -o export"

In this approach, I have the following disadvantages:
1) For each filter/output combination, I need to create an additional
service.
2) Sometimes we miss a few logs; I have not been able to root-cause this
yet.
3) I am also not sure whether this is better than systemd-journald storing
the logs directly in different journal files.



Hence, I would prefer a systemd-journald configuration like the following:

TargetFilter1="PRIORITY=2"
TargetDestination1="/var/log/critical.journal"

TargetFilter2="_UNIT=CPROCESSOR"
TargetDestination2="/var/log/cprocessor.journal"

which would store the logs matching each filter in the corresponding
destination.


[systemd-devel] Does Journal conf Max Use configs applies to remote journal as well

2017-07-12 Thread P.R.Dinesh
Do journal configuration options like "SystemMaxUse=, SystemKeepFree=,
SystemMaxFileSize=, SystemMaxFiles=, RuntimeMaxUse=, RuntimeKeepFree=,
RuntimeMaxFileSize=, RuntimeMaxFiles=" apply to the remote journal files
(created by systemd-journal-remote in the /var/log/journal/remote or
/run/log/journal/remote folders) as well?

If not, how can we limit the size of, or rotate, these remote journal files?
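In case they do not apply, a fallback I am considering is a oneshot vacuum service driven by a timer. The paths and the 100M cap are examples, and note that --vacuum-size only removes archived (rotated) files, not the active one:

```
# journal-remote-vacuum.service (hypothetical)
[Service]
Type=oneshot
ExecStart=/usr/bin/journalctl --directory=/var/log/journal/remote --vacuum-size=100M

# journal-remote-vacuum.timer (hypothetical)
[Timer]
OnUnitActiveSec=1h

[Install]
WantedBy=timers.target
```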

Thank you
Regards,
Dinesh


[systemd-devel] Does systemd compression makes use of multi core

2017-01-12 Thread P.R.Dinesh
Hi,

We are using systemd version 229.  Our processor is an x86 octa-core.
I have configured systemd-coredump (coredump.conf) with Storage=external
and Compress=yes.

One of our daemons consumes a lot of RAM (around 10 GB).  When it crashed,
the systemd-coredump utility took almost three hours (I have disabled the
resource limits) to save the compressed coredump.  It compresses to the xz
format.

How can I reduce the time taken for compression?  Does the systemd-coredump
compression make use of all eight cores, or does it use only a single core?
Is there any other way to handle this case better?
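As a data point I can check outside systemd: the xz tools support multi-threaded compression via -T, so recompressing a sample shows what the hardware can do (my assumption is that systemd-coredump's own liblzma use in v229 is single-threaded):

```shell
# Compare single- vs multi-threaded xz on a throwaway sample file
# (prefix the xz commands with "time" interactively to see the difference).
dd if=/dev/zero of=sample.bin bs=1M count=8 2>/dev/null
xz -1 -T1 -k -f sample.bin            # one core
mv sample.bin.xz sample-1thread.xz
xz -1 -T0 -k -f sample.bin            # -T0: use all available cores
ls -l sample-1thread.xz sample.bin.xz
```

A lower preset (-1 instead of the default -6) also cuts the time drastically, at some cost in ratio.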


Re: [systemd-devel] Skipping temporary coredump file during coredump generation

2016-12-03 Thread P.R.Dinesh
Thank you for the insight.
Shall I propose a patch with the following behaviour?

If backtrace generation is enabled, or compression is disabled or not
supported, store the temporary uncompressed file; otherwise skip the
uncompressed file and compress the core passed on STDIN directly.

if (backtrace_enabled || !compression_enabled || !compression_supported) {
        /* generate and store the uncompressed coredump */
} else {
        /* skip the uncompressed file; compress the STDIN corefile directly */
}




On Sat, 3 Dec 2016 at 23:46 Zbigniew Jędrzejewski-Szmek <zbys...@in.waw.pl>
wrote:

> On Fri, Dec 02, 2016 at 05:53:59PM +, P.R.Dinesh wrote:
> > During coredump generation I could find a temporary uncompressed file
> > getting generated from the corefile and written to the hard disk; later
> > this file is getting compressed (if compression was enabled) and then the
> > coredump file is stored and this temporary file is removed.
>
> We write the uncompressed file to disk to generate the backtrace.
> If we aren't doing that, we can just compress on-the-fly.
>
> > I have a process whose memory consumption is typically around 5GB, it
> > generates around 13GB of uncompressed coredump ( coredump_filter = 0x33).
> > Later this file is compressed to 20MB and the uncompressed file is
> removed.
> > I have set the
> > ProcessSizeMax=16GB
> > ExternalSizeMax= 16GB
> > But sometimes my disk doesn't have sufficient space to store this
> temporary
> > file, hence systemd-coredump aborts the coredump processing.
> >
> > Is it possible to skip this temporary file generation and generate the
> > compressed file directly from the Corefile passed through STDIN?
>
> We could be smarter, and if we see that there isn't enough disk space
> to store the uncompressed core file, skip this step, and immediately
> try to write compressed data. I don't know how complicated that would be:
> we write the core either to the file or to the journal, so the code is
> pretty complicated. Patches are welcome ;)
>
> Zbyszek
>


[systemd-devel] Skipping temporary coredump file during coredump generation

2016-12-02 Thread P.R.Dinesh
During coredump generation, I found that a temporary uncompressed file is
generated from the corefile and written to the hard disk; later this file
is compressed (if compression is enabled), the compressed coredump is
stored, and the temporary file is removed.

I have a process whose memory consumption is typically around 5 GB; it
generates around 13 GB of uncompressed coredump (coredump_filter = 0x33),
which is later compressed to 20 MB before the uncompressed file is removed.
I have set:
ProcessSizeMax=16GB
ExternalSizeMax=16GB
But sometimes my disk does not have enough space to store this temporary
file, so systemd-coredump aborts the coredump processing.

Is it possible to skip this temporary file generation and generate the
compressed file directly from the Corefile passed through STDIN?

Is there any impact on doing so?
I tried modifying coredump.c and was able to produce the compressed file
directly from the core file, but I am not sure whether it will fail in
some scenarios or whether I am missing some important data.

Also, please let me know if my understanding of the systemd-coredump
behaviour is wrong.


[systemd-devel] Queries on Journal usage for High Available Systems

2016-09-26 Thread P.R.Dinesh
Hi,
I am working on a High Availability System.

We have two servers: one serves the active requests while the other remains
in standby mode.  When the active server goes down, the standby server
becomes active and starts serving requests.

We want the journals of the active server to be sent to the standby server
and merged with the standby server's journal.

That is, the standby server's journal should contain the logs of both the
active server and the standby server.

I am currently using systemd version 219

With respect to these requirements, we have the following queries:




   1. Is it possible to configure journald to avoid listening on the syslog
   socket?
   2. Can I configure journal-remote to store the remote logs in the same
   system journal file (the journal file used by systemd-journald)?  Will
   it result in any file corruption?
   3. In the case of journal-upload, can I filter based on severity,
   message ID, or source (syslog, printf, kernel, etc.)?
   4. I was trying to merge two journal files belonging to two different
   systems using "journalctl --merge"; my expectation was that the logs
   would be interleaved by timestamp, but instead the new system's logs are
   just appended to the end of the existing log.
   5. For the system journal, we currently have the option of storing
   different users' logs in different journal files.  Is it also possible
   to store logs in different journal files based on the source of the log
   (syslog, printf, kernel, etc.)?


[systemd-devel] Error : cannot fork for core dump: Cannot allocate memory

2016-07-30 Thread P.R.Dinesh
My system had been running for 2 days continuously when I suddenly started
getting the following error:

Broadcast message from systemd-journald@Acce-01--5712 (Thu 2016-07-28
> 19:02:14 UTC):
>  systemd[1]: Caught , cannot fork for core dump: Cannot allocate
> memory
>  Message from syslogd@Acce-01--5712 at Jul 28 19:02:14 ...
>  systemd[1]: Caught , cannot fork for core dump: Cannot allocate
> memory
>  Message from syslogd@Acce-01--5712 at Jul 28 19:02:14 ...
>  systemd[1]: Freezing execution.
>  Broadcast message from systemd-journald@Acce-01--5712 (Thu 2016-07-28
> 19:02:14 UTC):
>  systemd[1]: Freezing execution.


My guess is that the system ran out of memory and systemd was not able to
fork a new process (systemd-coredump) to perform the coredump operation.

Is my understanding correct?

Also, how should we handle this situation?  What is the expected behavior
of systemd-coredump when the system runs out of resources?

Once I hit this issue, the system froze with no console access, so I was
not able to gather further information.


I am using systemd version 219.

-- 
With Kind Regards,
Dinesh P Ramakrishnan


Re: [systemd-devel] Can coredumpctl work independent of journald?

2016-05-13 Thread P.R.Dinesh
Thank you, Lennart.

This sounds like an interesting proposal.

I am interested in the enhancement below, which would give us fine control
over what is stored persistently:
"(along with this feature we should probably also add support for
storing low-priority messages in /run only, so that debug stuff is
kept around only during runtime, but not after)"


I am proposing a user-configurable parameter that specifies a minimum
severity level for persistent storage.  Any logs below that severity would
stay in the runtime journal only.

I seek your advice and would appreciate some pointers on designing this
feature.


On Thu, May 12, 2016 at 11:21 PM, Lennart Poettering <lenn...@poettering.net
> wrote:

> On Thu, 12.05.16 08:15, P.R.Dinesh (pr.din...@gmail.com) wrote:
>
> > Thank you Lennart,
> > I would like to explain my system scenario.
> >
> > We are using systemd version 219. (Updating to 229 is in progress).
> >
> > Configured for persistent storage for both Journal and Coredump (Coredump
> > is stored externallly)
> >
> > The logs and coredump are stored in another partition
> > "/var/diagnostics/logs" and "/var/diagnostics/coredump".  We do symbolic
> > link between the /var/lib/systemd/coredump and logs to those folders.
> >
> > Coredump and journald is configured to utilized upto 10% of the disk
> > space(total disk space is ~400MB) which would allocate 40MB to journal
> logs
> > and 40MB to coredump.  For some reason (under investigation) some of our
> > daemons are generating too much logs which makes the journald to reach
> the
> > 40MB limit within 6 hours.  Hence journald starts wrapping around.
> > Meanwhile some daemons have also crashed and core dumped.
> >
> > Now when I do coredumplist, none of those coredumps are shown.
>
> I see.
>
> > Also I tried launching the coredumpctl with the coredump file both using
> >  the pid name as well as using the coredump file name.  Since we dont
> have
> > the journal entry coredumpctl is not launching them,  can we atleast have
> > the coredumpctl launch the gdb using the core dump file name?
>
> The coredumps are simply compressed, use the xz tools to decompress
> them. Note however, that the xz file format generated by the xz
> libraries was incompatible with the xz tool for a long time, which is
> why the xz support in the journal and the coredumping code was
> experimental until v229. Make sure to upgrade to v229 if you want to
> decompress the coredumps with the normal "unxz".
>
> >
> > [prd@localhost ~]$ coredumpctl gdb
> > core.lshw.1000.9bb41758bba94306b39e751048e0cee9.23993.146287152300.xz
> > No match found.
> > [prd@localhost ~]$ coredumpctl gdb 23993
> > No match found.
>
> If you have v229 this should work:
>
> unxz <
> /var/lib/systemd/coredump/core.lshw.1000.9bb41758bba94306b39e751048e0cee9.23993.146287152300.xz
> > ~/coredump
> gdb ~/coredump
> …
>
> > In summary, the frequency of logs are higher and the frequency of core
> > dumps are very less in our system which leads to the loss of coredump
> > information.
> >
> > I am thinking of two solutions here
> > 1) Enhance coredumpctl to launch the gdb using the coredump file name
> > 2) Store the Journal logs for coredump seperately from other journal logs
> > so that they could be maintained for long duration (is this
> > feasible?)
>
> A feature that has been requested before is that we add
> priority-sensitive rotation to the journal: keep error messages with
> levels of ERROR or higher around for longer than INFO and lower or
> so. Given that the coredumps are logged at CRIT level, this would
> cover your case nicely.
>
> However, that's a feature request only, and I think it would make
> sense to have, but nobody hacked that up yet.
>
> (along with this feature we should probably also add support for
> storing low-priority messages in /run only, so that debug stuff is
> kept around only during runtime, but not after)
>
> Lennart
>
> --
> Lennart Poettering, Red Hat
>



-- 
With Kind Regards,
Dinesh P Ramakrishnan


Re: [systemd-devel] Internals of journald

2016-05-12 Thread P.R.Dinesh
Thank you for the link.  But I am looking for journald internals from a
developer's perspective: how journald is implemented, its design
documents, etc.

On Thu, May 12, 2016 at 6:39 PM, Mikhail Kasimov <mikhail.kasi...@gmail.com>
wrote:

> Hello!
>
> Try this one:
> https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs
>
> 12.05.2016 15:19, P.R.Dinesh пишет:
>
> I would like to understand the internals of journald, how does journal
> works,  could you please share some links on this subject.
> Thank you
> Regards
> Dinesh
>
>


-- 
With Kind Regards,
Dinesh P Ramakrishnan


[systemd-devel] Internals of journald

2016-05-12 Thread P.R.Dinesh
I would like to understand the internals of journald and how the journal
works.  Could you please share some links on this subject?
Thank you
Regards
Dinesh


Re: [systemd-devel] Can coredumpctl work independent of journald?

2016-05-11 Thread P.R.Dinesh
Additionally, can we improve the journal rotation scheme to keep
higher-severity messages (such as coredump entries) for a longer duration
and remove lower-priority messages to free up space?

On Thu, May 12, 2016 at 8:15 AM, P.R.Dinesh <pr.din...@gmail.com> wrote:

> Thank you Lennart,
> I would like to explain my system scenario.
>
> We are using systemd version 219. (Updating to 229 is in progress).
>
> Configured for persistent storage for both Journal and Coredump (Coredump
> is stored externallly)
>
> The logs and coredump are stored in another partition
> "/var/diagnostics/logs" and "/var/diagnostics/coredump".  We do symbolic
> link between the /var/lib/systemd/coredump and logs to those folders.
>
> Coredump and journald is configured to utilized upto 10% of the disk
> space(total disk space is ~400MB) which would allocate 40MB to journal logs
> and 40MB to coredump.  For some reason (under investigation) some of our
> daemons are generating too much logs which makes the journald to reach the
> 40MB limit within 6 hours.  Hence journald starts wrapping around.
> Meanwhile some daemons have also crashed and core dumped.
>
> Now when I do coredumplist, none of those coredumps are shown.
>
> Also I tried launching the coredumpctl with the coredump file both using
>  the pid name as well as using the coredump file name.  Since we dont have
> the journal entry coredumpctl is not launching them,  can we atleast have
> the coredumpctl launch the gdb using the core dump file name?
>
> [prd@localhost ~]$ coredumpctl gdb
> core.lshw.1000.9bb41758bba94306b39e751048e0cee9.23993.146287152300.xz
> No match found.
> [prd@localhost ~]$ coredumpctl gdb 23993
> No match found.
>
>
> In summary, the frequency of logs are higher and the frequency of core
> dumps are very less in our system which leads to the loss of coredump
> information.
>
> I am thinking of two solutions here
> 1) Enhance coredumpctl to launch the gdb using the coredump file name
> 2) Store the Journal logs for coredump seperately from other journal logs
> so that they could be maintained for long duration (is this feasible?)
>
> Thank you
> Regards,
> Dinesh
>
> On Wed, May 11, 2016 at 10:25 PM, Lennart Poettering <
> lenn...@poettering.net> wrote:
>
>> On Wed, 11.05.16 20:31, P.R.Dinesh (pr.din...@gmail.com) wrote:
>>
>> > I have set the journald to be persistant and limit its size to 40MB.
>> > I had a process coredumped and the coredump file is found in
>> > /var/log/systemd/coredump
>> >
>> > When I run coredumpctl this coredump is not shown.
>> >
>> > Later I found that the core dump log is missing from the Journal ( the
>> > journal got wrapped since it reached the size limitation).
>> >
>> > I think coredumpctl depends on journal to display the coredump.  Can't
>> it
>> > search for the coredump files present in the coredump folder and list
>> those
>> > files?
>>
>> We use the metadata and the filtering the journal provides us with,
>> and the coredump on disk is really just secondary, external data to
>> that, that can be lifecycled quicker than the logging data. We extract
>> the backtrace from the coredump at the momemt the coredump happens,
>> and all that along with numerous metadata fields is stored in the
>> journal. In fact storing the coredump is optional, because in many
>> setups the short backtrace in the logs is good enough, and the
>> coredump is less important.
>>
>> So, generally the concept here really is that logs are cheap, and thus
>> you keep around more of them; and coredumps are large and thus you
>> lifecycle them quicker. If I understand correctly what you want is the
>> opposite: you want a quicker lifecycle for the logs but keep the
>> coredumps around for longer. I must say, I am not entirely sure where
>> such a setup would be a good idea though... i.e. wanting persistent
>> coredumps but volatile logging sounds a strange combination to
>> me... Can you make a good case for this?
>>
>> But yeah, we really don't cover what you are asking for right now, and
>> I am not sure we should...
>>
>> > Also can I launch the coredumpctl gdb by providing a compressed core
>> > file.
>>
>> If you configure systemd-coredump to store the coredumps compressed
>> (which is in fact the default), then "coredumpctl gdb" will implicitly
>> decompress them so that gdb can do its work.
>>
>> Lennart
>>
>> --
>> Lennart Poettering, Red Hat
>>
>
>
>
> --
> With Kind Regards,
> Dinesh P Ramakrishnan
>



-- 
With Kind Regards,
Dinesh P Ramakrishnan


Re: [systemd-devel] Can coredumpctl work independent of journald?

2016-05-11 Thread P.R.Dinesh
Thank you Lennart,
I would like to explain my system scenario.

We are using systemd version 219 (updating to 229 is in progress).

We have configured persistent storage for both the journal and coredumps
(coredumps are stored externally).

The logs and coredumps are stored in another partition,
"/var/diagnostics/logs" and "/var/diagnostics/coredump"; we symbolically
link /var/lib/systemd/coredump and the log directory to those folders.

Coredump and journald are configured to use up to 10% of the disk space
(total disk space is ~400 MB), which allocates 40 MB to journal logs and
40 MB to coredumps.  For some reason (under investigation), some of our
daemons are generating too many logs, which makes journald reach the 40 MB
limit within 6 hours; hence journald starts wrapping around.  Meanwhile,
some daemons have also crashed and dumped core.

Now when I run coredumpctl list, none of those coredumps are shown.

I also tried launching coredumpctl with both the PID and the coredump file
name.  Since we no longer have the journal entry, coredumpctl does not
find them.  Can we at least have coredumpctl launch gdb using the coredump
file name?

[prd@localhost ~]$ coredumpctl gdb
core.lshw.1000.9bb41758bba94306b39e751048e0cee9.23993.146287152300.xz
No match found.
[prd@localhost ~]$ coredumpctl gdb 23993
No match found.


In summary, logs are frequent and coredumps are rare in our system, which
leads to the loss of coredump information.

I am considering two solutions here:
1) Enhance coredumpctl to launch gdb using the coredump file name.
2) Store the journal entries for coredumps separately from other journal
logs so that they can be retained for a long duration (is this feasible?).

Thank you
Regards,
Dinesh

On Wed, May 11, 2016 at 10:25 PM, Lennart Poettering <lenn...@poettering.net
> wrote:

> On Wed, 11.05.16 20:31, P.R.Dinesh (pr.din...@gmail.com) wrote:
>
> > I have set the journald to be persistant and limit its size to 40MB.
> > I had a process coredumped and the coredump file is found in
> > /var/log/systemd/coredump
> >
> > When I run coredumpctl this coredump is not shown.
> >
> > Later I found that the core dump log is missing from the Journal ( the
> > journal got wrapped since it reached the size limitation).
> >
> > I think coredumpctl depends on journal to display the coredump.  Can't it
> > search for the coredump files present in the coredump folder and list
> those
> > files?
>
> We use the metadata and the filtering the journal provides us with,
> and the coredump on disk is really just secondary, external data to
> that, that can be lifecycled quicker than the logging data. We extract
> the backtrace from the coredump at the momemt the coredump happens,
> and all that along with numerous metadata fields is stored in the
> journal. In fact storing the coredump is optional, because in many
> setups the short backtrace in the logs is good enough, and the
> coredump is less important.
>
> So, generally the concept here really is that logs are cheap, and thus
> you keep around more of them; and coredumps are large and thus you
> lifecycle them quicker. If I understand correctly what you want is the
> opposite: you want a quicker lifecycle for the logs but keep the
> coredumps around for longer. I must say, I am not entirely sure where
> such a setup would be a good idea though... i.e. wanting persistent
> coredumps but volatile logging sounds a strange combination to
> me... Can you make a good case for this?
>
> But yeah, we really don't cover what you are asking for right now, and
> I am not sure we should...
>
> > Also can I launch the coredumpctl gdb by providing a compressed core
> > file.
>
> If you configure systemd-coredump to store the coredumps compressed
> (which is in fact the default), then "coredumpctl gdb" will implicitly
> decompress them so that gdb can do its work.
>
> Lennart
>
> --
> Lennart Poettering, Red Hat
>



-- 
With Kind Regards,
Dinesh P Ramakrishnan


[systemd-devel] Can coredumpctl work independent of journald?

2016-05-11 Thread P.R.Dinesh
I have set journald to be persistent and limited its size to 40 MB.
A process dumped core, and the coredump file is found in
/var/log/systemd/coredump.

When I run coredumpctl, this coredump is not shown.

Later I found that the coredump log entry is missing from the journal (the
journal wrapped around since it reached the size limit).

I think coredumpctl depends on the journal to display coredumps.  Can't it
search for the coredump files present in the coredump folder and list
those files?

Also, can I launch coredumpctl gdb by providing a compressed core file?

Thank you
-- 
With Kind Regards,
Dinesh P Ramakrishnan


[systemd-devel] Callback or Event Notification from journald in case of crash

2016-04-05 Thread P.R.Dinesh
I am using systemd, journald, and systemd-coredump in my system.  When a
process crashes, the systemd-coredump utility handles the core dump and
logs a message in the journal with
MESSAGE_ID=fc2e22bc6ee647b6b90729ab34a250b1.

Now I want to add a routine that performs post-crash activities, e.g.,
blinking a fault LED on the system.

I think we can poll the journal for the crash message, but is there a more
optimal way to register a callback from the journal that is triggered
whenever a message with a particular MESSAGE_ID is received?
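Two routes I am considering: the C API (sd_journal_add_match() plus sd_journal_wait() from sd-journal, which avoids busy polling), or, from shell, a follow-mode pipeline.  A sketch of the latter; the handler name is hypothetical:

```shell
# on_crash is a placeholder post-crash handler (it would drive the fault
# LED on the real system); here it just reports the event.
on_crash() { echo "crash event: $1"; }

# Real wiring (commented out because "journalctl -f" blocks forever):
# journalctl -f -o cat MESSAGE_ID=fc2e22bc6ee647b6b90729ab34a250b1 |
#   while read -r line; do on_crash "$line"; done

# Demonstration with a synthetic event:
on_crash "data-manager dumped core"
```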


-- 
With Kind Regards,
Dinesh P Ramakrishnan


Re: [systemd-devel] Global Configuration to Restart all Daemons on Failure/Crash

2015-11-30 Thread P.R.Dinesh
I want to restart any of my daemons (controlled by systemd) on
failure/crash.

I can do this individually by setting Restart=on-failure in each daemon's
service file, but I want a global configuration by which we can instruct
systemd to restart any service when it crashes.

Could you please share how to achieve this?

Thank you
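One approach I am looking at, assuming a systemd version new enough to support type-level drop-in directories (see systemd.unit(5) for service.d support), is a global drop-in that applies a default to every service unit:

```
# /etc/systemd/system/service.d/10-restart.conf
# Applies Restart=on-failure to all service units; run
# "systemctl daemon-reload" afterwards.  Individual units can still
# override this with their own Restart= setting.
[Service]
Restart=on-failure
```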

On Mon, Nov 30, 2015 at 6:37 PM, P.R.Dinesh <pr.din...@gmail.com> wrote:

>
>
> I want to restart any of my daemons (controlled by systemd) on
> Failure/Crash.
>
> I can do this individually by setting the Restart=on-failure in the daemon
> specific service file.   But I want a global configuration by which we can
> instruct the systemd to restart any service when it crashes.
>
> Could you please share me on how to achieve this feature.
>
> Thank you
>
> --
> With Kind Regards,
> Dinesh P Ramakrishnan
>



-- 
With Kind Regards,
Dinesh P Ramakrishnan