Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-20 Thread Adam Spragg
On Tuesday 20 Nov 2012 01:21:54 Lennart Poettering wrote:
 My intention was to speak only HTTP for all of this, so that we can
 nicely work through firewalls.

Wait, I thought one of the guiding principles of systemd was to do things The 
Right Way, and not use ugly workarounds for other people's brokenness.

If admins want to send network traffic over a port, and their firewall is 
preventing them, surely the problem is in the firewall, and the firewall 
should be fixed? Making everything HTTP-friendly to get around broken firewall 
policies is an ugly workaround which just helps perpetuate the problem.

Not to mention the fact that HTTP is a horrible protocol for almost anything 
except serving up web pages. It effectively implements a basic 
request/response datagram protocol (albeit with arbitrarily large packets), 
which can only be initiated from one side, but with the overhead of HTTP 
headers and the creation of a TCP connection.


Just my ¤0.02

Adam


Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-20 Thread Jóhann B. Guðmundsson

On 11/20/2012 09:02 AM, Adam Spragg wrote:

On Tuesday 20 Nov 2012 01:21:54 Lennart Poettering wrote:

My intention was to speak only HTTP for all of this, so that we can
nicely work through firewalls.

Wait, I thought one of the guiding principles of systemd was to do things The
Right Way, and not use ugly workarounds for other people's brokenness.

If admins want to send network traffic over a port, and their firewall is
preventing them, surely the problem is in the firewall, and the firewall
should be fixed? Making everything HTTP-friendly to get around broken firewall
policies is an ugly workaround which just helps perpetuate the problem.


Agreed + you don't want to use ssh to do this either



Not to mention the fact that HTTP is a horrible protocol for almost anything
except serving up web pages. It effectively implements a basic
request/response datagram protocol (albeit with arbitrarily large packets),
which can only be initiated from one side, but with the overhead of HTTP
headers and the creation of a TCP connection.



Agreed

I somehow always imagined remote systemd and systemd journal integration 
being handled in a similar manner to what func [1] and certmaster [2] are doing.


1. https://fedorahosted.org/func/
2. https://fedorahosted.org/certmaster/

JBG



Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-20 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Nov 20, 2012 at 10:02:39AM +, Jóhann B. Guðmundsson wrote:
 On 11/20/2012 09:02 AM, Adam Spragg wrote:
 On Tuesday 20 Nov 2012 01:21:54 Lennart Poettering wrote:
 My intention was to speak only HTTP for all of this, so that we can
 nicely work through firewalls.
 Wait, I thought one of the guiding principles of systemd was to do things The
 Right Way, and not use ugly workarounds for other people's brokenness.
 
 If admins want to send network traffic over a port, and their firewall is
 preventing them, surely the problem is in the firewall, and the firewall
 should be fixed? Making everything HTTP-friendly to get around broken firewall
 policies is an ugly workaround which just helps perpetuate the problem.
 
 Agreed + you don't want to use ssh to do this either
I think that firewalls are just one of the reasons... I think that we
want to have SSL-encrypted communications by default, and then the
specific protocol used above that is invisible to the firewall anyway.

Having multiple transports isn't really a problem -- it is mostly a matter
of hooking into some library.

HTTP is already spoken by systemd-journal-gatewayd, and SSH is useful
because everybody already has it set up.

 Not to mention the fact that HTTP is a horrible protocol for almost anything
 except serving up web pages. It effectively implements a basic
 request/response datagram protocol (albeit with arbitrarily large packets),
 which can only be initiated from one side, but with the overhead of HTTP
 headers and the creation of a TCP connection.
If encryption is used, TCP connection overhead is negligible. And we
want mostly one-way communication anyway.

 I somehow always imagined remote systemd and systemd journal
 integration being handled in a similar manner to what func [1] and
 certmaster [2] are doing.
 
 1. https://fedorahosted.org/func/
 2. https://fedorahosted.org/certmaster/
Certmaster looks great: maybe it can be used to solve the problem of
certificate distribution.

Zbyszek


Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-20 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Nov 20, 2012 at 03:35:30AM +0100, Zbigniew Jędrzejewski-Szmek wrote:
 I guess that writing a man-page is in order...
So, to make things concrete, I've put together a wish-list manpage,
which describes some things which are there and quite a few things
which are not there yet. If this is accepted, then I'll start to fix
the code to follow the docs.

I'm pushing this to git://in.waw.pl/git/systemd branch remote.

Zbyszek

---
SYSTEMD-JOURNAL-RE(8)   systemd-journal-remote   SYSTEMD-JOURNAL-RE(8)



NAME
   systemd-journal-remote, systemd-journal-remote.service, systemd-
   journal-remote.socket - Stream journal messages over the network

SYNOPSIS
   systemd-journal-remote [OPTIONS...] [-o/--output=DIR|FILE] [SOURCES...]

   systemd-journal-remote.service

   systemd-journal-remote.socket

DESCRIPTION
   systemd-journal-remote is a command to receive journal events and store
   them to the journal. Input streams must be in the Journal Export
   Format[1], i.e. like the output from journalctl --output=export.

SOURCES
   Sources can be either active (systemd-journal-remote requests and
   pulls the data), or passive (systemd-journal-remote waits for a
   connection and then receives events pushed by the other side).

   systemd-journal-remote can read more than one event stream at a time.
   They will be interleaved in the output file. In case of active
   connections, each source is one stream, and in case of passive
   connections each connection can result in a separate stream. Sockets
   can be configured in accept mode (i.e. only one connection), or
   listen mode (i.e. multiple connections, each resulting in a stream).

   When there are no more connections, and no more can be created (there
   are no listening sockets), then systemd-journal-remote will exit.

   Active sources can be specified in the following ways:

   When - is given as an argument, events will be read from standard
   input.

   When a URL is given, systemd-journal-remote will retrieve messages
   over HTTP or HTTPS. The URL should refer to the root of a remote
   systemd-journal-gatewayd(8) instance (e.g.
   http://some.host:19531/).

   If the URL starts with ssh:// an ssh(1) connection will be opened
   and journalctl(1) will be launched on the remote host (e.g.
   ssh://u...@some.host). Messages will be sent over the encrypted
   connection and stored locally.

   If a file path is given, journal events will be read from local
   disk. If the path refers to an existing file, just this file will
   be read. If the path refers to an existing directory, journal
   files underneath this directory will be read (like with journalctl
   --directory=).

   Passive sources can be specified in the following ways:

   --listen=ADDRESS
   ADDRESS must be an address suitable for ListenStream= (c.f.
   systemd.socket(5)). A stream of journal events is expected.

   --listen-http=ADDRESS
   ADDRESS must be an address suitable for ListenStream= (c.f.
   systemd.socket(5)). An HTTP POST request is expected to /events.

   --listen-https=ADDRESS
   ADDRESS must be an address suitable for ListenStream= (c.f.
   systemd.socket(5)). An HTTPS POST request is expected to /events.

   $LISTEN_FDS
   When systemd-journal-remote is started as a service
   (systemd-journal-remote.service unit) sockets configured in
   systemd-journal-remote.socket will be passed using $LISTEN_FDS.

   By default, open sockets passed through socket activation behave
   like those opened with --listen= described above. If
   --listen-http=-n or --listen-https=-n is used, HTTP and HTTPS
   connections will be expected like with the options --listen-http=
   and --listen-https= above. Integer n refers to the n-th socket of
   $LISTEN_FDS, and must be in the range 0 ..  $LISTEN_FDS-1.
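
   As an illustration, a matching socket unit could look roughly like
   this (a sketch only; the unit body is hypothetical and port 19532 is
   merely the one used in the --listen example elsewhere in this
   thread):

       [Unit]
       Description=Journal remote sink socket

       [Socket]
       ListenStream=19532

       [Install]
       WantedBy=sockets.target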

SINKS
   The location of the output journal can be specified with -o or
   --output=.

   --output=FILE
   Will write to this journal. The filename must end with .journal.
   The file will be created if it does not exist. When necessary
   (journal file full, or corrupted) the file will be renamed
   following normal journald rules and a new journal file will be
   created in its stead.

   --output=DIR
   Will create journal files underneath directory DIR. The directory
   must exist. When necessary (journal files full, or corrupted)
   journal files will be renamed following normal journald rules.
   Names of files underneath DIR will be generated using the rules
   described 

Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-20 Thread Lennart Poettering
On Tue, 20.11.12 03:35, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:

  My intention was to speak only HTTP for all of this, so that we can
  nicely work through firewalls.
 Yeah, probably that's more useful than raw stream for normal purposes,
 since it allows for authentication and whatnot.

Yeah, and not just that. I also want to beef up the server side so that
it optionally can run as CGI and as fastCGI, so that people can
integrate that into their existing web servers, if they wish.

But yeah, using HTTP solves many many issues, such as auth, encryption,
firewall/proxy support, and so on. On top of this the semantics of log
syncing fit really nicely into the GET/POST model of HTTP.
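
For instance, pulling from a systemd-journal-gatewayd instance and pushing
to a receiving side could look roughly like this on the wire (a sketch
assembled from pieces elsewhere in this thread: the Accept header from the
curl invocation below, /entries from the gatewayd examples, /events from
the draft man page; the Content-Type on the push side is an assumption):

  # pull: fetch and follow entries from a remote gatewayd
  GET /entries?boot&follow HTTP/1.1
  Host: some.host:19531
  Accept: application/vnd.fdo.journal

  # push: upload an export-format stream to a receiving instance
  POST /events HTTP/1.1
  Host: other.host:19532
  Content-Type: application/vnd.fdo.journal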

  I think it would make sense to drop things into
  /var/log/journal/hostname/*.journal by default. The hostname would
  have to be determined from the URL the user specified on the command
  line. Ideally we'd use the machine ID here, but since the machine ID is
  hardly something the user should specify on the command line (and we
  cannot just take the machine ID supplied from the other side, because we
  probably should not trust that and hence allow it to tell us to
  overwrite another host's data), the hostname is the next best
  thing. Currently libsystemd-journald will ignore directories that are
  not machine IDs when browsing, but we could easily drop that limitation.
 So it seems that this mapping (url/source/whatever - .journal path)
 will require some thought.
 
 I'd imagine that people will want to use this most often as a syslogd
 replacement, i.e. launch systemd-journal-remote on a central host, and
 then let all other hosts stream messages live. In this case we know
 only two things: _MACHINE_ID specified remotely, and the remote
 IP:PORT and thus hostname. Actually, I thought that since all those
 things are unreliable (IP only to some extent, but still), they
 wouldn't be used to determine the output file, and all output would go
 into one .journal.

So, my thinking here is that hostnames generally suck for identifying
machines since they are not unique, can change and sometimes are not set
at all. However, that is only true in the general case. In the specific
case where admins want to set up an infrastructure for centralizing logs
they first set up a network, and as part of that I am pretty sure they
came up with a sane naming/addressing scheme first, that makes the name
unique in their local setup, makes the names fixed and ensures the name
is always there. Or to put this in other words: to be able to sync logs
from other hosts you first need to think about how you can contact
that other host, and hence had to introduce a naming scheme first, and
we should be able to just build on that.

 I remember that samba does (did?) something like what you suggest, and
 kept separate logs based on the information under control of the
 connecting host. On a host connected to the internet this would lead
 to hundreds of log files.
 
 In addition, .journal files have a fairly big overhead: ~180kB for
 an empty file. This overhead might be unwanted if there are many
 sources.
 
 Maybe there's no one answer, and choices will have to be provided.

I think it definitely makes sense to allow admins to name the local
destination dir as they want. I am mostly just interested in finding a
good default, and I'd vote extracting the basename of the URL used to
access the remote journal for that.

   Push mode is not implemented... (but it would be a separate program
   anyway).
  
  My intention was actually to keep this in the same tool. So that we'd
  have for input and output:
  
  A) HTTP GET
  B) HTTP POST
  C) SSH PULL (would invoke journalctl -o export via ssh)
  D) SSH PUSH (would invoke systemd-journald-remote via ssh)
  E) A directory for direct read access (which would allow us to merge
  multiple files into one with this tool)
  F) A directory for direct write access (which is of course the
  default)

 Also useful:
 B1) socket listen() without HTTP

Where would I want to use that instead of B? 

 B2) HTTPS POST (I'm assuming that POST means to listen)

HTTPS for me is just a special case of HTTP. When I said HTTP above I
meant HTTP with and without TLS, and with and without authentication.

 E1) a specific file for read access
 F1) a specific file for write access

That's something we have to think about anyway: i.e. whether we should
allow accessing a separate journal file via libsystemd-journal?
Currently we only allow accessing dirs. The reason for that is more or
less that accessing files probably doesn't do what people assume it
would do, since files are subject to rotation and referencing a file
hence quickly becomes a dangling reference...
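
(For reference, dir-based access through the public API looks roughly like
the sketch below; the path is just an example, error handling is trimmed,
and it simply prints the raw MESSAGE= field of each entry. Link against
libsystemd-journal.)

#include <stdio.h>
#include <systemd/sd-journal.h>

int main(void) {
        sd_journal *j;
        const void *d;
        size_t l;

        /* Open all journal files underneath a directory. */
        if (sd_journal_open_directory(&j, "/tmp/dir", 0) < 0)
                return 1;

        /* Iterate over all entries, oldest first. */
        SD_JOURNAL_FOREACH(j)
                if (sd_journal_get_data(j, "MESSAGE", &d, &l) >= 0)
                        printf("%.*s\n", (int) l, (const char*) d);

        sd_journal_close(j);
        return 0;
}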

 B1, F, F1 are implemented; A is implemented but ugly (curl).
 E and E1 would require pulling in journalctl functionality.
 
  We should always require that either E or F is used, but in any
  combination with any of the others.
 I think it is useful to 

Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-19 Thread Thomas Bächler
Am 19.11.2012 01:21, schrieb Zbigniew Jędrzejewski-Szmek:
 They are parsed and stored
 into a journal file. The journal file is /var/log/journal/external-*.journal
 by default, but this can be overridden by commandline options (--output).

What about /var/log/$MACHINE_ID/, isn't it the right place for these?






Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-19 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Nov 19, 2012 at 11:14:25AM +0100, Thomas Bächler wrote:
 Am 19.11.2012 01:21, schrieb Zbigniew Jędrzejewski-Szmek:
  They are parsed and stored
  into a journal file. The journal file is /var/log/journal/external-*.journal
  by default, but this can be overridden by commandline options (--output).
 
 What about /var/log/$MACHINE_ID/, isn't it the right place for these?
Yes, I mis-wrote: actually they go into REMOTE_JOURNAL_PATH
#define REMOTE_JOURNAL_PATH "/var/log/journal/" SD_ID128_FORMAT_STR "/remote-%s.journal"
where SD_ID128_FORMAT_STR is of course $MACHINE_ID, and %s gets the variable
part dependent on the source socket.
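
(Purely as an illustration of that expansion, not code from the patch, the
path could be assembled roughly like this, with "suffix" standing in for
the per-socket part:)

#define _GNU_SOURCE
#include <stdio.h>
#include <systemd/sd-id128.h>

static char* remote_journal_path(const char *suffix) {
        sd_id128_t id;
        char *p = NULL;

        if (sd_id128_get_machine(&id) < 0)
                return NULL;

        /* e.g. /var/log/journal/<machine-id>/remote-127.0.0.1~2000.journal */
        if (asprintf(&p, "/var/log/journal/" SD_ID128_FORMAT_STR "/remote-%s.journal",
                     SD_ID128_FORMAT_VAL(id), suffix) < 0)
                return NULL;

        return p;
}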

Zbyszek


Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-19 Thread Lennart Poettering
On Mon, 19.11.12 01:21, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:

Heya,

I like your work!

 The program (called systemd-journal-remoted now, but I'd be happy to
 hear suggestions for a better name) listens on sockets (either from

Since this is also useful when run on the command line I'd really prefer
to drop the d suffix, i.e. systemd-journal-remote sounds like a good
name for it.

 socket activation, or specified on the command line with --listen=),
 or reads stdin (if given --stdin), or uses curl to receive events from
 a systemd-journal-gatewayd instance (with --url=). So it can be used
 as a server, or as a standalone binary.

What precisely does --listen= speak?

My intention was to speak only HTTP for all of this, so that we can
nicely work through firewalls.

 Messages must be in the export format. They are parsed and stored
 into a journal file. The journal file is /var/log/journal/external-*.journal
 by default, but this can be overridden by commandline options
 (--output).

Sounds good!

I think it would make sense to drop things into
/var/log/journal/hostname/*.journal by default. The hostname would
have to be determined from the URL the user specified on the command
line. Ideally we'd use the machine ID here, but since the machine ID is
hardly something the user should specify on the command line (and we
cannot just take the machine ID supplied from the other side, because we
probably should not trust that and hence allow it to tell us to
overwrite another host's data), the hostname is the next best
thing. Currently libsystemd-journald will ignore directories that are
not machine IDs when browsing, but we could easily drop that limitation.

 Push mode is not implemented... (but it would be a separate program
 anyway).

My intention was actually to keep this in the same tool. So that we'd
have for input and output:

A) HTTP GET
B) HTTP POST
C) SSH PULL (would invoke journalctl -o export via ssh)
D) SSH PUSH (would invoke systemd-journald-remote via ssh)
E) A directory for direct read access (which would allow us to merge
multiple files into one with this tool)
F) A directory for direct write access (which is of course the default)

We should always require that either E or F is used, but in any
combination with any of the others.
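
For example, the SSH pull case (C) would be roughly equivalent to running
the pipeline by hand (a sketch; the host name and the exact option syntax
are only illustrative and still under discussion in this thread):

  ssh lennart@somehost journalctl -o export | systemd-journal-remote - /tmp/dir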

 Examples:
   journalctl -o export | systemd-journal-remoted --stdin -o /tmp/dir/

Sounds pretty cool. Pretty close to what I'd have in mind.

To make this even shorter I'd suggest though that we take two normal
args for source and dest, and that - is used as stdin/stdout
respectively, and the dest can be omitted:

Hence:
journalctl -o export | systemd-journal-remote - /tmp/dir
Or:
systemd-journal-remote http://some.host:19531/entries?boot
Or:
systemd-journal-remote http://some.host:19531/entries?boot /tmp/dir
Or:
systemd-journal-remote /var/log/journal /tmp/dir

And so on...


   remote-127.0.0.1~2000.journal
   remote-multiple.journal
   remote-stdin.journal
   remote-http~~~some~host~19531~entries.journal
 
 The goal was to have names containing the port number, so that it is
 possible to run multiple instances without conflict.

I'd always try to separate the base name out of a host spec. I.e. the
actual hostname of it. So that people can swap protocols as they
wish.

For example, i'd envision that people often begin with just pulling
things via SSH, but later on end up using HTTP more frequently, and
hence this should write to the same dir in /var/log/journal by default:

systemd-journal-remote lennart@somehost
systemd-journal-remote http://somehost:19531/entries?boot

Hmm, also, thinking about it I think we should only use the base URL
for the HTTP transport, and let the /entries?boot stuff be an
implementation detail we implicitly append.

 static int spawn_curl(char* url) {
         int r;
         char argv0[] = "curl";
         char argv1[] = "-HAccept: application/vnd.fdo.journal";
         char argv2[] = "--silent";
         char argv3[] = "--show-error";
         char* argv[] = {argv0, argv1, argv2, argv3, url, NULL};
 
         r = spawn_child("curl", argv);
         if (r < 0)
                 log_error("Failed to spawn curl: %m");
         return r;
 }

My intention here was to use libneon, which is quite OK as an HTTP client
library, and includes proxy support, TLS and whatnot.

I am a bit conservative about pulling curl into this low level tool
(after all it includes a full gopher client!). I also want to be very
careful to only support HTTP, SSH and file as transports, and not any
random FTP or whatnot people might want to throw at this.

Otherwise looks pretty OK! Good work!

Lennart

-- 
Lennart Poettering - Red Hat, Inc.


Re: [systemd-devel] [RFC/PATCH] journal over the network

2012-11-19 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Nov 20, 2012 at 02:21:54AM +0100, Lennart Poettering wrote:
 On Mon, 19.11.12 01:21, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:
 
 Heya,
 
 I like your work!
Thanks :)

 
  The program (called systemd-journal-remoted now, but I'd be happy to
  hear suggestions for a better name) listens on sockets (either from
 
 Since this is also useful when run on the command line I'd really prefer
 to drop the d suffix, i.e. systemd-journal-remote sounds like a good
 name for it.
OK.

  socket activation, or specified on the command line with --listen=),
  or reads stdin (if given --stdin), or uses curl to receive events from
  a systemd-journal-gatewayd instance (with --url=). So it can be used
  as a server, or as a standalone binary.
 
 What precisely does --listen= speak?
It just reads a pure 'export' stream.

 My intention was to speak only HTTP for all of this, so that we can
 nicely work through firewalls.
Yeah, probably that's more useful than raw stream for normal purposes,
since it allows for authentication and whatnot.

  Messages must be in the export format. They are parsed and stored
  into a journal file. The journal file is /var/log/journal/external-*.journal
  by default, but this can be overridden by commandline options
  (--output).
 
 Sounds good!
 
 I think it would make sense to drop things into
 /var/log/journal/hostname/*.journal by default. The hostname would
 have to be determined from the URL the user specified on the command
 line. Ideally we'd use the machine ID here, but since the machine ID is
 hardly something the user should specify on the command line (and we
 cannot just take the machine ID supplied from the other side, because we
 probably should not trust that and hence allow it to tell us to
 overwrite another host's data), the hostname is the next best
 thing. Currently libsystemd-journald will ignore directories that are
 not machine IDs when browsing, but we could easily drop that limitation.
So it seems that this mapping (url/source/whatever - .journal path)
will require some thought.

I'd imagine that people will want to use this most often as a syslogd
replacement, i.e. launch systemd-journal-remote on a central host, and
then let all other hosts stream messages live. In this case we know
only two things: _MACHINE_ID specified remotely, and the remote
IP:PORT and thus hostname. Actually, I thought that since all those
things are unreliable (IP only to some extent, but still), they
wouldn't be used to determine the output file, and all output would go
into one .journal.

I remember that samba does (did?) something like what you suggest, and
kept separate logs based on the information under control of the
connecting host. On a host connected to the internet this would lead
to hundreds of log files.

In addition, .journal files have a fairly big overhead: ~180kB for
an empty file. This overhead might be unwanted if there are many
sources.

Maybe there's no one answer, and choices will have to be provided.

  Push mode is not implemented... (but it would be a separate program
  anyway).
 
 My intention was actually to keep this in the same tool. So that we'd
 have for input and output:
 
 A) HTTP GET
 B) HTTP POST
 C) SSH PULL (would invoke journalctl -o export via ssh)
 D) SSH PUSH (would invoke systemd-journald-remote via ssh)
 E) A directory for direct read access (which would allow us to merge
 multiple files into one with this tool)
 F) A directory for direct write access (which is of course the default)
Also useful:
B1) socket listen() without HTTP
B2) HTTPS POST (I'm assuming that POST means to listen)
E1) a specific file for read access
F1) a specific file for write access

B1, F, F1 are implemented; A is implemented but ugly (curl).
E and E1 would require pulling in journalctl functionality.

 We should always require that either E or F is used, but in any
 combination with any of the others.
I think it is useful to allow the output directory to be implicit
(e.g. /var/log/journal/hostname/remote.journal can be used).

  Examples:
journalctl -o export | systemd-journal-remoted --stdin -o /tmp/dir/
 
 Sounds pretty cool. Pretty close to what I'd have in mind.
 
 To make this even shorter I'd suggest though that we take two normal
 args for source and dest, and that - is used as stdin/stdout
 respectively, and the dest can be omitted:

It started this way during development, but I'm not so sure if it'll
always be clear what is meant:
B, B1, and B2 can also come from socket activation, thus not appearing on
the command line, but output might still be specified.
OTOH, there might be multiple sources, and the implicit output dir.
So I think that explicit --output/-o is better.
Sources as positional arguments might work, as long as they can
be distinguished.

 Hence:
 journalctl -o export | systemd-journal-remote - /tmp/dir
 Or:
 systemd-journal-remote http://some.host:19531/entries?boot
 Or:
 systemd-journal-remote 

[systemd-devel] [RFC/PATCH] journal over the network

2012-11-18 Thread Zbigniew Jędrzejewski-Szmek
Hi,

this is a stab at the remote journal logging functionality... Attached
is the body of the program, but full patch set is available under
   http://in.waw.pl/git/systemd/ journal-remoted

The program (called systemd-journal-remoted now, but I'd be happy to
hear suggestions for a better name) listens on sockets (either from
socket activation, or specified on the command line with --listen=),
or reads stdin (if given --stdin), or uses curl to receive events from
a systemd-journal-gatewayd instance (with --url=). So it can be used
as a server, or as a standalone binary.

Messages must be in the export format. They are parsed and stored
into a journal file. The journal file is /var/log/journal/external-*.journal
by default, but this can be overridden by commandline options (--output).
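
For reference, an entry in the export format is serialized as plain
FIELD=VALUE lines, with an empty line terminating each entry (fields
containing binary data use a length-prefixed variant instead). A made-up,
abridged example entry:

  __CURSOR=s=...;i=...;b=...;t=...
  __REALTIME_TIMESTAMP=1353334161980342
  __MONOTONIC_TIMESTAMP=24462510
  PRIORITY=6
  _PID=1
  MESSAGE=Hello from some.host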

Authentication and rate-limiting are not implemented...

Debugging messages are a bit excessive...

Push mode is not implemented... (but it would be a separate program
anyway).

Examples:
  journalctl -o export | systemd-journal-remoted --stdin -o /tmp/dir/
will create a copy of events, which can be browsed with
  journalctl -D /tmp/dir/

Copy messages from another host
  systemd-journal-remoted --url 'http://some.host:19531/entries?boot' -o /tmp/dir/

Copy messages from another host, live
  systemd-journal-remoted --url 'http://some.host:19531/entries?boot&follow' -o /tmp/dir/

Listen on socket:
  systemd-journal-remoted --listen 19532 -o /tmp/dir/

I think that the implementation is fairly sound, but some details
certainly can be improved. E.g. currently, file names look like
(underneath some directory):

  remote-127.0.0.1~2000.journal
  remote-multiple.journal
  remote-stdin.journal
  remote-http~~~some~host~19531~entries.journal

The goal was to have names containing the port number, so that it is
possible to run multiple instances without conflict.

Also, the memory allocation/deallocation patterns in get_line() are
fairly ugly. I'm not sure if this is significant at all.

Zbyszek
/*-*- Mode: C; c-basic-offset: 8; indent-tabs-mode: nil -*-*/

/***
  This file is part of systemd.

  Copyright 2012 Zbigniew Jędrzejewski-Szmek

  systemd is free software; you can redistribute it and/or modify it
  under the terms of the GNU Lesser General Public License as published by
  the Free Software Foundation; either version 2.1 of the License, or
  (at your option) any later version.

  systemd is distributed in the hope that it will be useful, but
  WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  Lesser General Public License for more details.

  You should have received a copy of the GNU Lesser General Public License
  along with systemd; If not, see http://www.gnu.org/licenses/.
***/

#include <errno.h>
#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/prctl.h>
#include <sys/signalfd.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <getopt.h>

#include <systemd/sd-daemon.h>

#include "journal-file.h"
#include "journald-native.h"
#include "journald-server.h"
#include "socket-util.h"
#include "mkdir.h"
#include "build.h"
#include "macro.h"

#define REMOTE_JOURNAL_PATH "/var/log/journal/" SD_ID128_FORMAT_STR "/remote-%s.journal"

static char* arg_output = NULL;
static char* arg_url = NULL;
static bool arg_stdin = false;
static char* arg_listen = NULL;
static int arg_compress = 1;
static int arg_seal = 0;

/**
 **
 **/

static int spawn_child(const char* child, char** argv) {
        int fd[2];
        pid_t parent_pid, child_pid;
        int r;

        if (pipe(fd) < 0) {
                log_error("Failed to create pager pipe: %m");
                return -errno;
        }

        parent_pid = getpid();

        child_pid = fork();
        if (child_pid < 0) {
                r = -errno;
                log_error("Failed to fork: %m");
                close_pipe(fd);
                return r;
        }

        /* In the child */
        if (child_pid == 0) {
                r = dup2(fd[1], STDOUT_FILENO);
                if (r < 0) {
                        log_error("Failed to dup pipe to stdout: %m");
                        _exit(EXIT_FAILURE);
                }

                r = close_pipe(fd);
                if (r < 0)
                        log_warning("Failed to close pipe fds: %m");

                /* Make sure the child goes away when the parent dies */
                if (prctl(PR_SET_PDEATHSIG, SIGTERM) < 0)
                        _exit(EXIT_FAILURE);

                /* Check whether our parent died before we were able
                 * to set the death signal */
                if (getppid() != parent_pid)
                        _exit(EXIT_SUCCESS);