On Tue, 23.10.12 22:02, Ciprian Dorin Craciun (ciprian.crac...@gmail.com) wrote:
> On Tue, Oct 23, 2012 at 9:40 PM, Lennart Poettering
> <lenn...@poettering.net> wrote:
> > But note that the price you pay for interleaving files on display grows
> > with the more you split things up (O(n) being n number of files to
> > interleave), hence we are a bit conservative here, we don't want to push
> > people towards splitting up things too much, unless they have a really
> > good reason to.
>
> By "interleaving" I guess you mean: when querying for logs the
> system will have to open all files and read from them at the same time
> to give the impression of a "merged" log, sorted by timestamp (or a
> similar key).

Yes, turning a number of fragments in various files into one stream of
monotonically increasing timestamps.

> > BTW, are you sure you actually need processes to split up by? Wouldn't
> > services be more appropriate?
>
> When I say "processes" I actually mean: a couple of processes
> acting together as an integral logical unit. (Like PostgreSQL, which
> has multiple processes that behave as one group.)

Yeah, on systemd that's called a service, and it is implemented as a
cgroup on the lower layers. The journal automatically indexes by
service. Try "journalctl -u avahi-daemon.service" to get all messages
from avahi, and avahi only.

> And the way I see benefiting from systemd would be creating
> containers (like LXC) for each such "process".

Our story regarding containers (i.e. where a new PID 1 in the
container is running on a host system) is that we suggest that each
container run its own journald instance and generate its own files,
but register them in the host via symlinks in /var/log/journal. See
http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface
for more info about that. That way "journalctl -m" on the host will
show you all logs from all containers, nicely interleaved.

> >> * having the clustering key as a parameter for querying to
> >> restrict index search, etc.
> >
> > Not sure I grok this.
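The interleaving discussed above, reading several journal files at once and emitting their entries as one stream sorted by monotonically increasing timestamp, is essentially a k-way merge. Below is a minimal illustrative sketch in Python; the `interleave` helper and the (timestamp, message) tuples are invented for this example and have nothing to do with journald's actual binary file format or cursor handling:

```python
import heapq

def interleave(*files):
    """Merge per-file entry lists, each already sorted by timestamp.

    Each entry is a (timestamp, message) tuple. heapq.merge keeps one
    cursor per input file and repeatedly emits the entry with the
    smallest timestamp, so the per-entry cost of display grows with
    the number of files being interleaved.
    """
    return list(heapq.merge(*files, key=lambda entry: entry[0]))

# Two hypothetical journal fragments, each internally ordered:
journal_a = [(1, "a: started"), (4, "a: stopped")]
journal_b = [(2, "b: started"), (3, "b: reloaded")]

merged = interleave(journal_a, journal_b)
# merged is one stream ordered by timestamp: 1, 2, 3, 4
```

The cost per emitted entry grows with the number of files kept open, which is the O(n) display price cited above as the reason for being conservative about splitting files up.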
> By "cluster key" I mean a special key that would direct the entry
> to one log file or another. In the "normal" case such a "cluster key"
> would be the login user name, etc. (This would also allow events from
> the same source to end up in different log files based on this "key".)
>
> In one word: a way to partition entries into multiple log files,
> by setting this special field.

As mentioned, we have SplitMode= for this, but it is strictly for UIDs
only, since we only need it for access control management, nothing
else.

Why precisely do you want to split up your log files per-service?
That's the bit I don't get.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel