Re: [systemd-devel] journald disk space usage
Thank you for the response. I was hoping the metadata would compress better, because it is almost identical between rows in my application; 99% of the rows are going to be from the same unit.

On Tue, Feb 28, 2017 at 7:56 AM, Lennart Poettering wrote:
>
> The journal generates substantially more data, simply because we
> collect a lot of implicit metadata for each log event. This data is
> usually not compressed (we only compress individually large fields,
> and usually fields are not individually large). The implicit metadata
> means we roughly collect 10x as much data and store that away.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
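The trade-off Lennart describes can be illustrated outside the journal: compressing many small, nearly identical metadata fields one at a time gains almost nothing (each tiny input carries its own compression overhead), while compressing them together as a stream lets the repetition collapse. A minimal sketch using plain zlib, which is an illustration only and not journald's actual compression path:

```python
import zlib

# Simulated implicit metadata: nearly identical fields per log entry,
# as in a log where 99% of rows come from the same unit.
# (Field values here are made up for the example.)
fields = [b"_SYSTEMD_UNIT=rails-app.service",
          b"_HOSTNAME=web-1",
          b"PRIORITY=6"] * 1000

# Per-field compression: each small input is compressed alone, so the
# cross-entry repetition is invisible and header overhead dominates.
per_field = sum(len(zlib.compress(f)) for f in fields)

# Stream compression across entries: the repetition deduplicates.
together = len(zlib.compress(b"\n".join(fields)))

print(per_field, together)
```

Running this shows the stream-compressed size coming out orders of magnitude smaller than the sum of the individually compressed fields, which is why per-field compression of small metadata does not help much.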
[systemd-devel] journald disk space usage
Hello,

I have a Rails application that produces quite a bit of log output: about 500MB per day, maybe 3-4 million lines. Currently this goes into a normal file with daily rotation.

I tried dumping this into journald via STDOUT so that I could see everything in one place. On a standard Google Cloud Platform instance, this used about 10% extra CPU. I was willing to live with that, but a bigger problem was the rapid increase in storage used for the log. It was growing at about 10x the rate of the flat file for the 2 hours I ran the experiment. That is, after 2 hours, the usage reported by 'sudo journalctl --disk-usage' was over 400MB, which is not much less than I would normally see for an entire day's worth of logging.

I am wondering if this is to be expected due to journald's extra functionality and complexity, or does this seem incorrect? I'm using systemd 229 on Ubuntu 16.04.

Thank you,
Bill Lipa
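Independently of whether the growth rate is correct, journald's disk usage can be capped so it never exceeds a budget. A hedged sketch of the relevant journald.conf(5) options; the option names are real, but the values below are illustrative, not recommendations:

```ini
# /etc/systemd/journald.conf -- illustrative values only
[Journal]
# Cap total persistent journal size on disk
SystemMaxUse=500M
# Compress large fields (this is the default)
Compress=yes
# Drop entries older than this regardless of size
MaxRetentionSec=1week
```

After editing, `sudo systemctl restart systemd-journald` applies the limits, and `sudo journalctl --vacuum-size=500M` trims already-written journals immediately.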
Re: [systemd-devel] systemd-nspawn leaves leftovers in /tmp
This might be due to trying to use systemd-nspawn -x with a raw image inside the btrfs /var/lib/machines volume. It doesn't work, in the sense that the container isn't ephemeral, but there's no error message either, and this leftover gets created. If I jump through elaborate hoops to create the container as a btrfs subvolume instead of using the pull-raw one-liner, the -x flag works as expected and there is no leftover in /tmp.

On Thu, Nov 3, 2016 at 11:54 AM, Lennart Poettering <lenn...@poettering.net> wrote:
> On Thu, 03.11.16 11:34, Bill Lipa (d...@masterleep.com) wrote:
>
>> I am using systemd-nspawn to run a short lived process in a container.
>> This is a fairly frequent operation (once every few seconds). Each
>> time systemd-nspawn runs, it leaves a temporary empty directory like
>> /tmp/nspawn-root-CPeQjR. These directories don't seem to get cleaned
>> up.
>
> Generally, temporary files like this should not be left around by
> commands that exit cleanly. If they do, then that's a bug, please file
> a bug. (But first, please retry on the two most current systemd
> versions; we only track issues with those upstream.)
[systemd-devel] systemd-nspawn leaves leftovers in /tmp
Hello,

I am using systemd-nspawn to run a short-lived process in a container. This is a fairly frequent operation (once every few seconds). Each time systemd-nspawn runs, it leaves a temporary empty directory like /tmp/nspawn-root-CPeQjR. These directories don't seem to get cleaned up. I'm using systemd 229 on Ubuntu 16.04.

The command line looks like:

sudo systemd-nspawn -axq --private-network --drop-capability=all -u user --chdir /home/user/work -M ubuntu-16.10 --bind /home/outer/work:/home/user/work

I'm a little worried that there are going to be hundreds of thousands of these directories clogging up /tmp after a few weeks. Is this expected?

Thank you!
Bill Lipa
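As a stopgap until the leak itself is fixed, the stale directories could be swept periodically from cron or a systemd timer. A minimal sketch, with the caveats that the `nspawn-root-` prefix is taken from the example name above, the one-day threshold is an arbitrary assumption, and only empty directories are touched so a container that is still starting up is not raced:

```python
import os
import time

def sweep_nspawn_leftovers(tmpdir="/tmp", prefix="nspawn-root-", max_age=86400):
    """Remove empty nspawn temp directories older than max_age seconds.

    Returns the list of paths that were removed.
    """
    removed = []
    now = time.time()
    for name in os.listdir(tmpdir):
        path = os.path.join(tmpdir, name)
        if not name.startswith(prefix) or not os.path.isdir(path):
            continue
        # Only remove directories that are both empty and sufficiently old.
        if not os.listdir(path) and now - os.path.getmtime(path) > max_age:
            os.rmdir(path)
            removed.append(path)
    return removed
```

Since `os.rmdir` refuses to delete a non-empty directory, even a race between the emptiness check and the removal fails safely rather than destroying container data.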