Hi,

Quoting Holger Levsen (2024-05-07 17:22:48)
> On Tue, May 07, 2024 at 04:24:06PM +0300, Hakan Bayındır wrote:
> > Consider a long running task, which will take days or weeks (which is the
> > norm in simulation and science domains in general). System emitted a warning
> > after three days, that it'll delete my files in three days. My job won't be
> > finished, and I'll be losing three days of work unless I catch that warning.
> Then it will be high time you learn not to abuse /tmp that way and work in
> your (or your services) home/data directory.
> 
> Problem easily avoided. plus you don't need to make /tmp 20 TB because you
> have lots of data. ;)
> 
> I'm a bit surprised how many people seem to really rely on data in /tmp to
> survive for weeks or even months. I wonder if they backup /tmp?

I like using /tmp because it's a tmpfs, which makes some things faster. There
are also quite a few things that I do not want stored long-term on my SSD, so
for those I use /tmp rather than the ~/tmp directory inside my $HOME.
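
In case anyone wants to double-check this on their own machine: the standard
findmnt tool can print the filesystem type of the /tmp mount (just a generic
illustration, the output below is simply what one would expect on a system
where /tmp is a tmpfs):

    $ findmnt -n -o FSTYPE /tmp
    tmpfs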

This is also not only about data surviving for weeks or months. Elsewhere in
this thread I mentioned mmdebstrap as an application which creates files in
/tmp with a modification time far in the past. The same happens with other
tools; for example, let's say I want a small scratch space into which I wget
some files:

    $ wget https://www.debian.org/Pics/debian-logo-1024x576.png
    $ stat -c %y debian-logo-1024x576.png
    2020-12-17 10:59:08.000000000 +0100

Will this mean that debian-logo-1024x576.png might accidentally get cleaned up
unless I disable that mechanism? The problem is not limited to people with a
crazy large /tmp either. My system has 3.7 GB of RAM, and having /tmp be a
tmpfs (even though it's very small) is still beneficial for me, because the
maximum read speed from my SSD is 140 MB/s and even my small amount of RAM is
much faster than that.
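
Coming back to the cleanup question: just to sketch what I mean (this is only
an illustration of a naive age-based cleanup using plain find, assuming the
download above happened in a scratch directory directly under /tmp; I have not
checked which timestamps the actual cleanup mechanism consults), such a
freshly downloaded file would already look years stale:

    $ find /tmp -maxdepth 1 -type f -mtime +30
    /tmp/debian-logo-1024x576.png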

Thanks!

cheers, josch
