Hakan Bayındır <ha...@bayindir.org> writes:
> Dear Russ,

>> If you are running a long-running task that produces data that you
>> care about, make a directory for it to use, whether in your home
>> directory, /opt, /srv, whatever.

> Sorry, but clusters, batch systems, and other automated systems don't
> work that way.

Yours might not, but I spent 20 years maintaining clusters and batch
systems and I assure you that's how mine worked.

> That's not an extension of the home directory in any way. After users
> submit their jobs to the cluster, they neither have access to the
> execution node, nor can they pick and choose where to put their files.

> These files may stay there up to a couple of weeks, and deleting
> everything periodically will probably corrupt the jobs of these users
> somehow.

Using /var/tmp for this purpose is not a good design decision.
Directories are free; they can make a new one and point batch job output
there.  They don't have to overload a directory that historically has
different semantics and is often periodically cleared.  I get that this
may not be your design or something you have control over, so telling you
this doesn't directly help, but the point still stands.
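
If it helps at all, the dedicated directory doesn't even require manual
setup on the execution nodes: a small tmpfiles.d drop-in can create it
at boot and exempt it from age-based cleanup.  A sketch, with the path
and file name purely illustrative:

    # /etc/tmpfiles.d/cluster-scratch.conf (hypothetical name and path)
    # Create a sticky, world-writable scratch tree at boot; "-" in the
    # age field tells systemd-tmpfiles never to clean it automatically.
    d /srv/scratch 1777 root root -

Batch jobs can then be pointed at something like /srv/scratch/$USER (or
a per-job subdirectory) instead of /var/tmp.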

Again, obviously the people configuring that cluster can configure it
however they want, including overriding the /var/tmp cleanup policy.  But
they're playing with fire by training users to use /var/tmp, and it's
going to result in someone getting their data deleted at some point,
regardless of what Debian does.
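
For what it's worth, the override itself is a one-file affair: a
tmp.conf in /etc/tmpfiles.d shadows the packaged one in
/usr/lib/tmpfiles.d by name.  A sketch, assuming the usual upstream
defaults (10 days for /tmp, 30 days for /var/tmp; check the stock
tmp.conf on your system):

    # /etc/tmpfiles.d/tmp.conf -- takes precedence over the copy in
    # /usr/lib/tmpfiles.d because the file name matches.
    # Keep the stock /tmp behavior.
    q /tmp 1777 root root 10d
    # Disable age-based cleanup of /var/tmp ("-" means never clean).
    q /var/tmp 1777 root root -

That said, the override only protects that one cluster; it does nothing
for users who carry the /var/tmp habit somewhere else.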

-- 
Russ Allbery (r...@debian.org)              <https://www.eyrie.org/~eagle/>
