On Sat, Feb 6, 2010 at 5:22 PM, Frank Middleton
<f.middleton@apogeect.com> wrote:
> On 02/ 5/10 05:16 PM, Dave Miner wrote:
>
>>> On 02/ 5/10 03:05 AM, sanjay nadkarni (Laptop) wrote:
>>>>
>>>> The following directories will be symlinked to datasets and
>>>> compression (lzjb) will be enabled on those directories.
>>>> /var/cores -> /var/shared/cores
>>>> /var/spool -> /var/shared/spool
>>>> /var/tmp -> /var/shared/tmp
>
>> What applications are you aware of that benefit from /var/tmp being a
>> tmpfs?
>
> One example is a code generator that writes a huge number of temporary
> files to /var/tmp. With UFS, remapping /var/tmp resulted in a hundred-
> fold improvement in performance.

/tmp is already tmpfs.  Would setting TMPDIR=/tmp in the program's
environment yield similar results?
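
For programs that consult TMPDIR when choosing a scratch directory
(e.g., via tempnam(3C)), that's a one-line change.  A sketch, using a
hypothetical program name:

# TMPDIR=/tmp ./codegen

Programs that hard-code /var/tmp won't notice, of course.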

> Perhaps ZFS would never actually write
> these files, so maybe this wouldn't be an issue with ZFS though. From
> what I've seen, /var/tmp typically has a lot of small files in it anyway,
> many of them zero length.

And at the other end of the spectrum are huge files that have been
sitting there for years with no one ever cleaning them up.  I think
there is a compelling case for making /var/tmp its own file system
(rpool/var/tmp) with a quota on it (to limit overall size) or perhaps
user quotas (to discourage irresponsible use).  Those who argue for a
separate /var to get noexec,nodevices may prefer to set those
properties on /var/tmp and not worry so much about a separate /var;
a sketch follows.
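
Something along these lines, as a sketch (the quota values are
illustrative, and mountpoint handling across boot environments is
glossed over):

# zfs create -o quota=4g -o exec=off -o devices=off rpool/var/tmp
# zfs set userquota@someuser=512m rpool/var/tmp

exec=off and devices=off are the ZFS equivalents of the noexec and
nodevices mount options, and userquota@<user> caps each user
individually.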

> Would compressing /var/tmp result in any meaningful
> savings in space vs. the overhead of trying to compress it? I guess I'm just
> skeptical that there's much benefit to compressing these particular three
> directories. Are application core files particularly amenable to ZFS style
> compression (gzip on a random core does seem to be able to get to 25%)?

Kernel crash dumps stored in a dump device are compressed.  Last time
I looked, these crash dumps had compression ratios in the 3.5x to 7x
range.

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg17235.html

From an application perspective, I used gcore to get a core file from
a running acroread and copied it into different zfs file systems.

# mkfile -n 500m /var/tmp/500m
# zpool create junk /var/tmp/500m
# zfs create junk/uncompressed
# zfs create -o compression=on junk/compressed
# zfs create -o compression=gzip junk/gzip
# cd /junk/uncompressed
# gcore 4587
# cp core.4587 ../compressed
# cp core.4587 ../gzip

# du -h  /junk/*/core.4587
 137M   /junk/compressed/core.4587
  81M   /junk/gzip/core.4587
 218M   /junk/uncompressed/core.4587

# zfs get compressratio junk/compressed junk/gzip
NAME             PROPERTY       VALUE  SOURCE
junk/compressed  compressratio  1.56x  -
junk/gzip        compressratio  2.62x  -

If files are zero length (or, presumably, smaller than the smallest
available block size), no cycles will be wasted on compression.  I've
seen very few machines these days that are short on CPU cycles.  When
tens to hundreds of virtual machines (or zones) are all on the same
machine, disk space can still be tight, which bolsters the case for
compression.  A desktop with a terabyte hard drive dedicated to
OpenSolaris will see no real space benefit from compression.
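
To convince yourself of the zero-length case, reusing the junk pool
from the example above (the dataset name is illustrative):

# zfs create -o compression=gzip junk/empties
# for i in 1 2 3 4 5; do touch /junk/empties/file.$i; done
# du -h /junk/empties

An empty file has no data blocks, so there is nothing for the
compression code to operate on; du should show only the trivial space
consumed by the directory itself.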

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
