Rob van der Heij writes:
> On Tue, Mar 18, 2008 at 5:18 AM, Mark Post <[EMAIL PROTECTED]> wrote:
> The "normal" usage of Linux in /tmp is pretty limited, so I don't
> think I'd be scared about a few MBs there. But since those files
> probably remain in page cache while you need them, you do not win
> anything there.

Others have discussed many aspects of /tmp configuration in this
thread, but I'll just point out that there is a much bigger win
with tmpfs beyond "data stays in page cache", one that would apply
even with /tmp on a normal filesystem on a fast block device. Linux
internally models the whole filesystem hierarchy (directories,
sub-directories, files, etc.) in its VFS layer and caches it in the
dentry cache (the "dcache").
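As a rough illustration (not from my test setup, and assuming you
have access to /proc/slabinfo), you can watch the dcache grow while
walking a directory tree; the slab is named "dentry" on recent
kernels and "dentry_cache" on older ones:

    grep dentry /proc/slabinfo    # dentry count before
    find /usr > /dev/null         # walk a big tree
    grep dentry /proc/slabinfo    # count afterwards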

A normal filesystem has to take those internal structures and
record them into blocks (and read them back from blocks) so that
the block layer can do the I/O. Directory contents have to be
squeezed into a format that can be written out as metadata blocks
on the block device, and the same goes for inode data such as last
access times. tmpfs doesn't have to do any of that at all, since
it's just a thin layer over the VFS and the memory-management code.
That reduces the path length for filesystem operations from
    file op -> VFS -> fs -> block layer -> device driver (e.g. DIAG)
to
    file op -> VFS+mm
That's a particularly big win for metadata-intensive operations.
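
For anyone who wants to try it, here's a minimal sketch of putting
/tmp on tmpfs (the size cap and mode are only example values; pick
a cap that suits your guest's memory):

    # one-off mount over the existing /tmp
    mount -t tmpfs -o size=64m,mode=1777 tmpfs /tmp

    # or the equivalent /etc/fstab entry
    tmpfs  /tmp  tmpfs  size=64m,mode=1777  0  0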

A surprising number of applications do indeed use /tmp (often
creating a file and immediately unlinking it, so you may not see
them around much), and I think there are metadata-heavy ones too,
although my experience with those is out of date. Such apps do
things like extracting tar files to /tmp and then walking and
reading through the results.
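
The create-then-unlink idiom looks roughly like this in shell (the
filename is purely illustrative); the data stays alive, invisibly,
until the last open descriptor is closed:

    tmp=/tmp/scratch.$$
    exec 3>"$tmp" 4<"$tmp"   # fd 3 for writing, fd 4 for reading
    rm -f "$tmp"             # name is gone; data survives via the fds
    echo "scratch data" >&3  # still writable...
    cat <&4                  # ...and readable
    exec 3>&- 4<&-           # closing the fds frees the space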

I've just tried out an example: a script which untars a tar file
containing ~4000 files of about 10KB each (I used a tar of /etc)
into a directory and then does rm -rf on it. The only system I can
run the test on at the moment is dreadful for proper measurement (a
tiny SLES10 SP1 guest under z/VM 4.4, itself a capped guest under
z/VM 5.x, hence no DIAG either, so there's dasd_fba driver overhead
that wouldn't otherwise be present). Running a couple of those test
scripts in parallel, tmpfs comes out twice as fast as ext2 (mounted
noatime) on VDISK (FBA, not DIAG) with the CPU pegged at its cap of
~30%, but that system setup is so unusual the result probably isn't
very useful. An internal throughput test of a similar nature on a
proper system would be interesting.
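
A minimal sketch of the kind of test script I mean (the paths are
illustrative, not my exact script); run one copy against a tmpfs
mount and one against the ext2-on-VDISK mount to compare:

    #!/bin/sh
    # untar-and-delete test; point TARGET at the filesystem under test
    TARBALL=/root/etc.tar     # ~4000 files of ~10KB each, e.g. a tar of /etc
    TARGET=${1:-/tmp}
    WORKDIR=$TARGET/untar-test.$$
    mkdir -p "$WORKDIR"
    time tar -xf "$TARBALL" -C "$WORKDIR"
    time rm -rf "$WORKDIR"

Running a few copies of that in parallel keeps the guest busy
enough to show the difference.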

(Trying a different mail setup; let's see if it works.)

--Malcolm

--
Malcolm Beattie
System z SWG/STG, Europe
IBM UK
