On Wed, 05 Sep 2012 12:54:51 -0500, Dale wrote:

> >>>>> I might also add, I see no speed improvements in putting portages
> >>>>> work directory on tmpfs.  I have tested this a few times and the
> >>>>> difference in compile times is just not there.  
> >>>> Probably because with 16GB everything stays cached anyway.  
> >>> I cleared the cache between the compiles.  This is the command I
> >>> use:
> >>>
> >>> echo 3 > /proc/sys/vm/drop_caches  
> >> But you are still using the RAM as disk cache during the emerge;
> >> with so much RAM available for caching, the data doesn't stay
> >> around long enough to be written to disk.  
> > Indeed. Try setting the mount to write-through to see the difference.

> When I run that command, it clears all the cache.  It is the same as if
> I rebooted.  Certainly you are not thinking that cache survives a
> reboot?

You clear the cache between the two emerge runs, not during them.

> If you are talking about RAM on the drive itself, well, when it is on
> tmpfs, it is not on the drive to be cached.  That's the whole point of
> tmpfs: to get the slow drive out of the way.  By the way, others have
> run tests with the same results.  It just doesn't speed up anything
> since drives are so much faster nowadays.

Drives are still orders of magnitude slower than RAM; that's why using
swap is so slow. What appears to be happening here is that because the
files are written and then read again in short succession, they are
still in the kernel's disk cache, so the speed of the disk is
irrelevant. Bear in mind that tmpfs is basically a cached disk without
the disk, so you are effectively comparing the same thing twice.


-- 
Neil Bothwick

Theory is when you know everything, but nothing works.
Reality is when everything works, but you don't know why.
However, usually theory and reality are mixed together :
Nothing works, and nobody knows why not.
