It's true that there are lots of failed attempts to open files, which is 
normal for any system where multiple directories can contain any given 
file:

[vbraun@volker-desktop ~]$ echo quit | strace -f sage |& grep ENOENT | wc
  24322  298143 3532797
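
For context, the misses come from the search itself: for each candidate
name, Python's import machinery probes every directory on sys.path, and
every probe of a directory that doesn't hold the file is one ENOENT. A
minimal sketch of that behavior (simplified to .py files only; the module
name "foo" is just a placeholder):

import os
import sys

def probe(module_name):
    # Walk sys.path the way the importer roughly does, counting misses.
    misses = 0
    for directory in sys.path:
        candidate = os.path.join(directory or ".", module_name + ".py")
        if os.path.exists(candidate):
            return candidate, misses
        misses += 1  # each miss shows up as one ENOENT under strace
    return None, misses

print(probe("foo"))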

But really, that just means the directory contents get read into the 
filesystem cache once, and from then on the queries are answered from 
memory. Stat'ing ~25k files is pretty much instantaneous:

[vbraun@volker-desktop sage]$ time find | wc
  24238   24238 1510223

real 0m0.059s
user 0m0.050s
sys 0m0.035s
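
The same point can be made from Python directly; here is a quick sketch
that stats every file under the current directory with a warm cache (run
it from the Sage tree, or any tree of comparable size):

import os
import time

t0 = time.time()
count = 0
for root, dirs, files in os.walk("."):
    for name in files:
        os.stat(os.path.join(root, name))  # answered from the FS cache
        count += 1
print("%d stats in %.3f seconds" % (count, time.time() - t0))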

I'm pretty sure that we won't be able to beat the FS cache with a Python 
cache.

This might be different if the cache is cold and the system uses mechanical 
hard drives, but then it'll still take a long time to read the ~2k files in 
the Sage library:

[vbraun@volker-desktop sage]$ echo quit | strace -f sage |& grep '.py"' | 
grep -v ENOENT | wc
   2097   12884  261674
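
Some back-of-the-envelope arithmetic for the cold-cache case (the 8 ms
figure is an assumed average seek plus rotational latency for a consumer
mechanical drive, not a measured value):

n_files = 2097   # .py files actually opened, per the strace count above
seek_ms = 8.0    # assumed per-file seek cost on a cold mechanical drive
print("%.1f seconds just seeking" % (n_files * seek_ms / 1000))  # ~16.8 s

So on a cold spinning disk the raw I/O dominates startup, and caching the 
directory listings in Python wouldn't change that.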

