Date: Mon, 15 Jun 2020 22:34:01 +0200
From: Joerg Sonnenberger <jo...@bec.de>
Message-ID: <20200615203401.ga91...@bec.de>
| > Running it under ktrace(1) shows it doing a stat(2) for every metadata
| > file in the tree. The machine sounds like it is hitting the disk for
| > every one. Is there any kind of cache for the attribute information
| > that stat needs ?

There is, but like all caches, it only works for the 2nd and later
references; the first time through, nothing is cached.

| Raise kern.maxvnodes?

Unless it happens to be very small, I doubt that will help, or not
much. I'd expect that generally cvs is roaming the tree, updating
files, and moving on (largely - the prune step later is a bit of a
redo).

It may be that there's nothing that can really be done. It would be
possible to pre-load the cache (find topdir -size 0 -print >/dev/null,
sketched at the end of this message), but whatever time might be saved
in the later cvs run is likely more than consumed by the find, which
is also going to need to hit the disc for most inodes.

The vnode cache caches only vnodes that are in use, not others in the
same disk block, but the buffer cache should be able to retain those
blocks so that a later reference to one of the other inodes that was
already read from the drive needn't cause a read again - provided that
the buffer cache is big enough. So if there's anything worth trying
to alter, I'd have expected it to be

	vm.bufcache
	vm.bufmem_lowater
	vm.bufmem_hiwater

to try to make sure that all those inode-containing blocks are still
available when needed (a sysctl sketch follows below). ffs clusters
inode numbers in a directory, when it can, precisely so that this kind
of buffer caching will be more effective, so it isn't (or shouldn't be)
necessary to keep everything cached for the duration of the update.

kre
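
For anyone who wants to try the kern.maxvnodes suggestion anyway,
checking and raising it is a one-liner; a minimal sketch - the value
shown is purely illustrative, not a recommendation:

    # show the current cap on cached vnodes
    sysctl kern.maxvnodes

    # raise it (needs root); 500000 is just an example value,
    # size it to the machine's RAM
    sysctl -w kern.maxvnodes=500000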
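
A minimal sketch of the pre-load idea, assuming "topdir" stands for
the top of the tree being updated and that a plain "cvs update" is
the operation in question; wrapping both in time(1) shows whether the
find costs more than it saves:

    # the -size predicate forces find to stat(2) every entry, pulling
    # each inode (and its neighbours in the same disk block) into the
    # caches; the listing itself is discarded
    time find topdir -size 0 -print > /dev/null

    # run the update while the caches are still warm
    time cvs update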
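
And a sketch of inspecting and tuning the buffer cache knobs named
above. On NetBSD vm.bufcache is a percentage of physical memory, and
vm.bufmem (current usage) is worth watching while the update runs;
the 20 below is only an example:

    # current limits and usage
    sysctl vm.bufcache vm.bufmem_lowater vm.bufmem_hiwater vm.bufmem

    # allow the buffer cache up to 20% of physical memory (needs
    # root); 20 is an illustrative value, not a recommendation
    sysctl -w vm.bufcache=20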