On Thu, Jan 12, 2012 at 7:23 PM, Stephen Montgomery-Smith <step...@missouri.edu> wrote:
> On 01/12/2012 09:11 PM, Garrett Cooper wrote:
>
>> +1. And it's faster yet when you can run parallel copies of rm on
>> different portions of the directory tree (e.g. xargs, find [..] -exec)
>> as rm is O(n).
>
> I have always wondered about that! I thought that the main bottleneck in
> "rm -r" might be deleting directories which are not in the disk cache,
> which then have to be read in from the disk. Copying two different parts
> of the disk into cache has to be done one at a time whether the jobs
> asking for the copies run concurrently or consecutively.
>
> And perhaps two instances of "rm -r" acting on different parts of the hard
> drive will cause disk thrashing, so that they might even slow each other
> down.
Not really. The problem is limited by the fact that rm(1) needs to dive
into the kernel each time it issues an unlink(2) syscall. Per Prof.
McKusick's teaching and the way things are designed from libc down to the
disk, the path is roughly: rm -> syscall -> top half of the kernel -> vfs
-> filesystem (in my case ZFS) -> geom -> scsi layer -> storage
controller/disk driver. This gets expensive as n grows because the path is
long, so the best way to amortize the cost is to run more instances in
parallel, as there should be a relatively low chance of lock contention
(assuming you're not hitting too many Giant-locked code paths, you don't
overprovision your system, etc.). See the loop in
.../bin/rm/rm.c:rm_tree(char **) if you're curious where things have
issues.

> But this is all guess work on my part.
>
> If I am wrong, and "rm -r" does work faster when working in parallel on
> different parts,

Yes. Fewer modifications to directory entries -> less vfs locking
contention and less needless writing back to disk -> better.

> then why doesn't someone write the "rm" command to fork
> copies of itself that work on different parts of large trees?

Perhaps. The point is that my systems can do more work in parallel with a
larger number of rm's, improving throughput. I learned this the hard way
when I started deleting a directory prepopulated by pjd's fstest: pruning
the directory entries the serial, O(n) way took hours to get through only
a small fraction (<10%) of the 4 mil. or 40 mil. files. When I did things
in parallel (8 regular cores, 8 SMT cores), the whole run took less than
an hour to complete.

Thanks,
-Garrett

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
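For reference, the parallel approach discussed in the thread can be
sketched as something like the following; the /tmp/bigtree path, the
directory layout, and the -P 8 job count are made-up illustrative values,
not from the mail:

```shell
# Sketch: split a tree at the top level and hand each subdirectory to
# its own rm(1), running several of them at once. The tree below is a
# tiny stand-in for a large prepopulated directory.
TREE=/tmp/bigtree
mkdir -p "$TREE"/dir1/sub "$TREE"/dir2/sub

# One rm per top-level subdirectory, up to 8 jobs in flight (-P 8).
# -print0/-0 keeps odd filenames safe.
find "$TREE" -mindepth 1 -maxdepth 1 -type d -print0 |
    xargs -0 -n 1 -P 8 rm -rf
```

Whether this actually wins depends on the filesystem and the disk, as
discussed above; on a single spindle the seeks may dominate.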