> Eric Anderson wrote:
>
> > It could possibly be bad if you have a real file (say a 10GB file,
> > partially filled with zeros - a disk image created with dd for
> > instance), and you use cp with something like -spR to recursively
> > copy all files. Your destination disk image would then be a sparse
> > file, so

Incidentally, this is exactly why I've needed it - I like to create disk
images for virtual machines as sparse files, when I know they won't be
much filled, but need the "virtual" space :)
[Basic idea from the Plan 9 guys] Rather than modify every tool for
this, maybe the OS should avoid writing a block of zeroes? The idea is
to check whether the first and last words of the block are 0. If so,
scan the whole block (you can avoid the otherwise-necessary bounds
check in the scan loop by temporarily unzeroing the last word). If the
block is all zeroes and there is no existing allocation for the range
being written, just advance the current offset (essentially an lseek).
Otherwise, proceed as a normal write.

This test is very cheap in practice, and if you can avoid even one
write in ten thousand this way you will likely see overall savings. Of
course, you still need rsync, but it helps all local copying, which is
the common case.

This being an optimization, you don't need to implement a complete
solution. For instance, writing 1 zero byte in one call and then 4095
zero bytes in another may defeat the optimization.

_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
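For the curious, here is a minimal user-space sketch of the zero-block
test described above (function and variable names are mine, not from any
actual kernel code). It shows the sentinel trick: once the first and
last words are known to be zero, the last word is temporarily set
nonzero so the scan loop needs no per-iteration bounds check.

```c
#include <stddef.h>
#include <stdint.h>

/* Return 1 if the block of `nwords` machine words is all zeroes.
 * Precondition: nwords >= 2.  The buffer is briefly modified and
 * then restored, so it must be writable (fine inside a write path
 * that owns the buffer, but note it is not safe on shared memory). */
static int
block_is_zero(uintptr_t *buf, size_t nwords)
{
	uintptr_t *p = buf;
	uintptr_t *last = buf + nwords - 1;

	/* Cheap early rejection: most nonzero blocks fail here. */
	if (*buf != 0 || *last != 0)
		return 0;

	*last = 1;		/* sentinel: scan is guaranteed to stop */
	while (*p == 0)
		p++;
	*last = 0;		/* restore the zero we overwrote */

	/* If the scan stopped only at the sentinel, all words were 0. */
	return p == last;
}
```

In the write path sketched in the post, a true result (combined with a
check that no block is already allocated for the range) would turn the
write into a simple advance of the file offset.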