On Mon, Apr 9, 2012 at 5:06 AM, Stroller <strol...@stellar.eclipse.co.uk> wrote:
>
> On 8 April 2012, at 19:21, Canek Peláez Valdés wrote:
>> …
>> And (optionally) convert all the files and directories to use extents:
>>
>> find <directory> -xdev -type f -print0 | xargs -0 chattr +e
>> find <directory> -xdev -type d -print0 | xargs -0 chattr +e
>
> Ok, so I was just casually reading the chattr manpage, following this post…
>
>     The letters `acdeijstuADST' select the new attributes for the files:
>     append only (a), compressed (c), …
>
>     A file with the `c' attribute set is automatically compressed on the
>     disk by the kernel. A read from this file returns uncompressed data.
>     A write to this file compresses data before storing them on the disk.
>
> COMPRESSED?!?!
>
> You mean, all I need to do is `touch new.dd.img && chattr +c new.dd.img &&
> dd if=/dev/sdX of=new.dd.img` and I never again need to worry about piping
> dd through bzip and bunzip?
>
> If I have a massive great big uncompressed dd image, I can compress it as
> simply as touching a new file, changing this attribute on the new file, and
> copying it over?
>
> Is there a reason I've been unaware of this? Why isn't this hugely popular?
From the same man page:

    BUGS AND LIMITATIONS
    The `c', `s', and `u' attributes are not honored by the ext2 and ext3
    filesystems as implemented in the current mainline Linux kernels.
    These attributes may be implemented in future versions of the ext2
    and ext3 filesystems.

This means ext4 is mandatory if you want to use it, and that (usually) means
GRUB2, which is still considered beta.

Also, I don't see any mention of the compression algorithm used, which will
probably mean gzip or bzip2; I really don't think they use something like xz,
for example, although maybe it's possible. Furthermore, it has to have some
performance hit, especially on large files; and lastly, with the current hard
drive size/price ratio, automatically compressing/decompressing files is not
really that necessary (remember DoubleSpace, from the DOS days?).

Oh, and I'm not sure about the following, but a *lot* of Unix programs use
the same trick to write files atomically: they write the new version to a
new file, and if that succeeds, they move the new file over the old one. In
this case, I don't know how the new file can be created with the 'c'
attribute (unless you set all your files to use it, and then the performance
hit will surely cost you).

Anyhow, it's an incredibly cool feature; I just don't know how useful it is
given the size of modern disks.

Regards.
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México
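The write-then-rename trick mentioned above can be sketched in a few lines of
shell. This is only an illustration (the file names are made up, not from the
thread); the key point is that rename(2) replaces the old file atomically, and
that the replacement is a brand-new inode, so attributes set with `chattr` on
the old file do not carry over:

```shell
# Sketch of the atomic write-then-rename pattern (illustrative names).
set -e

f=/tmp/atomic-demo.txt
printf 'old contents\n' > "$f"        # the existing file

# Write the COMPLETE new version to a temporary file in the same
# directory, so the final rename stays within one filesystem.
tmp="$f.tmp.$$"
printf 'new contents\n' > "$tmp"

# mv uses rename(2) here: readers see either the old version or the
# new one, never a half-written file.
mv "$tmp" "$f"

cat "$f"    # -> new contents
```

Because the rename installs a freshly created inode, any 'c' attribute on the
old file is simply gone after the update, which is exactly the problem noted
above.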