On 9 April 2012, at 11:23, Canek Peláez Valdés wrote:
> … 
>       The `c', 's',  and `u' attributes are not honored by the ext2
> and ext3 filesystems as implemented in the current mainline Linux
> kernels. … 
> 
> This means ext4 mandatory if you want to use it, and this (usually)
> means GRUB2, which is still considered beta.

# eix -Ic grub
[I] sys-boot/grub (0.97-r10@03/07/12): GNU GRUB boot loader
# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
rootfs         rootfs    228G  5.8G  211G   3% /
/dev/root      ext4      228G  5.8G  211G   3% /
devtmpfs       devtmpfs  875M  212K  875M   1% /dev
rc-svcdir      tmpfs     1.0M   60K  964K   6% /lib64/rc/init.d
cgroup_root    tmpfs      10M     0   10M   0% /sys/fs/cgroup
shm            tmpfs     876M     0  876M   0% /dev/shm
# 

> Also, I don't see
> anywhere any mention on the compress algorithm used, which will
> probably mean bzip or gzip; I really don't think they use something
> like xz, for example, although maybe it's possible.

I was guessing LZMA - I don't believe it's the highest-compression algorithm 
out there, but it seems to have been around for a while. I'm pretty sure I've 
seen it in menuconfig's kernel options somewhere.
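
Something like this ought to confirm it, I think - assuming the running kernel 
exposes its config (CONFIG_IKCONFIG_PROC), or else grepping the build config 
in the source tree:

# zgrep -i lzma /proc/config.gz
# grep -i lzma /usr/src/linux/.config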

> Even more, it has
> to have some performance hit, specially on large files;

Sure, but that applies to all file compression.

> and lastly,
> with the current harddrive size/price ratio, the option of
> automatically compress/decompress files is not really that necessary
> (remember DoubleSpace, from DOS days?).

Yeah, in my case I've got about 1TB consumed by dd disk images which I've had 
to copy and unpack so that I can mount them as loopback devices. 

These images are of the "installed Gentoo, got it booting, zeroed over the free 
space" and factory-installed-Linpus varieties - i.e. 5GB to 20GB when 
compressed. The files are only so huge because the system ships from the 
factory with a big, mostly empty 250GB or 500GB hard drive, and I'm just 
dd'ing the whole thing - who cares about more elegant methods when you're 
bzipping the images anyway and can just leave the copy running overnight? So 
this is a live issue for me right now. 
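
For the record, the routine is roughly as follows - device names, paths and 
the partition offset are only examples (the real offset comes from `fdisk -l` 
against the image):

# dd if=/dev/sdb bs=4M | bzip2 -9 > /store/acer-factory.img.bz2

runs overnight to capture and compress the whole drive, and then to use one of 
these again I have to unpack the lot and loop-mount the partition at its byte 
offset:

# bzcat /store/acer-factory.img.bz2 > /store/acer-factory.img
# mount -o loop,offset=$((63*512)) /store/acer-factory.img /mnt/image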

> Oh, and I'm not sure about the following, but a *lot* of Unix programs
> use the same trick to write atomically files: They write the new
> version to a new file, and if that is successful, it moves the new
> file over the old one. In this case, I don't know how the new file can
> be created with the 'c' attribute (unless you set all the files to use
> it, and then for sure the performance hit will cost you).
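
That's a fair point - the usual dance being, roughly (program and file names 
made up purely for illustration):

# some-program > /etc/precious.conf.tmp.$$
# mv /etc/precious.conf.tmp.$$ /etc/precious.conf

i.e. write the new version to a scratch file and only rename it over the old 
one once that has succeeded. Since the rename leaves the name pointing at a 
brand-new inode, I'd guess a `c' attribute chattr'd onto the original file 
wouldn't survive the swap anyway.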

Yeah, I got the notion from `man chattr` ("with the `c' attribute set … A read 
from this file returns uncompressed data.  A write to this file compresses data 
before storing them on the disk.") that one might be able to initiate the dd 
copying process to a new file (which would write uncompressed data), then, in a 
new terminal, immediately set the compression attribute on that file, and that 
the *remainder* of the file would be compressed.
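
I.e. something along these lines - paths are just examples, and whether the 
kernel actually honours the attribute is of course the open question. In one 
terminal, start the copy into a fresh file:

# dd if=/dev/sdb of=/store/factory.img bs=4M

then in a second terminal, straight away:

# chattr +c /store/factory.img
# lsattr /store/factory.img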

I was holding that speculation back in the hope of a reply from someone who 
has actually used this feature.

Stroller.

