Hello Darren,

Thursday, March 29, 2007, 12:01:21 AM, you wrote:

DRSC> Adam,

DRSC> With the blog entry[1] you've made about gzip for ZFS, it raises
DRSC> a couple of questions...

DRSC> 1) It would appear that a ZFS filesystem can support files of
DRSC>    varying compression algorithm.  If a file is compressed using
DRSC>    method A but method B is now active, if I truncate the file
DRSC>    and rewrite it, is A or B used?

All new blocks will be written using B.
This also means that some blocks belonging to the same file can be
compressed with method A and some with method B (and blocks where
compression gained less than 12.5% won't be compressed at all - unless
that threshold was changed in the code).
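That threshold can be sketched like this (a sketch only, not the actual ZFS source; the function name is made up - the real code simply refuses the compressed result if it doesn't fit in seven-eighths of the original buffer):

```c
#include <stddef.h>

/* Sketch, not the real ZFS code: a block is stored compressed only
 * when compression saves at least 12.5%, i.e. the compressed length
 * fits in s_len - (s_len >> 3), which is 87.5% of the original.
 * Blocks that fail the test are written uncompressed, which also
 * saves CPU when they are read back later. */
int keep_compressed(size_t s_len, size_t c_len)
{
    size_t max_len = s_len - (s_len >> 3);  /* 87.5% of original */
    return c_len <= max_len;
}
```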


DRSC> 2) The question of whether or not to use bzip2 was raised in
DRSC>    the comment section of your blog.  How easy would it be to
DRSC>    implement a plugable (or more generic) interface between
DRSC>    ZFS and the compression algorithms it uses such that I
DRSC>    can modload a bzip2 compression LKM and tell ZFS to
DRSC>    use that?  I suspect that doing this will take extra work
DRSC>    from the Solaris side of things too...

LKM - Linux Kernel Module? :))))))

Anyway - the first problem is to find in-kernel compress/decompress
algorithms, or to port user-land implementations to the kernel. Gzip
was easier as it was already there. So if you have an in-kernel bzip2
implementation, better yet one working on Solaris, then adding bzip2
to ZFS would be quite easy.

Last time I looked at the ZFS compression code it wasn't dynamically
extensible - the available compression algorithms have to be compiled in.
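The idea looks roughly like this (names and bodies here are made up for illustration, not the actual ZFS symbols): the algorithms sit in a static table compiled into the module, so adding bzip2 means adding a row and rebuilding, not loading something at runtime.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of a compiled-in algorithm table. */
typedef size_t (*compress_fn)(const void *src, void *dst, size_t len);

struct comp_alg {
    const char  *name;
    compress_fn  compress;
    compress_fn  decompress;
};

/* Dummy pass-through so the sketch is self-contained; the real
 * entries would point at actual lzjb/gzip routines. */
static size_t copy_fn(const void *src, void *dst, size_t len)
{
    memcpy(dst, src, len);
    return len;
}

static const struct comp_alg comp_table[] = {
    { "off",  NULL,    NULL    },
    { "lzjb", copy_fn, copy_fn },   /* placeholder bodies */
    { "gzip", copy_fn, copy_fn },   /* placeholder bodies */
};

/* Look up an algorithm by name; NULL means "not compiled in". */
const struct comp_alg *find_alg(const char *name)
{
    for (size_t i = 0; i < sizeof(comp_table) / sizeof(comp_table[0]); i++)
        if (strcmp(comp_table[i].name, name) == 0)
            return &comp_table[i];
    return NULL;
}
```

A pool compressed with an algorithm missing from that table is exactly the import problem described below: the data is there, but nothing in the kernel can decompress it.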

Now, while a dynamically pluggable implementation sounds appealing, I
doubt people would actually create any such modules in practice. Not
to mention problems like: you export a pool, import it on another host
without your module, and suddenly you can't access your data.


DRSC> 3) Given (1), are there any thoughts about being able to specify
DRSC>    different compression algorithms for different directories
DRSC>    (or files) on a ZFS filesystem?

There was a small discussion here some time ago about such
possibilities, but I doubt anything was actually done about it.

Apart from that 12.5% threshold, which can save CPU on decompression
for poorly compressible data (as such data won't actually be stored
compressed), I'm afraid there's nothing more.

It was suggested here that perhaps ZFS could turn compression off for
specific file types, determined either by file name extension or by
magic number - that would probably save some CPU on some workloads.
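That suggestion might look something like this (purely hypothetical - nothing like it exists in ZFS; the extension list and function name are made up). A magic-number variant would inspect the first bytes of the file instead of the name:

```c
#include <string.h>

/* Hypothetical heuristic: don't bother compressing file types that
 * are already compressed, decided by file name extension. */
static const char *skip_ext[] = { ".gz", ".bz2", ".zip", ".jpg", ".mp3" };

int worth_compressing(const char *name)
{
    const char *dot = strrchr(name, '.');
    if (dot == NULL)
        return 1;  /* no extension: try compressing */
    for (size_t i = 0; i < sizeof(skip_ext) / sizeof(skip_ext[0]); i++)
        if (strcmp(dot, skip_ext[i]) == 0)
            return 0;  /* already-compressed format: skip */
    return 1;
}
```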



-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
