-       A public interface to get the property state

That would come from libzfs.  There are private interfaces just now,
zfs_prop_get()/zfs_prop_set(), that are very likely what you need. They
aren't documented or public though and are subject to change at any time.

Mmm, as the state of the compression flag may seriously affect media
consumption, this seems to be an important part of the metadata in the case
of a backup. Is there no way to define an interface that will just evolve
without becoming incompatible?

Compression doesn't really impact tools consuming the POSIX layer interfaces, as you can see from the fields of struct stat. The POSIX layer (in fact even 'zfs send') *always* sees the uncompressed data.

For example:

The ZFS filesystem /one has compression=on

stat /one/hamlet.txt /one/off/hamlet.txt
  File: `/one/hamlet.txt'
  Size: 211179          Blocks: 310        IO Block: 131072 regular file
Device: 2d900a9h/47775913d      Inode: 5           Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2010-03-25 09:25:25.456468675 +0000
Modify: 2010-03-25 09:25:25.489588537 +0000
Change: 2010-03-25 09:25:25.489588537 +0000

The ZFS filesystem /one/off has compression=off

  File: `/one/off/hamlet.txt'
  Size: 211179          Blocks: 517        IO Block: 131072 regular file
Device: 2d900aah/47775914d      Inode: 5           Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2010-03-25 09:25:45.946130734 +0000
Modify: 2010-03-25 09:25:45.946404528 +0000
Change: 2010-03-25 09:25:45.946404528 +0000


Note that while the number of blocks is much lower in the compression=on case, the Size of the file is still the same. Since you have no way to read or write the compressed data, it really shouldn't matter that the number of blocks on disk is different if you are using the POSIX layer interfaces (which is all you can do from userland anyway - and even all that the kernel parts of 'zfs send' can do).
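To make that concrete, here is a minimal sketch using only ordinary POSIX
calls (nothing ZFS specific) that reads a file and confirms the byte count
matches st_size; the output is identical for a file on a compression=on
dataset and one on a compression=off dataset:

/*
 * Sketch: read a file through the POSIX layer and confirm the number of
 * bytes returned matches st_size.  Userland only ever sees the
 * uncompressed data, whatever the on-disk block count is.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

int
main(int argc, char **argv)
{
	struct stat st;
	char buf[8192];
	long long total = 0;
	ssize_t n;
	int fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0 ||
	    fstat(fd, &st) != 0) {
		(void) fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return (1);
	}

	while ((n = read(fd, buf, sizeof (buf))) > 0)
		total += n;

	(void) printf("st_size = %lld, bytes read = %lld\n",
	    (long long)st.st_size, total);
	(void) close(fd);
	return (0);
}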

Now if you are doing something like multiplying out the number of blocks by the blocksize, I can see where things can be a problem. However, that would be a big problem with ZFS even if compression wasn't enabled, because the block size isn't fixed (512 bytes - 128k, in powers of 2).
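If a tool does need a space estimate, the safe arithmetic is st_blocks * 512
(st_blocks is counted in 512-byte units), not st_blocks * st_blksize.  A
quick sketch of the difference, using the same stat fields shown above:

/*
 * Sketch: the right and wrong way to turn struct stat fields into a
 * space estimate.  st_blocks is in 512-byte units, so multiplying it by
 * st_blksize (131072 on the ZFS datasets above) wildly overstates usage
 * whether or not compression is enabled.
 */
#include <stdio.h>
#include <sys/stat.h>

int
main(int argc, char **argv)
{
	struct stat st;

	if (argc != 2 || stat(argv[1], &st) != 0) {
		(void) fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return (1);
	}

	(void) printf("logical size (st_size):          %lld\n",
	    (long long)st.st_size);
	(void) printf("space used (st_blocks * 512):    %lld\n",
	    (long long)st.st_blocks * 512);
	(void) printf("bogus (st_blocks * st_blksize):  %lld\n",
	    (long long)st.st_blocks * st.st_blksize);
	return (0);
}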


Now compare this with the same hamlet.txt file stored on UFS:

stat /mnt/hamlet.txt
  File: `/mnt/hamlet.txt'
  Size: 211179          Blocks: 432        IO Block: 8192   regular file
Device: 2d80003h/47710211d      Inode: 4           Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2010-03-25 09:27:34.109975000 +0000
Modify: 2010-03-25 09:27:34.110729000 +0000
Change: 2010-03-25 09:27:34.110729000 +0000


So maybe I'm missing what the issue is for you; if so, can you try to explain it to me using an example?

Thanks.


--
Darren J Moffat