Stuart Anderson wrote:
> On Wed, Apr 16, 2008 at 10:09:00AM -0700, Richard Elling wrote:
>   
>> Stuart Anderson wrote:
>>     
>>> On Tue, Apr 15, 2008 at 03:51:17PM -0700, Richard Elling wrote:
>>>
>>>> UTSL.  compressratio is the ratio of uncompressed bytes to compressed 
>>>> bytes.
>>>> http://cvs.opensolaris.org/source/search?q=ZFS_PROP_COMPRESSRATIO&defs=&refs=&path=zfs&hist=&project=%2Fonnv
>>>>
>>>> IMHO, you will (almost) never get the same number looking at bytes as you
>>>> get from counting blocks.
>>>>
>>> If I can't use /bin/ls to get an accurate measure of the number of
>>> compressed blocks used (-s) and the original number of uncompressed
>>> bytes (-l), what is a more accurate way to measure these?
>>>
>> ls -s should give the proper number of blocks used.
>> ls -l should give the proper file length.
>> Do not assume that compressed data in a block consumes the whole block.
>>     
>
> Not even on a pristine ZFS filesystem where just one file has been created?
>   

In theory, yes.  Blocks are compressed, not files.

>   
>>> As a gedanken experiment, what command(s) can I run to examine a compressed
>>> ZFS filesystem and determine how much space it will require to replicate
>>> to an uncompressed ZFS filesystem? I can add up the file sizes, e.g.,
>>> /bin/ls -lR | grep ^- | nawk '{SUM+=$5}END{print SUM}'
>>> but I would have thought there was a more efficient way using the already
>>> aggregated filesystem metadata via "/bin/df" or "zfs list" and the
>>> compressratio.
>>>
>> IMHO, this is a by-product of the dynamic nature of ZFS.
>>     
>
> Are you saying it can't be done except by adding up all the individual
> file sizes?
>   

I'm saying that adding up all of the individual file sizes, rounded up
to the smallest block size for the target file system, plus some estimate
of metadata space requirements, will be the most pessimistic estimate.
Metadata is also compressed and copied, by default.
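
That estimate could be sketched roughly like this (not from the thread;
`estimate` and BLKSZ=512 are illustrative assumptions, awk is used in
place of Solaris nawk, and metadata overhead is deliberately left out):

```shell
# Pessimistic replication estimate: each file's length ($5 in ls -l
# output) rounded up to the target filesystem's smallest block size.
BLKSZ=512
estimate() {
    find "$1" -type f -exec ls -l {} + |
        awk -v b="$BLKSZ" '{ sum += int(($5 + b - 1) / b) * b }
                           END { print sum + 0 }'
}
```

For example, `estimate /tank/fs` would print an upper-bound byte count
before metadata; the true requirement on the target should be lower.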

>> Personally, I'd estimate using du rather than ls.
>>     
>
> They report the exact same number as far as I can tell. With the caveat
> that Solaris ls -s returns the number of 512-byte blocks, whereas
> GNU ls -s returns the number of 1024-byte blocks by default.
>
That is file-system dependent.  Some file systems have larger blocks
and ls -s shows the size in blocks.  ZFS uses dynamic block sizes, but
you knew that already... :-)
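
To make the unit mismatch concrete, a small sketch (assuming du -k is
available and reports allocated space in 1024-byte units, as on both
Solaris and GNU coreutils):

```shell
# ls -s reports allocated blocks in platform-dependent units, so a
# portable script can normalize allocated size via du -k instead.
f=$(mktemp)
head -c 4096 /dev/zero > "$f"               # 4096 bytes of non-sparse data
alloc_kb=$(du -k "$f" | awk '{ print $1 }') # 1024-byte units
echo "$((alloc_kb * 1024)) bytes allocated"
rm -f "$f"
```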
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
