On Wed, Apr 16, 2008 at 02:07:53PM -0700, Richard Elling wrote:
> 
> >>Personally, I'd estimate using du rather than ls.
> >>    
> >
> >They report the exact same number as far as I can tell, with the
> >caveat that Solaris ls -s returns the number of 512-byte blocks,
> >whereas GNU ls -s returns the number of 1024-byte blocks by default.
> >
> >  
> That is file-system dependent.  Some file systems have larger blocks
> and ls -s shows the size in blocks.  ZFS uses dynamic block sizes, but
> you knew that already... :-)
> -- richard
> 
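(For reference, the comparison behind my earlier ls/du remark was along
these lines; "gls" is just a stand-in for wherever GNU ls happens to be
installed, and somefile is a placeholder:

    /usr/bin/ls -s somefile   # Solaris ls -s: allocated 512-byte blocks
    gls -s somefile           # GNU ls -s: 1024-byte blocks by default
    du -k somefile            # allocated kilobytes, same units as GNU ls -s

so any factor of two is just the difference in block units.)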

OK, we are now clearly exposing my ignorance, so hopefully I can learn
something new about ZFS.

What is the distinction/relationship between recordsize (which, as I
understand it, is a fixed quantity for each ZFS dataset) and dynamic
block sizes?  Are blocks what get allocated for metadata, and records
what get allocated for data, i.e., the contents of files?
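For concreteness, the only related knob I know how to inspect is the
per-dataset recordsize property, e.g. (the dataset name below is just a
placeholder):

    zfs get recordsize tank/fs    # per-dataset property; 128K unless changed

but I do not see how to ask what block size(s) an individual file was
actually written with, which is part of what makes "dynamic block
sizes" confusing to me.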

What does it mean that blocks are compressed for a ZFS dataset with
"compression=off"? Is this equivalent to saying that ZFS metadata is
always compressed?
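In other words, which numbers should I be looking at here, e.g. (again,
the dataset name is just a placeholder):

    zfs get compression,compressratio tank/fs

and what does compressratio mean for a dataset with compression=off?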

Is there any ZFS documentation that shows by example exactly how to
interpret the various numbers from ls, du, df, and zfs
used/referenced/available/compressratio in the context of
compression={on,off}, possibly also referring to both sparse and
non-sparse files?
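To make that concrete, the kind of worked example I have in mind would
be something along these lines (the pool, dataset, and file names are
just placeholders):

    mkfile -n 100m /tank/fs/sparse    # -n creates a sparse file on Solaris
    ls -ls /tank/fs/sparse            # logical size plus 512-byte block count
    du -k /tank/fs/sparse             # allocated space in kilobytes
    df -k /tank/fs                    # filesystem-level used/available space
    zfs get used,referenced,available,compressratio tank/fs

together with an explanation of why each number comes out the way it
does with compression on and off.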

Thanks.


-- 
Stuart Anderson  [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson