On 27/01/2010 09:44, Björn JACKE wrote:
On 2010-01-25 at 08:31 -0600 Mike Gerdts sent off:
You are missing the point.  Compression and dedup will make it so that
the blocks in the devices are not overwritten with zeroes.  The goal
is to overwrite the blocks so that a back-end storage device or
back-end virtualization platform can recognize that the blocks are not
in use and as such can reclaim the space.
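
For concreteness, here is a minimal sketch (not from the original post; the path and buffer size are illustrative assumptions) of the zero-fill technique Mike describes: write zero blocks into a scratch file until the filesystem is full, sync so the zeroes reach the block device, then delete the file.

    /*
     * Sketch only: fill free space with zeroes so a thin-provisioned
     * back end can recognise the blocks as unused and reclaim them.
     * With ZFS compression or dedup enabled, these zeroes never reach
     * the device -- which is the problem being discussed.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *path = "/tank/fs/.zerofill";  /* hypothetical path */
        static char buf[1 << 20];                 /* 1 MiB, zeroed by default */
        int fd;

        fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd == -1) {
            perror("open");
            return (1);
        }

        /* Write zeroes until the filesystem reports ENOSPC. */
        while (write(fd, buf, sizeof (buf)) > 0)
            ;

        (void) fsync(fd);    /* push the zero blocks down to the device */
        (void) close(fd);
        (void) unlink(path); /* return the space to the filesystem */
        return (0);
    }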

A filesystem that can do that quickly would have to implement something
like unwritten extents. A few days ago I experimented with creating and
allocating huge files on ZFS on top of OpenSolaris using fcntl and
F_ALLOCSP, which is basically the same thing that you want to do when you
zero out space. It takes ages because it actually writes zeroes to the
disk. A filesystem that supports unwritten extents finishes the job
immediately: no real zeroes are written to disk, but the extent is tagged
as unwritten (you get zeroes when you read it).
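
For reference, the experiment described above looks roughly like this (the Solaris-specific F_ALLOCSP command takes a struct flock describing the range to allocate; the file name and size here are illustrative assumptions):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct flock fl = { 0 };
        /* hypothetical file; compile with large-file support for >2 GB */
        int fd = open("/tank/fs/bigfile", O_RDWR | O_CREAT, 0600);

        if (fd == -1) {
            perror("open");
            return (1);
        }

        fl.l_whence = SEEK_SET;          /* l_start relative to file start */
        fl.l_start = 0;
        fl.l_len = 1024LL * 1024 * 1024; /* reserve 1 GiB */

        /*
         * On a filesystem with unwritten extents this returns almost
         * immediately; on ZFS it ends up writing real zeroes.
         */
        if (fcntl(fd, F_ALLOCSP, &fl) == -1)
            perror("fcntl(F_ALLOCSP)");

        (void) close(fd);
        return (0);
    }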

I don't see how that will help in this case.

In this case what matters isn't what the filesystem (ZFS) shows to someone read(2)ing from it, but what is actually on the block device, and thus what is seen by the block device driver "on the other side".

Unless the block device driver on the other side (which might be on the other end of an iSCSI or FCoE "link") knows about this tagging system, I don't see how that helps.

The whole point of the original question wasn't about consumers of ZFS, but about the case where ZFS is the consumer of block storage provided by something else that expects to see "zeros" on disk.

This thread is about "thin" provisioning *to* ZFS, not *on* it.

If I'm missing something, can you provide some pointers to documents I can read up on the scheme you suggested, so I can see how it would work?

--
Darren J Moffat
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss