On Thu, Apr 28, 2011 at 3:50 PM, Edward Ned Harvey
<opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> When a block is scheduled to be written, system performs checksum, and looks
> for a matching entry in DDT in ARC/L2ARC.  In the event of an ARC/L2ARC

... which, if the entry is only in the L2ARC, means another read as
well. Most people will be using a fast SSD for L2ARC, but it's still
slower than RAM and worth mentioning.

> cache miss for a DDT entry which actually exists, the system will need to
> perform a number of small disk reads in order to fetch the DDT entry from
> disk.  Correct?  I figure at least one, probably more than one, read to
> locate the entry on disk, and then another read to actually read the entry.

I think it's safe to assume it will usually take multiple reads from
the pool devices, and those are random IOPS.
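
To make the read accounting concrete, here's a rough sketch (Python,
not actual ZFS code; the cache structures and the two-read miss cost
are my assumptions) of where the I/O lands on a DDT lookup:

def ddt_lookup(checksum, arc, l2arc, on_disk_ddt):
    """Return (entry_or_None, extra_reads_before_the_write_proceeds)."""
    if checksum in arc:                 # DDT entry cached in RAM: free
        return arc[checksum], 0
    if checksum in l2arc:               # entry on the L2ARC SSD: one read
        return l2arc[checksum], 1
    # Full cache miss: walk the on-disk DDT (a ZAP object).  That is
    # usually more than one random read -- interior blocks first, then
    # the leaf block that actually holds the entry.
    assumed_reads = 2                   # assumption: >= 1 interior + 1 leaf
    return on_disk_ddt.get(checksum), assumed_reads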

> After this, the system knows there is a checksum match between the block
> waiting to be written, and another block that's already on disk, and it
> could possibly have to do yet another read for verification, before it is
> able to finally do the write.  Right?

If verify is on, it'll read the on-disk block and compare it to the
to-be-written block. If they match, it will increment the refcount for
the on-disk block.
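
In other words, the decision for a block whose checksum matched looks
roughly like this (a sketch only; read_block() and write_block() are
hypothetical stand-ins for the real I/O path):

def dedup_write(new_data, entry, read_block, write_block, verify=True):
    """entry is the DDT entry whose checksum matched, or None."""
    if entry is None:
        write_block(new_data)        # no match anywhere: ordinary write
        return
    if verify:
        on_disk = read_block(entry)  # the extra read mentioned above
        if on_disk != new_data:      # checksum collision, data differ
            write_block(new_data)    # store it as a new, unique block
            return
    entry["refcnt"] += 1             # genuine dup: just bump the refcount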

If the zpool property dedupditto is set and the refcount for the
on-disk block exceeds the threshold, it will write another copy of the
block to disk.
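
And the dedupditto part, under the simplification that a single
threshold triggers one extra copy (I believe the actual code adds
further copies as the refcount crosses powers of the dedupditto value,
but the above is the gist):

def maybe_write_ditto(entry, dedupditto, write_block):
    # dedupditto == 0 means the property is unset / the feature is off.
    if dedupditto and entry["refcnt"] >= dedupditto:
        # Heavily-referenced block: keep an extra physical copy so that
        # losing the single on-disk copy doesn't damage every file that
        # references it.
        write_block(entry["data"])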

-B

-- 
Brandon High : bh...@freaks.com