james hughes wrote:

This is intended as a defense-in-depth measure, and also as a sufficient measure for customers who don't need full compliance with NIST-like requirements that mandate degaussing or physical destruction.

Govt, finance, healthcare all require the NIST overwrite...

Jim,

Do customers who are in the business of wiping out all deleted
data need the ability to wipe individual files, or could they
live with a periodic barrier-type operation: "wipe out all
the unallocated but previously used blocks"?

In other words, wouldn't it be sufficient, and maybe even more
practical, to keep deletion virtually unchanged and introduce
a barrier-type operation (an explicit command, which could be
just an option of a disk scrubber) that wipes out all the
deallocated but previously instantiated blocks in a given pool,
including remapped sectors, possibly only on hardware
configurations that support such an operation?

It seems to me that overwriting file data on every delete or
copy-on-write update would not only add a lot of complexity
to ZFS but would also likely hurt performance substantially.

-- Olaf


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
