Darren said:
> Right, that is a very important issue.  Would a
> ZFS "scrub" framework do copy on write ?
> As you point out if it doesn't then we still need
> to do something about the old clear text blocks
> because strings(1) over the raw disk will show them.
> 
> I see the desire to have a knob that says "make this 
> encrypted now" but I personally believe that it is
> actually better if you can make this choice at the
> time you create the ZFS data set.

I'm not sure that gets rid of the problem at all.

If I have an existing filesystem that I want to encrypt, but I need to
create a new dataset to do so, I'm going to create my new, encrypted
dataset, then copy my data onto it, then (maybe) delete the old one.

If both datasets are in the same pool (which is likely), I'll still not
be able to securely erase the blocks that have all my cleartext data on
them. The only way to do the job properly would be to overwrite the
entire pool, which is likely to be pretty inconvenient in most cases.

So, how about some way to securely erase freed blocks?

It could be implemented as a one-off operation that acts on an entire
pool, e.g.
    zfs shred tank
which would walk the free block list and overwrite each free block with
random data some number of times.
Or it might be more useful to have it as a per-dataset option:
    zfs set shred=32 tank/secure
which could overwrite blocks with random data as they are freed.
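To make the shred-on-free idea concrete, here's a minimal sketch of the
overwrite step in Python. Everything here is hypothetical: the function
name, the idea of addressing the vdev as a raw file, and the extent
arguments are all made up for illustration; the real thing would live in
the ZFS SPA free path, not in userland.

```python
import os

def shred_extent(dev, offset, length, passes=32):
    """Hypothetical sketch: overwrite one freed extent with fresh
    random data `passes` times, as a shred=32 property might do."""
    with open(dev, "r+b", buffering=0) as f:
        for _ in range(passes):
            f.seek(offset)
            f.write(os.urandom(length))   # new random data each pass
            f.flush()
            os.fsync(f.fileno())          # force each pass to the media
```

Note the fsync per pass: without it, multiple passes could collapse into
one in the page cache, and only the last would ever reach the disk.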
I have no idea how expensive this might be (both in development time,
and in performance hit), but its use might be a bit wider than just
dealing with encryption and/or rekeying.

I guess that deletion of a snapshot might get a bit expensive, but maybe
there's some way that blocks awaiting shredding could be queued up and
dealt with at a lower priority...
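The deferred-shredding idea above could look something like the
following sketch, assuming a simple producer/consumer split: the free
path just enqueues the extent, and a background worker does the
expensive overwrites later. All names here (free_block, shred_worker)
are invented for illustration, and real code would throttle the worker
rather than run it flat out.

```python
import os
import queue
import threading

# Hypothetical sketch: frees enqueue work; a background worker
# overwrites the extents at its own (lower-priority) pace.
shred_queue = queue.Queue()

def free_block(dev, offset, length):
    """Called when a block is freed: defer the overwrite."""
    shred_queue.put((dev, offset, length))

def shred_worker(passes=7):
    """Background worker: drain the queue, shredding each extent."""
    while True:
        item = shred_queue.get()
        if item is None:            # sentinel: shut down
            break
        dev, offset, length = item
        with open(dev, "r+b", buffering=0) as f:
            for _ in range(passes):
                f.seek(offset)
                f.write(os.urandom(length))
        shred_queue.task_done()
```

With this split, deleting a snapshot would only pay the cost of the
enqueue; the actual I/O happens later, which is exactly the "lower
priority" behaviour suggested above.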

Steve.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss