Deleting the dedup'ed data won't work any better, since ZFS has to process it 
in much the same way as when you destroy a ZFS volume: every freed block still 
means a dedup-table update.

The only thing that really cuts through such a dataset is to destroy the 
underlying zpool. So maybe it would have been better to zfs send/recv all the 
data off that zpool and then destroy the pool entirely.
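A sketch of that approach, with placeholder pool names ("tank" and "backup" 
are assumptions, not names from this thread) — adjust to your own layout:

```shell
# Sketch: replicate the data elsewhere, then destroy the whole pool,
# which sidesteps walking the dedup table block by block.
# "tank" and "backup" are hypothetical pool names.

# 1. Take a recursive snapshot of everything on the affected pool.
zfs snapshot -r tank@migrate

# 2. Send a full replication stream to a dataset on another pool.
zfs send -R tank@migrate | zfs receive -d backup/tank

# 3. Once the copy is verified, drop the old pool in one stroke.
zpool destroy tank
```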

On the other hand, there are quite a number of calculations that will give you 
a good estimate of how much RAM you'll need for a given amount of dedup'ed 
data.
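One common rule of thumb puts each in-core dedup-table (DDT) entry at roughly 
320 bytes per unique block; the figures below (1 TiB of unique data, the 
default 128 KiB recordsize) are illustrative assumptions, not measurements 
from this thread:

```shell
# Rough DDT sizing under the ~320-bytes-per-entry rule of thumb.
data_bytes=$(( 1024 * 1024 * 1024 * 1024 ))  # assumed: 1 TiB of unique data
block_size=$(( 128 * 1024 ))                 # assumed: 128 KiB recordsize
ddt_entry=320                                # approx. bytes per DDT entry

blocks=$(( data_bytes / block_size ))
ram_bytes=$(( blocks * ddt_entry ))

echo "unique blocks:      $blocks"
echo "estimated DDT RAM:  $(( ram_bytes / 1024 / 1024 )) MiB"
```

So a pool like that would want on the order of 2.5 GiB of RAM just for the 
DDT — smaller recordsizes push the number up fast.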

cheers,
budy
-- 
This message posted from opensolaris.org
_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
